Evaluating the Effects of High-Throughput Structural Neuroimaging Predictors on Whole-Brain Functional Connectome Outcomes via Network-Based Vector-on-Matrix Regression

Tong Lu, Yuan Zhang, Vince Lyzinski, Chuan Bi, Peter Kochunov, Elliot Hong, Shuo Chen

The joint analysis of multimodal neuroimaging data is critical in the field of brain research because it reveals complex interactive relationships between neurobiological structures and functions. In this study, we focus on investigating the effects of structural imaging (SI) features, including white matter micro-structure integrity (WMMI) and cortical thickness, on the whole-brain functional connectome (FC) network. To achieve this goal, we propose a network-based vector-on-matrix regression model to characterize the FC-SI association patterns. We have developed a novel multi-level dense bipartite and clique subgraph extraction method to identify which subsets of spatially specific SI features intensively influence organized FC sub-networks. The proposed method can simultaneously identify highly correlated structural-connectomic association patterns and suppress false positive findings while handling millions of potential interactions. We apply our method to a multimodal neuroimaging dataset of 4,242 participants from the UK Biobank to evaluate the effects of whole-brain WMMI and cortical thickness on the resting-state FC. The results reveal that the WMMI on the corticospinal tracts and inferior cerebellar peduncle significantly affects functional connections of the sensorimotor, salience, and executive sub-networks, with an average correlation of 0.81 (p<0.001).

§ INTRODUCTION
Neuroimaging data play a fundamental role in deciphering the operations of the human brain, the most complex organ. These data come in various modalities, including magnetic resonance imaging (MRI), diffusion tensor imaging (DTI), and functional MRI (fMRI). Each modality reveals distinct aspects of the brain's structure and functionality. For example, MRI provides high-resolution images of the brain's structure, offering valuable physical information such as size, shape, and cortical thickness. DTI assesses the integrity of white matter microstructures by calculating fractional anisotropy. The fMRI data capture dynamic blood flow changes in different brain regions to measure localized neural activity and functional connections. In statistical analysis, neuroimaging data are commonly represented in two forms: vectors (e.g., a list of region-wise cortical thickness measures) and association matrices (e.g., functional connectivity strengths stored in a weighted adjacency matrix) <cit.>.
Instead of studying brain structural imaging (SI) and functional connectivity (FC) data separately, exploring their intricate interplay could significantly deepen our understanding of the brain, including its development and aging <cit.>. For example, brain regions connected by white matter tracts with higher fractional anisotropy are more likely to demonstrate strong FCs, which, in turn, can influence cognitive processes such as attention, memory, and decision-making. There exists little work on the joint analysis of multi-modal neuroimaging data despite its clear importance, possibly due to the challenge presented by ultra-high dimensionality and intertwined data structures. In conventional brain connectome studies, researchers frequently collect 10^5 FC measures across hundreds of brain regions and up to 10^4 SI measures, resulting in billions (10^9) of FC-SI pairs. This not only creates significant computational demands but also poses challenges for multiple-testing correction. Traditional correction methods like the false discovery rate (FDR) and family-wise error rate (FWER) often yield almost no supra-threshold FC-SI pairs, as demonstrated by extensive simulation studies. Moreover, FC and SI exhibit distinct data structures: a connectomic network organization and spatial dependence, respectively. A joint FC-SI analysis needs to incorporate these intertwined data structures into comprehensive statistical modeling, thus producing biologically plausible and interpretable results. Specifically, our goal in this work is to identify an array of SI variables that intrinsically influences a group of FCs within a brain connectome sub-network, rather than FCs randomly distributed across the whole-brain connectome; we refer to this as a systematic pattern of associations. These challenges underscore the necessity of developing a joint analysis method to address the complexity of multi-modal neuroimaging data. Recently, advanced statistical methods have been developed to jointly model two sets of neuroimaging features by leveraging techniques including regularization, low-rank, and projection models <cit.>. Many of these methods have been successfully applied to multi-modal imaging data analysis and yielded interesting findings <cit.>. These statistical methods can be broadly classified into two categories. The first category uses regularization-based methods <cit.>; a major limitation of these methods is that the sparsely selected associations fail to take into account the systematic network-level impacts of SIs on FC networks. The second category employs dimension reduction strategies, such as principal component analysis (PCA) <cit.>, which first projects both FCs and SIs onto a handful of top principal components and then performs regression analysis on these selected components. However, as an unsupervised dimension reduction technique, PCA-based analysis often extracts principal components of the outcomes and predictors that are only weakly associated with each other, thereby missing the truly associated FC-SI pairs.
Sparse canonical correlation analysis (sCCA) methods can be considered an integration of these two categories and have been widely used in neuroimaging studies <cit.>. Yet, sCCA methods usually focus on vector-to-vector association analysis, which may also overlook the systematic vector-to-network association patterns that are of particular interest in this work (i.e., the associations between the SI vector and an FC sub-connectome represented as a matrix). To bridge the methodological gap in modeling vector-to-matrix associations and incorporating latent network structures, we propose a new multi-level network association method (MOAT) to systematically investigate the FC-SI association patterns. <ref> presents an overview of the MOAT method, which is constructed based on a multi-level graph for structural-functional neuroimaging data. The first level is a bipartite graph that depicts the association patterns between the SI vector as predictors and vectorized FC outcomes, adjusted for other confounding covariates (<ref>(a)). Meanwhile, the second level is a complete unipartite graph that reconstructs the vectorized FCs back into a whole-brain connectome network. This multi-level structure enables the identification of subsets of SIs that systematically impact FC sub-networks. We have further developed computationally efficient algorithms to extract the multi-level sub-networks from the full graphs and have proposed a tailored network-based inference framework to individually test each sub-network with multiple-testing correction based on permutation tests. Our method is also compatible with the aforementioned existing methods (e.g., PCA, CCA). For example, applying CCA to the FCs and SIs in an extracted multi-level sub-network provides an estimate of association in the context of multiple regression. The contributions of this article are three-fold. First, we introduce MOAT, a novel method that can handle matrix-variate outcomes and vector-variate predictors. Compared to existing models for multivariate outcomes and multivariate predictors <cit.>, MOAT can further account for the network structure within the matrix outcomes and in the outcome-predictor association patterns. MOAT naturally suppresses most false positive associations because such associations are more likely to be sparsely distributed rather than gathered in organized sub-networks. Second, we develop new algorithms to extract these multi-level sub-networks. The computational load is low because we developed a tailored greedy peeling algorithm with multilinear complexity, making our approach compatible with the commonly used permutation tests that are often computationally intensive. Lastly, we propose a novel network-level inference framework, in which we utilize novel test statistics derived from the sizes and densities of the multi-level dense subgraphs. This inference framework leads to a simultaneous enhancement of both sensitivity and specificity by leveraging graph combinatorial theory. The rest of this paper is organized as follows. In Section 2, we formally define the multi-level network structure and present how MOAT performs network extraction together with the network-based inference method. In Section 3, we perform extensive simulation analyses for method validation and comparison. In Section 4, we apply MOAT to a real structure-function neuroimaging dataset from the UK Biobank with 4,242 participants to systematically investigate the FC-SI associations. We conclude with a discussion in Section 5.
§ OUR METHOD

§.§ Data structure and problem setup
We collect structural-functional neuroimaging data from D independent subjects, indexed as {1,…,D}. For each subject d: 1≤ d≤ D, we observe three sets of measurements:

* Independent variables: a vector of m SI measures X^(d)= (x_1^(d), …, x_m^(d))^T. This vector characterizes anatomical structures of the brain, such as white matter microstructure integrity measured by fractional anisotropy from DTI <cit.> and region-wise cortical thickness obtained from MRI <cit.>.

* Outcome variables: an adjacency matrix Y^(d)_n× n that stores pairwise FC measures between n brain regions. Each element y_ij^(d) of Y^(d) represents the strength of the functional connection between brain regions i and j of subject d, calculated from functional imaging data such as resting-state fMRI. Thanks to the Brainnetome Atlas <cit.>, researchers can align the FC brain region parcellations across different participants; thus, conveniently, the Y^(d) share a common node set. We model Y^(d) as the outcome variable due to the widely accepted view in neurology that brain structure determines neural function <cit.>.

* Confounding variables: η^(d)= (η_1^(d), …, η_P^(d))^T. These variables include profiling information such as age, sex, genetics, and environment that may potentially affect the brain functional connectome in complicated ways.

§.§ Multi-level graph representation
We explore the brain structural-functional relationship by considering the following regression model: for each subject d∈ [1, D],

g(y_ij^(d)) = θ_ij^0 + ∑_k=1^m β_(ij),k x_k^(d) + ∑_p=1^P α_ij^p η_p^(d),

where g(·) is a link function, θ_ij^0 is the intercept, β_(ij),k is the coefficient of the SI measure x_k, and α_ij^p is the coefficient of the nuisance covariate η_p <cit.>. The focal parameter of interest in the above regression model (<ref>) is β_(ij),k, where a nonzero coefficient β_(ij),k≠ 0 signifies an association between an SI measure x_k and the functional connection y_ij between brain regions i and j. Consequently, learning the set {β_(ij),k≠ 0} allows for the unveiling of brain-wide association patterns between SIs and FCs.

A multi-level graph model targeting {β_(ij),k≠ 0} associations. To facilitate downstream analysis, we let a matrix β = {β_(ij),k}_∀ ijk ∈ ℝ^{n(n-1)/2 × m} denote all SI-FC pair-wise associations <cit.>. We build the multi-level graph model based on this n(n-1)/2 × m matrix β. Specifically, at the first level, we define a bipartite graph B=(S,F;H) to represent the matrix β, where S={1,…,m} (i.e., |S|=m) constitutes the node set of SI measures; F={1,…,n(n-1)/2} (i.e., |F|=n(n-1)/2) constitutes the node set of FC measures; and H denotes the edge set. Each element h_(ij),k∈ H signifies a non-zero association between an FC and an SI (i.e., β_(ij),k≠ 0). We demonstrate the first-level bipartite graph B in the left panel of <ref>. The second level of the multi-level graph model is a classic graph model reflecting the whole-brain connectome network, denoted as G=(V;F), where V is the node set of brain regions with size |V|=n, while F is the edge set connecting brain regions with size |F|≤ n(n-1)/2. Noticeably, each node (i,j) in F can also be interpreted as an edge in the brain functional connectome network G=(V;F).
Thus, F denotes both (i) the node set of the bipartite graph B=(S,F;H) with F={f_(ij)}, where f_(ij) represents a node for the outcome Y_ij; and (ii) the edge set of G=(V;F) with F={f_i,j}, where f_i,j=1 indicates that brain regions i and j are connected. In light of the highly organized brain structures and functions, it is neurobiologically sensible to model {β_(ij),k≠ 0} as organized association patterns <cit.>. Specifically, we consider that a subset of brain structural predictors jointly influences connectome outcome variables within a functional subnetwork, which characterizes a plausible brain structure-function interaction <cit.>. Built upon this latent relationship pattern, we specify that {β_(ij),k≠ 0} predominantly concentrates within specific subgraphs denoted as {B_c}, where c=1, ⋯, C and B_c ⊂ B. For simplicity, we illustrate the case c=1 below. We specify B_1=(S_1,F_1) as a doubly-dense multi-level subgraph (see the dense-graph formulation studied in <cit.>). At the first level, a subset of SI predictors {X_k, k ∈ S_1} densely affects {Y_ij, (ij) ∈ F_1}:

Pr(β_(ij),k≠ 0 | (ij)∈ F_1, k ∈ S_1) ≫ Pr(β_(ij),k≠ 0 | (ij)∉ F_1 or k ∉ S_1).

At the second level, a connectomic subnetwork G_1=(V_1; F_1) is an edge-induced sub-clique, where the edge subset of interest is {(i,j): (ij)∈ F_1}. G_1 is also dense, reflecting that the SIs of S_1 are associated with connectomic edges organized in a network rather than sparsely and randomly distributed across the whole-brain connectome <cit.>. In <ref>, we demonstrate a doubly-dense multi-level subgraph B_1 with red-bold edges. Provided with {B_c}, we can express the overall multi-level graph as in (3) and (4) as follows:

Level 1 (bipartite) network: B=∪_c=1^C B_c ∪ B_0,  B_c=(S_c, F_c; H_c);

Level 2 (unipartite) network: G=∪_c=1^C G_c ∪ G_0,  G_c=(V_c; F_c),

where B_c and G_c are dense subgraphs in B and G respectively, and B_0 and G_0 are the remaining graphs. Each node set F_c in B_c corresponds to a subset of edges in the functional connectome G, which induces one or multiple cliques in G. For simplicity, we use G_c to denote the clique(s) for the corresponding B_c. If B is a random graph, then B = B_0 and B_c=∅ for all c=1,...,C. Similarly, if G is a random graph, then G_c=∅. Otherwise, G_c represents a connectome sub-network. In Figure <ref>, we demonstrate a graphical example of the multi-level structure of B_c and G_c when c=1. In summary, our multi-level network model assigns a small proportion of {β_(ij),k≠ 0} to structured subnetworks reflecting systematic FC-SI association patterns. Such patterns may be captured by neither shrinkage regression models nor clustering/biclustering methods.

§.§ Multi-level subnetwork estimation
In practice, neither {β_(ij),k≠ 0} nor {B_c} is known, and it is challenging to simultaneously handle billions of FC-SI associations and estimate {B_c} ({β_(ij),k≠ 0}) in one big model such as (<ref>) <cit.>. To alleviate the computational burden, we take a divide-and-conquer approach and run one regression for each k, recognizing that both θ and α may also differ across k. This strategy is commonly used in large-scale imaging and genetics data analysis <cit.>.
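To make this divide-and-conquer step concrete, the sketch below fits one ordinary least squares regression per (SI, FC) pair with confounders included and stores -log10(p) for the SI coefficient; quantities of this type serve as the inference measure a_(ij),k introduced just below. It is a minimal illustration with simulated arrays and an identity link; the dimensions, variable names, and the use of statsmodels are our own choices, not the authors' implementation.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
D, m, n = 200, 5, 10                    # subjects, SI measures, brain regions
n_fc = n * (n - 1) // 2                 # number of vectorized FC outcomes

X = rng.standard_normal((D, m))         # SI predictors
eta = rng.standard_normal((D, 2))       # confounders (e.g., age, sex)
Y = rng.standard_normal((D, n_fc))      # vectorized FC outcomes

A = np.zeros((m, n_fc))                 # inference matrix of -log10(p) values
for k in range(m):                      # one regression per (SI, FC) pair
    design = sm.add_constant(np.column_stack([X[:, k], eta]))
    for ij in range(n_fc):
        fit = sm.OLS(Y[:, ij], design).fit()
        A[k, ij] = -np.log10(fit.pvalues[1])   # p-value of the SI coefficient
```

With hundreds of SI measures and tens of thousands of FC outcomes, the inner loop would normally be vectorized or run in parallel; the point here is only the structure of the per-k regressions.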
Next, we extract the desired dense subgraphs {B_c} based on X^(d), Y^(d), and η^(d). Since {β_(ij),k≠ 0} are unknown, we compute an inference measure a_(ij),k as a surrogate for β_(ij),k: each a_(ij),k is produced by the statistical inference of a regression model for x_k and y_ij. For example, a_(ij),k can be the -log(p) value for β_(ij),k, where -log(p) is a widely used metric in high-dimensional data analysis, such as genome-wide association studies (GWAS) and neuroimaging analysis <cit.>. We now propose the following criterion for selecting (S_c, F_c):

max_{S_c ⊆ S, F_c ⊆ F, V_c ⊆ V} ∑_c=1^C [ ∑_{k ∈ S_c, (ij)∈ F_c} a_(ij),k / (|S_c| |V_c|(|V_c|-1)/2)^{λ_1/2} + ∑_{i,j∈ V_c, i<j} f_ij / |V_c|^{λ_2} ],

where λ_1 and λ_2 tune the impacts of the densities of B_c and G_c, respectively. For example, when λ_1=2, the first term becomes the familiar quantity of subgraph density in network analysis. We typically search λ_1 within the range (1,2). Empirically, setting λ_1=2 usually forces the B_c's into singletons, while setting λ_1 below 1 often leads to sparse B_c's. Likewise, we explore the parameter λ_2 within the same interval (1,2). Deviating from this range for λ_2, either higher or lower, yields results similar to those observed for λ_1. Here, we follow the convention in neuroimaging analysis and select λ_1 and λ_2 using the Kullback–Leibler (KL) divergence <cit.>. Detailed selection procedures are provided in Appendix A. Directly solving (<ref>) requires combinatorial computation. Therefore, we propose a greedy peeling algorithm as a fast approximation. Our algorithm extends the greedy algorithm for single-level bipartite subgraph extraction in <cit.> and <cit.>. We present a condensed version as Algorithm <ref> below and relegate the detailed step-by-step algorithm to Appendix B. For each multi-level subgraph B_c and G_c, Algorithm <ref> first initializes the node sets S_c and F_c with the nodes S and F of the original full graphs, respectively. It then iteratively removes the node with the smallest degree (say, τ∈ S_c or ϕ∈ F_c) from either S_c or F_c (see Line 5 of the algorithm). At the end of each iteration q, the updated node set F_c^(q) is used to construct the "level 2" subgraph G^(q)_c=(V_c; F_c), and the corresponding value of the objective function (<ref>) is recorded. This process of node removal and construction of the "level 2" graph continues iteratively until all nodes have been excluded from S_c or F_c, with termination determined by whichever node subset is exhausted first. Ultimately, the algorithm returns the dense subgraph B_c that maximizes (<ref>) among all {B_c^(q)} (see Line 14 of the algorithm). The computational complexity of Algorithm <ref> is 𝒪(𝒞|V|(|S|+|F|)), where 𝒞 depends on the number of grid-search points, and |V|=n, |S|=m, and |F|=n(n-1)/2 are the numbers of brain regions, SI measures, and FC measures, respectively.
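The sketch below conveys the peel-and-score loop behind Algorithm <ref> under our own simplifications: a single subgraph (C=1), the region set V_c induced directly by the retained FC nodes, and node degrees taken as weighted row/column sums of the inference matrix restricted to the current subgraph. Function and variable names are illustrative, and the scoring details are not the authors' exact implementation.

```python
import numpy as np
from itertools import combinations

def objective(A, pairs, S_c, F_c, lam1=1.25, lam2=1.5):
    """Single-subgraph version of the selection criterion: bipartite term plus clique term."""
    if not S_c or not F_c:
        return -np.inf
    V_c = {v for idx in F_c for v in pairs[idx]}        # regions induced by retained FC nodes
    n_vc = len(V_c)
    if n_vc < 2:
        return -np.inf
    sub = A[np.ix_(sorted(S_c), sorted(F_c))]
    term1 = sub.sum() / (len(S_c) * n_vc * (n_vc - 1) / 2) ** (lam1 / 2)
    term2 = len(F_c) / n_vc ** lam2                     # retained connectome edges among V_c
    return term1 + term2

def greedy_peeling(A, pairs, lam1=1.25, lam2=1.5):
    """Iteratively remove the lowest-degree node from either side; keep the best-scoring state."""
    m, n_fc = A.shape
    S_c, F_c = set(range(m)), set(range(n_fc))
    best_state, best_score = (set(S_c), set(F_c)), -np.inf
    while S_c and F_c:
        score = objective(A, pairs, S_c, F_c, lam1, lam2)
        if score > best_score:
            best_state, best_score = (set(S_c), set(F_c)), score
        sub = A[np.ix_(sorted(S_c), sorted(F_c))]
        si_deg = dict(zip(sorted(S_c), sub.sum(axis=1)))     # weighted degree of each SI node
        fc_deg = dict(zip(sorted(F_c), sub.sum(axis=0)))     # weighted degree of each FC node
        k_min = min(si_deg, key=si_deg.get)
        ij_min = min(fc_deg, key=fc_deg.get)
        if si_deg[k_min] <= fc_deg[ij_min]:
            S_c.discard(k_min)
        else:
            F_c.discard(ij_min)
    return best_state, best_score

# toy example: 10 regions -> 45 FC nodes, 6 SI measures, one planted block of strong associations
n = 10
pairs = list(combinations(range(n), 2))
rng = np.random.default_rng(1)
A = rng.exponential(0.5, size=(6, len(pairs)))
A[:2, :10] += 3.0
(S_hat, F_hat), score = greedy_peeling(A, pairs)
```

In the full method, this loop would be repeated to extract several subgraphs and embedded in a grid search over λ_1 and λ_2, which is where the factor 𝒞 in the stated complexity comes from.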
Additionally, Theorem <ref> confirms the consistency of multi-level subgraph detection. In essence, the solution to the objective function (<ref>) gives a consistent estimate of the true multi-level sub-network structure represented by B_c (the set of edge-induced sub-networks). As the sample size D→∞, the probability of an incorrect edge assignment for B_c approaches zero.

Theorem 1 (Consistency of subgraph detection). Let 𝐔^* ∈ℝ^{|S|× |F|} be a matrix storing the true edge membership in B, where each element u_(ij),k^*=1 if β_(ij),k≠0, and u_(ij),k^*=0 otherwise. Similarly, let 𝐔̂∈ℝ^{|S|× |F|} store the edge membership estimated by optimizing (<ref>), where each element û_(ij),k=1 if x_k∈Ŝ_c and y_ij∈F̂_c, and û_(ij),k=0 otherwise. Then, for an arbitrarily small ϵ, as the sample size D→∞, we have

ℙ(||𝐔^*-𝐔̂||_F<ϵ) → 1,

where ||·||_F denotes the Frobenius norm. The proof of Theorem <ref> is provided in Appendix B.2.

§.§ Reduced false positive findings by B_c
Compared to methods that individually select {β_(ij),k}, such as multiple testing approaches, our method selects nonzero β's via the dense FC-SI associated sub-networks B_c, which can drastically reduce false positive findings. Let {β̂_(ij),k} denote the set of estimated association parameters from a sample; then {β̂_(ij),k≠ 0 | β_(ij),k = 0} indicates false positive findings. Following common practice in neuroimaging and neurobiology <cit.>, we assume that false positive associations are randomly distributed in the brain space. The conventional approach using individual inference on {β̂_(ij),k} may select many false positives β̂_(ij),k≠ 0 | β_(ij),k = 0. In contrast, our method returns few false positives. The reason is demonstrated in the following lemma, which shows that false positives very rarely form dense subgraphs of moderate size.

Lemma 1. Assume that B_c is observed from a random multi-level binary graph with a bipartite graph B=(S,F;H) in Level 1 and a unipartite graph G=(V;F) in Level 2. Suppose that B_c is a multi-level subgraph that has: (1) edge density in B_c with ∑_{k ∈ S_c, (ij)∈ F_c} I(β̂_(ij),k≠ 0 | β_(ij),k = 0)/(|S_c||F_c|) ≥ γ_1∈(p_1, 1), where p_1=∑_{k ∈ S, (ij)∈ F} I(β̂_(ij),k≠ 0 | β_(ij),k = 0)/(|S||F|) is the proportion of false positive associations in B; (2) edge density in G_c with ∑_{i,j∈ V_c, i<j} I(β̂_(ij),k≠ 0 | β_(ij),k = 0)/(|V_c|(|V_c|-1)/2) ≥ γ_2∈(p_2, 1), where p_2=∑_{i,j∈ V, i<j} I(β̂_(ij),k≠ 0 | β_(ij),k = 0)/(|V|(|V|-1)/2) is the proportion of false positive associations in G. Furthermore, let m_0, n_0=Ω(max{m^ϵ, n^ϵ}) for some 0<ϵ<1, where Ω denotes a loose lower bound. Then for sufficiently large m and n with ζ(γ_1,p_1)m_0 ≥ 4log n(n-1), ζ(γ_1,p_1)n_0(n_0-1) ≥ 16log m, and ζ(γ_2,p_2)n_0 ≥ 4log n, we have

ℙ(|S_c| ≥ m_0, |F_c| ≥ n_0(n_0-1)/2, |V_c| ≥ n_0) ≤ 2 m n^2 (n-1) · exp(-1/8 ζ(γ_1, p_1) m_0 n_0 (n_0-1) - 1/4 ζ(γ_2, p_2) n_0^2),

where ζ(a, b)={1/(a-b)^2+1/[3(a-b)]}^{-1}. Lemma <ref> is proved in Appendix B. Lemma 1 states that the probability of identifying a multi-level subgraph B̂_c composed of false positive associations {β̂_(ij),k≠ 0 | β_(ij),k = 0} converges to 0 exponentially as the sizes and densities of the multi-level sub-network increase. In practice, the probability of a false positive network of reasonable size (e.g., |S_c|× |F_c|=10 × 10) and sound densities is less than 10^{-16}. It is very unlikely that false positive FC-SI associations {β̂_(ij),k≠ 0 | β_(ij),k = 0} would form a large and dense subgraph B_c.
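For intuition, one can plug illustrative numbers into this bound. The snippet below evaluates the right-hand side for hypothetical full-graph sizes and densities, assuming the parse ζ(a,b) = {1/(a-b)^2 + 1/[3(a-b)]}^{-1}; the chosen values are our own, tiny subgraph sizes may not satisfy the lemma's conditions, and this is only a back-of-the-envelope check of how quickly the bound shrinks, not the paper's exact calculation.

```python
import numpy as np

def zeta(a, b):
    # assumed parse of the lemma's zeta(a, b)
    return 1.0 / (1.0 / (a - b) ** 2 + 1.0 / (3.0 * (a - b)))

def fp_subgraph_bound(m, n, m0, n0, gamma1, p1, gamma2, p2):
    """Right-hand side of the lemma: prefactor times the exponential term."""
    expo = (-0.125 * zeta(gamma1, p1) * m0 * n0 * (n0 - 1)
            - 0.25 * zeta(gamma2, p2) * n0 ** 2)
    return 2 * m * n ** 2 * (n - 1) * np.exp(expo)

m, n = 500, 100            # hypothetical numbers of SI measures and brain regions
p1, p2 = 0.05, 0.05        # background false-positive densities
g1, g2 = 0.8, 0.8          # required densities of the candidate subgraph
for m0, n0 in [(5, 5), (10, 10), (20, 20)]:
    print(m0, n0, fp_subgraph_bound(m, n, m0, n0, g1, p1, g2, p2))
# the bound is vacuous (>1) for tiny subgraphs but collapses rapidly as (m0, n0) grow
```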
§.§ Inference for the extracted B̂_c
Recall from Section <ref> that our study aims to identify specific subsets of SIs and FCs that exhibit systematic association patterns encoded by B_c. Performing Algorithm <ref> returns a collection of such subgraphs B̂_c. Our next goal is to conduct network-level statistical inference to gauge the significance of each B̂_c with multiple-testing correction <cit.>. Roughly speaking, we assess the statistical significance of each B_c by testing:

ℍ_0: B_c is not a dense multi-level subgraph constituted by associated SI-FC pairs;
ℍ_a: B_c is a dense multi-level subgraph reflecting systematic FC-SI associations.

More precisely, under ℍ_0, the edges of B_c are randomly distributed among all possible pairs. Per Lemma <ref>, it is rare to observe a large and dense multi-level subgraph B_c under the null. Therefore, we can straightforwardly apply the permutation testing strategy commonly used in neuroimaging statistics to assess the significance of B̂_c while controlling the FWER <cit.>. However, our testing object is a multi-level subgraph, which differs from the voxel-based "clusters" commonly encountered in conventional cluster-extent inference, because the rareness of B̂_c is jointly determined by both the densities and the sizes of the dense bipartite graph and clique of B̂_c rather than a single measure of cluster extent (e.g., the number of voxels). To address this challenge, we propose a novel test statistic 𝒯(B̂_c), proportional to the upper bound of the probability of observing a clique of a certain size and density in a random graph, which appeared in (<ref>):

𝒯(B̂_c)=exp(-1/4 ζ(γ_1, p_1) |S_c||F_c| - 1/4 ζ(γ_2, p_2) |V_c|^2),

where ζ(a, b)={1/(a-b)^2+1/[3(a-b)]}^{-1}, γ_1=|H_c|/(|S_c||F_c|), p_1=∑_{k ∈ S, (ij)∈ F} I(β̂_(ij),k≠ 0)/(|S||F|), γ_2=|F_c|/(|V_c|(|V_c|-1)/2), and p_2=∑_{i,j∈V, i<j} I(β̂_(ij),k≠ 0)/(|V|(|V|-1)/2). We formally present our proposed network-based permutation test for the significance of each extracted B̂_c in Algorithm <ref>. The permutation procedure outlined in Algorithm <ref> effectively simulates the null distribution of the test statistic 𝒯(B̂_c). Therefore, the FWER can be controlled effectively, yielding a corrected p-value for each extracted B̂_c.

Evaluating the joint effect of multiple SIs on FCs. With each B̂_c=(S_c, F_c), we have a set of structural measures {X_k, k ∈ S_c} associated with functional measures {Y_ij, (ij) ∈ F_c}. However, this does not automatically provide the joint effect (i.e., ∑_{(ij) ∈ F_c, k ∈ S_c} β_(ij),k X_k) of the selected SIs on each selected FC measure. To assess the joint effect, we can adopt existing multivariate-to-multivariate analysis tools (e.g., CCA). Detailed procedures for applying CCA to B̂_c are provided in Appendix D. Alternatively, one can conduct low-rank regression on the outcomes and predictors in each B̂_c to estimate the final effect size <cit.>.

§ SIMULATION
In this simulation study, we probe whether MOAT can extract informative subgraphs {B̂_c} from the multi-level graphs B and G with high accuracy and replicability. We apply MOAT to finite-sample simulated data under various conditions (e.g., different sample sizes and effect sizes) and compare it with several commonly used biclustering methods and sCCA-based methods.
§.§ Synthetic data
We generate synthetic FC data 𝐘^(d)={y^(d)_ij}_{i<j≤ n} and SI data X^(d)=(x_1^(d), …, x_m^(d))^T from the following multivariate Gaussian distribution:

(X^(d), 𝐘^(d))^T ∼ N( (μ_X, μ_Y)^T, Σ ),  Σ = [ Σ_X,X  Σ_X,Y; Σ_Y,X  Σ_Y,Y ],

where (μ_X, μ_Y)^T is the partitioned mean vector of the SI and FC data, respectively, and Σ is the partitioned variance-covariance matrix. For simplicity, we set (μ_X, μ_Y)^T to the zero vector, representing normalized data, while the construction of Σ depends on two key factors: the multi-level network structure and the effect sizes (i.e., the FC-SI association strengths). Both factors are elaborated in the following paragraph. In this simulation, we consider 500 SIs and 4,950 FCs, where the FC measures are calculated based on a brain network with 100 regions, resulting in 100×99/2=4,950 pairwise connectivity values. To determine the network structure for Σ, we consider the following multi-level graph consisting of (i) a bipartite graph B=(S,F;H) depicting the FC-SI associations, where |S|=500 and |F|=4,950; and (ii) a unipartite graph G=(V;E) depicting the brain functional connectome, where |V|=100. Specifically, we generate two sub-networks within B, denoted as B_1 and B_2, characterized by higher FC-SI partial correlations ρ_1 and ρ_2 than the rest of B. B_1 consists of 40 SI measures and 435 FC measures, where the 435 FC measures collectively compose a functional connectome G_1 of 30 brain regions; B_2 consists of 60 SI measures and 190 FC measures, where the 190 FC measures collectively compose another functional connectome G_2 of 20 brain regions. For a visual representation of these two sub-networks, please refer to the graph illustration in <ref>. Built on this network architecture, we configure the covariance matrix Σ such that ρ_1,ρ_2>ρ_0 to emulate different effect sizes. Here, we set ρ_0=0.15 as the partial correlation of FC-SI edges outside of B_1 and B_2. Next, by correlating the FC and SI data simulated from (<ref>) using the specified (μ_X, μ_Y)^T and Σ, we obtain an FC-SI association matrix A_{500×4950}. A governs the edge variable H in the bipartite graph B=(S,F;H) via h_(ij),k=I(a_(ij),k>r), where r is a pre-selected threshold for correlation strength. Lastly, to assess MOAT performance under different settings, three configurations of (ρ_0,ρ_1,ρ_2; D) are simulated: (0.15, 0.55, 0.60; 200), (0.15, 0.60, 0.45; 300), and (0.15, 0.70, 0.40; 400), where D represents the sample size as defined previously. For each configuration, we simulate 500 repeated datasets {A^l}_{l∈(1,…,500)} to better assess the accuracy and replicability of MOAT.
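As a concrete, scaled-down illustration of this generative scheme, the block below draws SI and FC variables jointly from a Gaussian whose covariance carries one planted FC-SI block, then correlates the simulated data and thresholds to obtain the binary bipartite edges. The dimensions, the way the covariance blocks are filled (including the positive-definiteness repair), and the threshold r are illustrative choices, not the paper's exact simulation settings.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 300                                   # subjects
m, n = 50, 10                             # SI measures, brain regions
pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
n_fc = len(pairs)                         # 45 FC measures

# planted sub-network B_1: 5 SI measures x FC edges among 4 regions
S1 = set(range(5))
V1 = {0, 1, 2, 3}
F1 = {idx for idx, (i, j) in enumerate(pairs) if i in V1 and j in V1}

rho0, rho1 = 0.05, 0.45                   # background vs. planted cross-correlation
Sigma = np.eye(m + n_fc)
for k in range(m):
    for idx in range(n_fc):
        r = rho1 if (k in S1 and idx in F1) else rho0
        Sigma[k, m + idx] = Sigma[m + idx, k] = r

# repair positive definiteness of the hand-built covariance
w, V = np.linalg.eigh(Sigma)
Sigma = (V * np.clip(w, 1e-3, None)) @ V.T

Z = rng.multivariate_normal(np.zeros(m + n_fc), Sigma, size=D)
X, Y = Z[:, :m], Z[:, m:]

# FC-SI association matrix and thresholded bipartite edges H
A = np.abs(np.corrcoef(X.T, Y.T)[:m, m:])     # m x n_fc absolute correlations
H = A > 0.25                                  # illustrative threshold r
```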
§.§ Performance evaluation
For each simulated dataset, we apply MOAT to estimate the multi-level sub-networks B̂_c containing strong FC-SI associations and perform our proposed network-based permutation test outlined in Algorithm <ref> on A^l. Regarding B_c extraction and {β_(ij),k≠ 0} identification, we benchmark MOAT against several popular approaches, including (i) three biclustering methods that are commonly used for sub-network detection: Bipartite Spectral Graph Partitioning (BSGP) <cit.>, Information Theoretic Learning (ITL) <cit.>, and Factor Analysis for Bicluster Information Acquisition (FABIA) <cit.>; and (ii) two sCCA-based methods that identify and measure the associations between two sets of canonical/latent variables: the Large-Scale Sparse Kernel Canonical Correlation method proposed by <cit.>, and sCCA through a penalized matrix decomposition (sCCA-PMA) proposed by <cit.>. We evaluate each method's performance by assessing the deviation of the estimated B̂_c from the true B_c at both the node level and the edge level (i.e., β̂_(ij),k≠ 0 vs. true β_(ij),k≠ 0). Specifically, we consider the comparisons from the following three perspectives: SI variable selection, FC variable selection, and FC-SI pair selection. We use the true positive rate (TPR) and true negative rate (TNR) as the evaluation criteria for both node-level and edge-level deviations. The TPR is the proportion of FC/SI nodes or FC-SI edges in B_c that are recovered by B̂_c; the TNR is the proportion of FC/SI nodes or FC-SI edges in B∖B_c that are recovered by B∖B̂_c. <ref> provides a graphical overview of the performance of each method. Table 1 summarizes the performance of all methods under multiple settings. The TPR and TNR are determined by the accuracy of both sub-network extraction and network-level inference. In general, both MOAT and the biclustering-based methods recover sub-network patterns more accurately than the sCCA-based methods because the network structures of the FC-SI association patterns are better recognized. Under different settings, MOAT detects the target sub-networks with high sensitivity and few or no false-positive FC-SI edges, because the cost of removing a true positive association or including a false positive edge is very high, as regulated by the objective function (<ref>). The performance of the biclustering methods also improves with increased effect sizes, showing low false positive rates but medium-to-low sensitivity. In contrast, the sCCA-based methods are largely invariant to different effect sizes and may miss the underlying FC-SI association patterns due to noise. Overall, MOAT is robust to noise and sensitive to organized FC-SI association patterns. MOAT outperforms the comparable biclustering and sCCA methods under different settings, especially when systematic FC-SI association patterns are present. This superiority stems from MOAT's ability to accurately extract FC-SI association patterns through multi-level sub-network analysis and tailored sub-network-level inference.

§ STUDY OF FC-SI ASSOCIATIONS IN BRAIN CONNECTOME DATA

§.§ UK Biobank sample and neuroimaging data
We aim to investigate the systematic effects of certain structural brain imaging measures on the functional connectome using UK Biobank data <cit.>. The UK Biobank is a vast biomedical database with approximately half a million participants from the UK, in which a total of 40,923 healthy individuals were found to have usable resting-state fMRI (rs-fMRI) data that passed quality control <cit.>.
Among them, a subgroup of 4,242 individuals possessed complete data on the following three sets of measurements that we focus on in this study:

* 105 SI measures: we collected 105 SI variables, including 39 white matter integrity measures and 66 cortical thickness measures. White matter integrity reflects the overall health and coherence of brain white matter and was assessed by fractional anisotropy (FA) obtained from DTI data in this study. The DTI data were pre-processed using ENIGMA DTI protocols <cit.>, and white matter tracts were labeled based on the JHU ICBM DTI-81 Atlas <cit.>. A complete list of the 39 regional white matter tracts can be found in Appendix C.3. Cortical thickness measures, on the other hand, gauge the width of the gray matter of the human cortex and were obtained from T1 MRI and labeled based on the FreeSurfer atlas <cit.>.

* 30,135 FC measures: functional connectome data were obtained from rs-fMRI data based on the Brainnetome Atlas <cit.>. We first performed rs-fMRI preprocessing for all participants and then extracted the averaged time series of blood-oxygen-level-dependent (BOLD) signals from 246 functional brain regions, resulting in 246×245/2=30,135 region-pair FC measures. Details of imaging acquisition and fMRI preprocessing are provided in Appendix C.1.

* 4 confounding variables: we adjusted for four confounding variables, including age (years: 61.46 ± 7.40), sex (M/F: 2003/2239), educational level (years: 17.37 ± 3.92), and body mass index (BMI) (kg/m^2: 26.35 ± 4.30). These variables have been used in previous neuroimaging literature studying brain functional connectivity <cit.>.

§.§ Results
We applied MOAT to the multimodal imaging data from the 4,242 qualified UK Biobank participants. First, we obtained the FC-SI association inference matrix A_{105 × 30135}. Each entry in A is a_(ij),k=-log(p_(ij),k), where p_(ij),k represents the p-value for testing the association between the k-th SI measure and the FC outcome between brain regions i and j. Next, we imposed a hard-thresholding sparsity constraint by setting a_(ij),k=a_(ij),k I(a_(ij),k>ϵ) for some positive integer ϵ <cit.>. We then applied our proposed greedy peeling algorithm (Algorithm <ref>) to the inference matrix A, with tuning parameters λ_1=1.25 and λ_2=1.5 selected by the KL divergence with a mixed Bernoulli distribution based on random graphs B and G. Algorithm <ref> returned one multi-level sub-network B̂_1 ⊂ B. Lastly, we performed network-level statistical inference on B̂_1 using Algorithm <ref>. The testing results showed that the systematic association pattern of B̂_1 is statistically significant (p<0.0001). Specifically, the results show that B̂_1 comprises |S_1|=23 SI measures and |F_1|=1,316 FC outcomes, as highlighted in <ref>(b). Furthermore, the extracted F_1 unfolds into a dense clique Ĝ_1 ⊂ G consisting of |V_1|=79 regions, as illustrated in <ref>(e). The FC-SI pairs within the identified sub-network B̂_1 demonstrate significantly stronger associations than those outside the network, as evidenced by the high R^2 and t-statistics shown in <ref>(c-d).
The 23 extracted SI measures consist of 3 cortical thickness measures and 20 FAs: the three cortical thickness measures correspond to the mean thickness of the parahippocampal, superior temporal, and cuneus gyri, while among the 20 extracted FA measures, the top four with the strongest FC associations are CST-R (corticospinal tract, right hemisphere), CST-L (corticospinal tract, left hemisphere), ICP (inferior cerebellar peduncles), and FX (fornix). More detailed information about the remaining 16 FA measures can be found in Appendix E.3. <ref> (left panel) illustrates the names and spatial locations of the 20 selected FAs. The right panel in <ref> shows the spatial distributions of the within-B̂_1 brain regions (79 regions in total), which are predominantly located in six cortices: frontal, subcortical, temporal, parietal, insular, and limbic. Moreover, these regions cover several well-defined brain functional networks, including the temporo-frontal, somatomotor, ventral attention, frontoparietal, and (partial) default mode network (DMN). Overall, <ref> provides a 3D demonstration showcasing the systematic association patterns between the subsets of SIs and FCs revealed by MOAT. Notably, both the significantly associated FC-SI sub-network (B̂_1) and the brain functional sub-connectome (Ĝ_1) exhibit well-organized topological structures. We further applied CCA to the extracted sub-network B̂_1 to quantitatively measure the canonical associations among the FC-SI pairs within B̂_1. Results showed that the sample canonical correlations of the first three canonical variate pairs in B̂_1 were 0.81, 0.69, and 0.68, respectively. In contrast, we performed the sparse CCA proposed by <cit.> on the full graphs B and G, given the ultra-high dimensionality of the data. This yielded sample canonical correlations of 0.18, 0.15, and 0.14 for the first three canonical variate pairs, respectively. Notably, MOAT can better recognize the underlying large-scale FC-SI association patterns and thereby provide an improved estimate of the multivariate-to-multivariate association. In summary, the application of MOAT helps to unfold the complex yet systematic and strong interplay between subsets of structural and functional measures of the human brain. Our findings suggest that i) FC-SI associations are highly concentrated in a subset of SIs and FC sub-networks rather than exhibiting a whole-brain diffuse distribution pattern; ii) several FC sub-networks are primarily influenced by white matter integrity measures (refer to Table 2 in Appendix E.3 for possible mapping relationships); and iii) multiple SI measures jointly affect the overall FC outcomes based on the MOAT-guided CCA analysis. While, at a high level, our results align well with previous medical findings <cit.>, MOAT reveals more refined patterns with improved spatial specificity and biological interpretability.

§ DISCUSSION
Our newly developed approach, MOAT, offers a novel strategy to investigate the complex association patterns between multimodal neuroimaging data with matrix outcomes (FCs) and a vector of imaging predictors (SIs). MOAT deciphers the complex FC-SI association patterns in a multi-level graph structure, revealing the joint effect of a small set of SI predictors on FC sub-networks. The multi-level graph structure can effectively reduce the number of parameters while preserving the spatial specificity of FCs and SIs.
MOAT delivers findings as organized multi-level sub-networks, largely suppressing individual false positive FC-SI associations (see Lemma <ref> in Section <ref>). We developed computationally efficient algorithms to extract the multi-level sub-networks and further established the consistency of the MOAT method. In addition, we developed a tailored network-level inference approach to test the extracted multi-level sub-networks while controlling the FWER. Lastly, MOAT is compatible with existing multivariate-to-multivariate analysis tools (e.g., CCA). In our case study, we investigated the FC-SI associations based on a large sample and revealed systematic association patterns with neurological explanations. This may enhance our understanding of how brain structure and function work together during resting states and may lead to insights that can guide future cognitive and psychiatric therapy. However, since UK Biobank participants mainly consist of older Caucasian adults, our conclusions may be limited in generalizability. Further investigation and integrated analyses are required to gain a more comprehensive understanding of the FC-SI associations. The software package for MOAT is available at https://github.com/TongLu-bit/MultilayerNetworks-MOAT.

Declaration of interest: none.

Acknowledgments: Tong Lu and Shuo Chen were supported by the National Institutes of Health under Award Numbers 1DP1DA04896801, EB008432, and EB008281. Yuan Zhang was supported by the National Science Foundation under Award Number DMS-2311109.
http://arxiv.org/abs/2310.18533v1
{ "authors": [ "Tong Lu", "Yuan Zhang", "Vince Lyzinski", "Chuan Bi", "Peter Kochunov", "Elliot Hong", "Shuo Chen" ], "categories": [ "stat.ME", "q-bio.NC", "q-bio.QM", "stat.CO" ], "primary_category": "stat.ME", "published": "20231027231827", "title": "Evaluating the effects of high-throughput structural neuroimaging predictors on whole-brain functional connectome outcomes via network-based vector-on-matrix regression" }
1Department of Electrical Engineering, California Institute of Technology, Pasadena, CA, 91125, USA
2Department of Computer Science, The University of Maryland, College Park, MD 20742, USA
3Department of Electrical and Computer Engineering, Rice University, Houston, TX, 77005, USA
† The authors contributed equally to this work.
*[email protected], [email protected]

Image stacks provide invaluable 3D information in various biological and pathological imaging applications. Fourier ptychographic microscopy (FPM) enables reconstructing high-resolution, wide field-of-view image stacks without z-stack scanning, thus significantly accelerating image acquisition. However, existing FPM methods take tens of minutes to reconstruct and gigabytes of memory to store a high-resolution volumetric scene, impeding fast gigapixel-scale remote digital pathology. While deep learning approaches have been explored to address this challenge, existing methods poorly generalize to novel datasets and can produce unreliable hallucinations. This work presents FPM-INR, a compact and efficient framework that integrates physics-based optical models with implicit neural representations (INR) to represent and reconstruct FPM image stacks. FPM-INR is agnostic to system design or sample types and does not require external training data. In our demonstrated experiments, FPM-INR substantially outperforms traditional FPM algorithms with up to a 25-fold increase in speed and an 80-fold reduction in memory usage for continuous image stack representations.

§ INTRODUCTION
Computational microscopy models the forward propagation of a light field, from illumination and light-sample interaction to sensor measurement formation, and then computationally inverts this forward model to form an image. This fusion of optics and algorithms allows computational microscopy to offer substantial advantages over traditional brightfield microscopy. Computational microscopy has improved microscope resolution <cit.>, imaging speed <cit.>, cost <cit.>, and field-of-view <cit.>; has enabled quantitative phase retrieval <cit.>; and has unlocked new capabilities such as automatic aberration correction <cit.> and digital refocusing <cit.>. Computational microscopy is now widely used in biological <cit.>, clinical <cit.>, and pathological imaging <cit.>; non-invasive surface inspection <cit.>; and aberration metrology <cit.>. Fourier ptychographic microscopy (FPM), which enables wide field-of-view imaging, is one of the most successful and widely utilized computational microscopy techniques and has been extensively studied since 2013 <cit.>. One of the most important features of FPM is its ability to correct for aberrations, notably defocus, post-capture. Defocus aberration manifests when the region of interest within the specimen deviates from the front focal plane of the microscope objective lens. This deviation from the ideal focal point may arise from various factors, including the inclined disposition of the sample and sample unevenness across the region. With its digital refocusing capability, FPM can computationally reconstruct optical fields at distinct planes situated along the optical axis. Consequently, this functionality not only eliminates the need to perform physical re-scanning, but also facilitates sparse volumetric (z-stack) imaging. If the sample contents are distributed sparsely within the volume, then the sample can be approximated and reconstructed as a succession of 2D cross-sections <cit.>.
This approximation is valid for a range of digital pathology slide analyses such as those from fine needle biopsy aspirates <cit.> and brain tumor biopsies <cit.>. Laser illumination allows FPMs to acquire all the measurements required to form a high-resolution wide field-of-view volume within a second <cit.>. However, the computational demands of current FPM reconstruction algorithms remain a significant obstacle for high-throughput pathological imaging applications. Existing FPM algorithms reconstruct each slice of a z-stack image independently, solving a time-consuming optimization problem for each slice. As a result, reconstructing a high-resolution z-stack can take tens of minutes on a Graphics Processing Unit (GPU) (Nvidia RTX A6000), which is impractically slow for interactive pathology applications. Moreover, the z-stacks generated by existing FPM algorithms are high-dimensional data, leading to high storage and transmission costs. This inhibits the broader integration of FPM into digital pathology <cit.> and collaborative diagnosis <cit.>, where there is a growing need for remote diagnosis, inter-institutional data transfer, and compact and efficient data packaging. An attempt has been made with a deep learning method to tackle such challenges <cit.>, but it requires external training data and depends on system design and sample types (details in Section <ref>). In this work, we introduce a compact, computationally efficient, and physics-based framework for reconstructing and representing FPM image stacks, termed Fourier ptychographic microscopy with implicit neural representation (FPM-INR). FPM-INR combines implicit neural representations (INRs), efficient volume decomposition, GPU acceleration, and strategic optimization, to efficiently solve the FPM image stack reconstruction problem. The difference in data representations between conventional FPM and the proposed FPM-INR is particularly noteworthy. FPM generates a z-stack with the same architecture as a physical z-stack, i.e., a Cartesian volume of M × N × P voxels, where M and N represent the lateral pixel counts and P represents the z-gradation.In contrast, FPM-INR encapsulates the physical z-stack data into a compact feature volume coupled with the weights of a small neural network. In essence, the pattern and sparsity of the sample are efficiently captured by the novel parameter space of FPM-INR.FPM-INR leverages the known physics-based FPM forward model and is compatible with any FPM microscope without necessitating hardware modifications. In addition, it does not require any pre-training. In our demonstrated experiments, FPM-INR can reduce the reconstructed data volume by 80×, accelerate the reconstruction process by up to 25×, and generate image stacks with fewer artifacts.We outline and explain FPM-INR in Section <ref>. Experiments in Section <ref> validate our method, where we quantitatively compare the quality, time, and data storage performance of our method with the conventional FPM approach, and we demonstrate its applicability from a human blood smear sample to cytology imaging of thyroid gland lesions. Section <ref> summarizes the key features and concepts of our method and discusses implications to broader applications of FPM.§ RELATED WORK §.§ FPM Reconstruction FPM processing is typically performed with a combination of alternating projection algorithm and embedded pupil function recovery algorithm <cit.>. 
Some of the recent FPM developments center on improving reconstruction quality or adapting to challenging scenarios. To date, only a few of these developments have attempted to speed up the reconstruction process and/or alleviate the massive computation load in z-stack imaging. One proposed approach solves the FPM imaging problem through neural network modeling in a forward pass <cit.>. This method speeds up the FPM reconstruction by taking advantage of the GPU acceleration for 2D phase retrieval. However, adapting this method to z-stack imaging would simply include an additional loop to the reconstruction pipeline, which neither exploits the inherent anisotropic optical resolution nor reduces the data volume. Another type of attempt is through digital refocusing in a post-reconstruction manner. One proposed solution <cit.> is to digitally propagate the optical field after the FPM reconstruction to obtain focused images at different planes. If feasible, this would greatly simplify z-stack image generation. Unfortunately, this approach violates the physics principle of the FPM forward model and has been demonstrated to be problematic <cit.>. Deep learning has been explored in the context of post-reconstruction digital refocusing, where a deep neural network is trained with supervised learning to learn a prior over z-slices <cit.>. This method can reduce the image stack data volume and quickly generate images of different slices, but deep-learning-based methods generally have several limitations, including (a) a strict requirement of a large dataset with defocus distance values; (b) a computationally intensive training process; (c) the susceptibility to generalization challenges under unseen sample categories; (d) the reliance on a particular system design that the model is trained under, including factors like illumination patterns, numerical apertures (NA) of the objective lenses, and camera and magnification settings; (e) the restriction to a set of discrete z-planes. The constraints inherent to conventional deep learning methods pose significant issues for digital pathology applications, where even minor inaccuracies are unacceptable due to the critical nature of the context. §.§ Implicit Neural Representations The limitations of prior studies strongly indicate that a physics-based and fast FPM reconstruction technique with low data volume representation is highly desirable, but it is missing from the current state-of-the-art. We propose using implicit neural representation (INR) to address this gap. INR is a relatively new computational concept centered on mapping spatial coordinates to image pixel values with a multi-layer perceptron (MLP) model acting as a continuous mapping function <cit.>. This concept has been instrumental in the recent advances of computer vision, computer graphics, and generative artificial intelligence <cit.>. However, few studies have applied INR in the context of computational microscopy. A recent work <cit.> used INR in lensless microscopic imaging to map 2D spatial coordinates to 2D amplitude and phase with an embedded forward model. 
A concurrent work <cit.> applied INR to intensity diffraction tomography to achieve a continuous recovery of a volumetric refractive index map; the method was improved in a later work <cit.> by adding a learnable hash encoding layer to speed up the convergence of the algorithm. These works employ the MLP model as an encoder-decoder, which is computationally intensive. A more recent work <cit.> applied the MLP model as a decoder and trained convolutional neural networks as encoders to extract features from raw measurements. Their work achieved wrapping-free phase retrieval for 2D samples with fewer artifacts compared to conventional quantitative phase imaging techniques.

§ METHOD

§.§ General Framework
Our FPM-INR framework for image stack reconstruction is depicted in Fig. 1. The INR renders the high-resolution optical field from random initialization and is self-supervised by the FPM measurements through the physics-based forward model of FPM. First, the FPM optical system is modeled mathematically from illumination to detection. The oblique LED illumination on the sample can be approximated by a plane wave. The plane wave modulated by the complex sample function o(x,y;z) is then transferred to the pupil plane of the imaging system by an optical Fourier transform. At the pupil plane, the oblique-angle illumination is converted to lateral translations of the sample spectrum. By utilizing various illumination angles, both low and high spatial frequency components can be covered and captured. A set of raw measurements I_i(x,y;z) associated with different illumination angles can be obtained as the tube lens performs an inverse Fourier transform. The forward model can be explicitly expressed as:

I_i(x,y;z) = |ℱ^-1{O(k_x-k_x_i, k_y-k_y_i) P(k_x,k_y;z)}|^2,

where I_i(x,y;z) is the measurement from the i-th LED illumination; z indicates the defocus distance (from the sample to the front focal plane of the objective lens), which corresponds to the pre-defined quadratic defocus aberration added to the phase of the pupil function; ℱ^-1 is the inverse Fourier transform operator; O(k_x-k_x_i, k_y-k_y_i) is the spectrum of o(x,y;z) under the i-th LED illumination; P(k_x,k_y;z) is the pupil function; and k_x and k_y are spatial frequency coordinates. For simplicity, we start introducing our framework with a thin 2D sample. Our FPM-INR framework solves the problem by modeling the forward pass of FPM (Eq. (<ref>)). The mapping between the optical system and the physics-based forward model embedded in our framework is depicted in Fig. 1(a). The framework begins with the random initialization of the feature space volume. The feature vectors for each point are then taken as the input to two MLP models, each predicting the amplitude √(I(·)) or phase ϕ(·) of a high-resolution complex field √(I(·))exp(jϕ(·)). This high-resolution complex field can be considered an analog of the complex sample function. Illuminated by an oblique plane wave, this high-resolution complex field propagates through the objective lens and covers a part of the spectrum at the pupil plane, as highlighted by the green circular region in Fig. 1(a). The corresponding spectrum then forms an estimated measurement (f_i) through an inverse Fourier transform and a squared-magnitude operation. This resembles the functionality of the tube lens and the camera in the optical system. The optimization objective minimizes the difference (smooth L1 loss) between the captured raw measurements and the estimated measurements.
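To make the forward model of Eq. (1) concrete, the snippet below simulates one low-resolution intensity measurement from a high-resolution complex field: shift the object spectrum according to the i-th LED's illumination angle, crop it with a circular pupil carrying a quadratic defocus phase, and take the squared magnitude of the inverse Fourier transform, followed by a smooth L1 loss against a (here synthetic) captured image. The array sizes, pixel size, NA, and pixel-unit spectrum shift are illustrative assumptions, not the authors' exact code.

```python
import numpy as np

rng = np.random.default_rng(0)
N_hi, N_lo = 256, 64                  # high-res field size and low-res sensor size (pixels)
wavelength = 522e-9                   # 522 nm illumination
na_obj = 0.25                         # objective NA
dx_hi = 0.2e-6                        # assumed high-res pixel size at the sample plane

# a random complex "sample": amplitude in [0.5, 1], mild phase
obj = (0.5 + 0.5 * rng.random((N_hi, N_hi))) * np.exp(1j * 0.5 * rng.random((N_hi, N_hi)))
O = np.fft.fftshift(np.fft.fft2(obj))                 # object spectrum

# spatial-frequency grid of the cropped (low-res) pupil plane
fx = np.fft.fftshift(np.fft.fftfreq(N_lo, d=dx_hi * N_hi / N_lo))
FX, FY = np.meshgrid(fx, fx)
pupil = (FX**2 + FY**2 <= (na_obj / wavelength)**2).astype(complex)

def pupil_with_defocus(z):
    """Circular pupil with a quadratic defocus phase for defocus distance z."""
    return pupil * np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))

def forward_measurement(O, kx_px, ky_px, z=0.0):
    """Low-res intensity for one LED whose angle shifts the spectrum by (kx_px, ky_px) pixels."""
    cy, cx = N_hi // 2 + ky_px, N_hi // 2 + kx_px
    crop = O[cy - N_lo // 2: cy + N_lo // 2, cx - N_lo // 2: cx + N_lo // 2]
    field = np.fft.ifft2(np.fft.ifftshift(crop * pupil_with_defocus(z)))
    return np.abs(field)**2

I_est = forward_measurement(O, kx_px=10, ky_px=-5, z=5e-6)     # estimated measurement for one LED
I_meas = I_est + 0.01 * rng.standard_normal(I_est.shape)        # stand-in for a captured raw image
diff = np.abs(I_est - I_meas)
loss = np.mean(np.where(diff < 1.0, 0.5 * diff**2, diff - 0.5)) # smooth L1 loss
```

In FPM-INR, the high-resolution field fed into this forward model comes from the neural representation rather than a stored array, and the loss is backpropagated to the feature volume and MLP weights.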
Subsequently, the weights of the MLP models and the parameters (M and u) of the feature space volume are updated through gradient descent. After iterating the above process until convergence, the high-resolution complex field is reconstructed. The z-dimension will be introduced in Section <ref>.

§.§ Feature Space Design
To model a volumetric sample, instead of explicitly storing each discrete 3D voxel with its complex value, we construct a feature volume V (Fig. 1(b)), where each voxel stores a learnable Q-channel feature vector: V_x_n,y_n,z_n∈ℝ^Q, n = 1,2,...,N. The size of this feature volume may be smaller than the size of the digitized sample, and we can use bilinear interpolation to obtain the feature for any continuous spatial coordinate. A compact MLP is trained to convert such a feature vector into the value at (x_n,y_n,z_n) in the field. As the optical resolution of FPM is spatially anisotropic, with the lateral (x- and y-axis) resolutions higher than the axial (z-axis) resolution, we adopt a low-rank-decomposed representation of V in practice. Specifically, we use a 1D vector u to succinctly represent the variations along the z-axis, while maintaining a full-rank matrix M to capture variations across x and y. Each location in u and M stores a Q-channel feature vector that can be updated during optimization. To obtain the feature at a point (x_n, y_n, z_n), we project (x_n, y_n) onto M and project z_n onto u to obtain the feature vectors M_x_n,y_n and u_z_n. As illustrated in Fig. 1(b), the Q-channel feature vector at location (x_n, y_n, z_n) in the 3D feature volume is the Hadamard product between the feature vectors M_x_n,y_n and u_z_n:

V_x_n,y_n,z_n = M_x_n,y_n ⊙ u_z_n,

where ⊙ denotes the Hadamard product, and M_x_n,y_n, u_z_n∈ℝ^Q. Effectively, our design is equivalent to approximating a 3D volume through a tensor product between a 2D matrix and a 1D vector. This approach falls under the tensor decomposition strategies <cit.> commonly used to parametrize a 3D volume represented by an INR, which can effectively enhance the INR's ability to represent signals while simultaneously reducing the number of required parameters. Given a specific defocus distance z=z_n, we first obtain V_x_n,y_n,z_n, the Hadamard product between the feature vectors u_z_n and M_x_n,y_n. With this Q-channel feature vector as input, the MLP model has Q channels in its first layer. The MLP model consists of two layers, each followed by a ReLU activation function, and a final linear layer producing a single output value. To render a complex-valued high-resolution optical field, we use two real-valued MLPs with two feature space volumes, and these two MLPs produce the amplitude and phase parts of the complex output separately. The discretized pixel count of the feature plane M in each feature channel is one-sixteenth of that of the amplitude or phase outputs. The gap between the pixel counts is bridged by bilinear interpolation along the x- and y-axes. Our neural representation is highly compact, comprising only a few thousand parameters, which facilitates the acceleration of reconstruction.
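A minimal PyTorch-style sketch of the decomposed feature volume and the small decoding MLP is shown below. The channel count, plane resolution, number of z-anchors, and layer widths are illustrative assumptions; the actual FPM-INR implementation (available in the authors' repository) differs in such details.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureVolume(nn.Module):
    """Low-rank feature volume: a 2D feature plane M (x, y) and a 1D feature vector u (z)."""
    def __init__(self, q=32, plane_res=64, n_z=9):
        super().__init__()
        self.M = nn.Parameter(0.1 * torch.randn(1, q, plane_res, plane_res))  # lateral features
        self.u = nn.Parameter(0.1 * torch.randn(1, q, n_z, 1))                # axial features

    def forward(self, xy, z):
        """xy: (B, 2) in [-1, 1]; z: scalar in [-1, 1]. Returns (B, q) features."""
        grid = xy.view(1, -1, 1, 2)                                    # bilinear lookup in M
        m_feat = F.grid_sample(self.M, grid, align_corners=True)       # (1, q, B, 1)
        m_feat = m_feat.squeeze(0).squeeze(-1).t()                     # (B, q)
        zgrid = torch.tensor([[[[0.0, float(z)]]]])                    # linear lookup along u
        u_feat = F.grid_sample(self.u, zgrid, align_corners=True).view(1, -1)  # (1, q)
        return m_feat * u_feat                                         # Hadamard product V = M ⊙ u

class Decoder(nn.Module):
    """Small MLP mapping a q-channel feature to one real value (amplitude or phase)."""
    def __init__(self, q=32, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(q, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, feat):
        return self.net(feat)

# render one z-slice: separate feature volumes and decoders for amplitude and phase
feat_amp, feat_phase = FeatureVolume(), FeatureVolume()
dec_amp, dec_phase = Decoder(), Decoder()
ys, xs = torch.meshgrid(torch.linspace(-1, 1, 128), torch.linspace(-1, 1, 128), indexing="ij")
xy = torch.stack([xs.reshape(-1), ys.reshape(-1)], dim=-1)
with torch.no_grad():
    amp = dec_amp(feat_amp(xy, z=0.3)).view(128, 128)
    phase = dec_phase(feat_phase(xy, z=0.3)).view(128, 128)
    field = torch.polar(amp, phase)          # high-resolution complex field for this z-plane
```

Only M, u, and the MLP weights need to be stored after convergence, which is what makes the representation so compact relative to an explicit voxel stack.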
The limits of the defocus distance range are determined by the FPM digital refocusing maximum capacity. The extended depth of field for FPM can be influenced by many practical factors — including but not limited to the precision of LED position calibration, coherent area of the LED illumination, total synthetic numerical aperture, and the wavelength of the illumination light. As such, it is difficult to establish an analytical formula or even an empirical equation to quantify the digital refocusing capability of FPM. Therefore, the defocus distance range is generally assessed to be an empirical range of 3-6 times larger than the incoherent brightfield microscope depth of field, or sample thickness prior <cit.>. To numerically change defocus distances, the conventional FPM method associates the arbitrary defocus distance with the defocus aberration in the Fourier domain.To fulfill this functionality in our method without unnecessarily learning infinitely many z-slices, we perform interpolation along the z-axis when sampling from the feature space.Within the digital refocusing capacity [z_min,z_max], we first determine a few z-planes with uniform separations and initialize their feature representation in u. Each feature vector stored in u corresponds to a discretized point on the z-axis.For any continuous z value, we can linearly interpolate its two nearest discretized feature vectors on u to obtain its feature vector.As shown in Fig. 1(c), we select different z values for optimization at different epochs. At each odd number epoch, z_n values are selected uniformly corresponding to the discretization of u, and the resulting u_z_n is multiplied with the lateral feature vector M_x_n,y_n. The product of these is then sent to the MLP model. At each even number epoch, z_n values are selected randomly with the resulting u_z_n obtained through linear interpolation. This selection strategy avoids naively sampling infinitely many z-planes for optimization and speeds up reconstruction. Once the weights of MLP and the feature volume parameters are optimized, these data are fixed and can be saved as storage data for the sample.During model inference (Fig. 1(d)), the feature space can be continuously sampled to generate the image stack. Our experiments reported in Section <ref> provide more context to this consideration.§ RESULTS §.§ Proof of ConceptTo validate our proposed method, we used a human blood smear slide (Carolina Biological Supply Company, Wright's stain) as the initial test target. We tilted the slide at a 4-degree angle to the optical axis of the microscope. An LED array (Adafruit 32×32 LED matrix, 4 mm pitch) together with a 16-element LED ring was used for illumination. The illumination NA was matched with the objective lens' NA (Olympus PLN 10×/0.25NA). In total, 68 LEDs were used for sequential illuminations. We imaged at a center wavelength of 522 nm. The sample was placed 74 mm from the LED panel. A monochromatic camera (Allied Vision Prosilica GT 6400) with a pixel pitch of 3.45 microns was used. All these components were installed and customized on an Olympus IX51 inverted microscope body. For comparison, we captured brightfield images in the same setup with all LEDs lit. To avoid non-uniform illumination patterns, a piece of lens wiper (Kimtech Science) was placed between the LED array and the sample to help scatter the illumination. 
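As a concrete illustration of the low-rank feature volume and the continuous z-sampling described above, the following PyTorch-style sketch renders one field component (amplitude or phase) for an arbitrary defocus value; the tensor sizes, initialization, and class name are assumptions made for illustration only.

```python
import torch

class FPMINRField(torch.nn.Module):
    """Low-rank feature volume (M, u) with a compact MLP decoder (sketch)."""

    def __init__(self, Q=32, hw=512, n_planes=5, out_hw=2048):
        super().__init__()
        self.M = torch.nn.Parameter(torch.randn(1, Q, hw, hw) * 0.1)  # lateral plane
        self.u = torch.nn.Parameter(torch.randn(Q, n_planes) * 0.1)   # axial vector
        self.out_hw = out_hw
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(Q, 32), torch.nn.ReLU(),
            torch.nn.Linear(32, 32), torch.nn.ReLU(),
            torch.nn.Linear(32, 1))  # one MLP per output (amplitude or phase)

    def forward(self, z_frac):
        """Render one z-slice; z_frac in [0, n_planes - 1] may be non-integer."""
        # Linear interpolation of the two nearest discretized feature vectors on u
        z0 = int(z_frac)
        z1 = min(z0 + 1, self.u.shape[1] - 1)
        w = z_frac - z0
        u_z = (1 - w) * self.u[:, z0] + w * self.u[:, z1]              # (Q,)

        # Bilinear interpolation of M up to the output resolution
        M_up = torch.nn.functional.interpolate(
            self.M, size=(self.out_hw, self.out_hw),
            mode="bilinear", align_corners=False)                      # (1, Q, H, W)

        # Hadamard product between lateral and axial feature vectors: V = M ⊙ u
        V = M_up[0] * u_z[:, None, None]                               # (Q, H, W)

        # Decode each Q-channel feature vector to a scalar field value
        feats = V.permute(1, 2, 0).reshape(-1, V.shape[0])             # (H*W, Q)
        return self.mlp(feats).reshape(self.out_hw, self.out_hw)
```

A training loop following the selection strategy above would evaluate this renderer at the uniformly discretized z-planes of u on odd-numbered epochs and at randomly drawn continuous z values (interpolated on u) on even-numbered epochs, with two such models producing the amplitude and phase parts of the complex field.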
The image stack captured under the incoherent illumination was taken as the ground truth for our image stack from z=-20 μ m to z=20 μ m with a step of 0.25 μ m (161 layers in the z-stack). Fig. 2(a) presents some brightfield microscope images.Conventional FPM reconstruction algorithms have different variants <cit.>. The sequential gradient descent algorithm <cit.> is chosen for our comparison purpose as it is generally considered to be faster than the second-order methods (sequential Gauss-Netwon algorithm) <cit.> and the convex-base method (PhaseLift)<cit.>. For simplicity, we will refer to the sequential gradient descent algorithm as the "FPM algorithm" in the following text. To minimize aberration influence (except defocus aberration) on reconstruction quality and convergence speed, the central field of view of the camera was selected as the region of interest with 1024×1024 pixels. To make a fair comparison, GPU parallel computing was also implemented for the FPM algorithm. To guarantee consistent good convergence, we ran 25 iterations of the FPM algorithm for each z-plane. The FPM reconstructed images are shown in Fig. 2(b). The full stack image reconstruction is presented in Supplementary Video S1. As introduced in Section <ref>, FPM-INR employs a z-plane selection strategy. Here, z-planes with a uniform separation of 5 microns were selected as candidates for odd-number epochs, while three z-planes were randomly chosen for optimization at even-number epochs. In total, a number of 15 epochs were completed to establish good convergence. The Adam optimizer <cit.> was used with a learning rate of 10^-3 and a learning scheduler with a 10 times learning rate decay for every 6 epochs. The related images are shown in Fig. 2(c). Due to the tilted sample geometry (Fig. 2(d)), the sample content was focused on a continuum of z-planes, providing a good example to validate the feasibility of our method, with sample information distributed in every slice. The full stack image reconstruction from FPM-INR is also presented in Supplementary Video S1. To evaluate our reconstructed image stack quality, both visual inspection and quantitative error metrics were applied. In general, FPM and FPM-INR can obtain similar image stack quality. From Figs. 2(a1,b1,c1), the images at z=0 μ m plane showed consistent quality for the white blood cell and red blood cells. In addition, the L2 error maps were computed by comparing FPM and FPM-INR images with brightfield measurement. The error maps and metrics indicated that our FPM-INR algorithm performed slightly better than the FPM algorithm. Another example for images at z=18 μ m led to the same conclusion. Additionally, the FPM-INR image had fewer artifacts than the FPM result compared with the ground truth image via a visual inspection. To further establish a quantitative analysis for reconstruction quality, the L2 error and error map were calculated over the all-in-focus images over the image stack. The all-in-focus images were constructed by using the normal variance method in Refs. <cit.>. FPM-INR (L2 error: 1.41× 10^-3) still gave better image quality than the FPM algorithm (L2 error: 2.34× 10^-3). Although our goal is not to boost the image stack reconstruction quality, we did observe that the FPM-INR algorithm reduces artifacts, especially at large defocus distances. To benchmark the compression ratio and time performances of FPM-INR v.s. conventional FPM, the same set of data was used on the same GPU device (Nvidia RTX A6000). 
The data volume size generated by the FPM was presented in Fig. 3(a). The high-resolution image stack had a size of 2048 pixels along lateral axes, and 161 z-slices along z-axis. In total, this data volume had 644 megapixels with 4 bytes for each pixel. This adds up to 2576 MB for the human blood smear sample. In contrast, FPM-INR only needs to save the feature space parameters and model weights (Fig. 3(a)). The feature plane M had 512×512 pixels covering the xy plane, each storing a feature vector of Q=32 channels. The feature representation u along the z-axis is uniformly discretized by 5 (number of pre-defined z-planes), each storing a feature vector of Q=32 channels. Interpolation is used to enable continuous sampling on M and u. The feature parameters in total took up 32 MB in storage. The MLP model consisted of two non-linear layers and one linear layer with 32 neurons and 1 bias node. The number of weights can be calculated as (32+1)× 32 × 2 + (32+1)× 1 = 2145, which is equivalent to 8.4 KB. Therefore, the total storage needed for FPM-INR was about 32 MB. The compression ratio, defined by FPM data volume over FPM-INR storage volume, achieved a factor of 80.5. The above calculations are done for amplitude images; to include phase images, the data volume and data storage size will be doubled for both FPM and FPM-INR. We further examine the performance of FPM-INR and FPM at various patch sizes. Commonly, conventional FPM algorithms reconstructed square patches with sizes of 2^7, 2^8, 2^9, and 2^10 pixels along each lateral dimension. If the patch size is too small, the reconstruction may suffer from the lateral shift effect from oblique illumination at a large defocus plane (see Supplementary document). If the patch size is too large, the region of interest may exceed the coherent area of illumination which can be roughly estimated by the Van-Zernike-Cittert theorem <cit.>. This coherent area is not a hard limit, but it would violate the coherent FPM forward model gradually. In our evaluation, the patch sizes were chosen considering that the fast Fourier transform algorithm prefers the image dimension to be of powers of two.As shown in Fig. 3(b), the FPM-INR algorithm significantly outperformed the FPM algorithm in computational time with 9.8×, 11.8×, 7.5×, and 5.3× increase for patch sizes of 2^7, 2^8, 2^9, and 2^10, on the same GPU device. This was confirmed across five experiments, as depicted by the error bars in Fig. 3(b). In addition, a compression ratio of about 80 times can be consistently achieved across different patch sizes, as indicated by the circle area in Fig. 3(b). The inference speed was approximately 460 MB/s on Nvidia RTX A6000 GPU for reference. This model inference time can be negligible in practice and will be further reduced with the rapid advancement of GPU devices.§.§ Application to Digital Pathology Digital pathology is a growing application in clinical diagnosis and disease analysis. Cytology, also known as cytopathology, is a branch of diagnostic pathology that studies whole cells from bodily tissues and fluids. Our FPM-INR algorithm can further facilitate FPM digital pathology applications in these fields. Here we report a demonstration experiment where FPM-INR was used on a cytology specimen collected through thyroid fine needle aspiration. A fine needle aspiration biopsy Papanicolaou smear (pap smear) of papillary thyroid carcinoma was imaged by our system. Part of the data was obtained from Ref. <cit.>. 
The sample has a thickness of about 30 μ m (from -10 μ m to 20 μ m) and cell aggregations at different heights. In clinical diagnosis, pathologists need to evaluate cellular structural information and color staining contrast over the whole sample volume. Regular brightfield microscope takes a long time to scan the sample for discrete z-slices and it results in a huge data volume. This hinders efficient data collaboration and quick pathological analysis. FPM relieves the burden from the massive scanning duty but still suffers from a long reconstruction time and a tremendous data size. The proposed FPM-INR framework can substantially solve the current dilemma. The sample was imaged by a 20×/0.40NA objective lens with matched illumination NA using 145 LEDs, and the distance between the LED panel to the sample was 66 mm. A CCD camera (ON Semi KAI-29050, 5.5 μ m pixel pitch) was used to capture raw measurements. Similar to Section <ref>, the FPM algorithm was optimized for 121 z-slices, and FPM-INR was also implemented with the same set of hyperparameters: including the learning rate and scheduler, parameter initialization strategy, and the number of epochs. The differences were that in this case, the number of feature channels Q was set to be 24, six planes were uniformly selected in the odd epochs, and three planes were randomly selected in the even epochs. The image stack reconstructions of FPM and FPM-INR are presented in part in Fig. 4. The full image stack reconstructions are presented in Supplementary Video S2.In terms of the storage memory requirement, FPM-INR retains similar compression performance as in Section <ref>, achieving a data compression ratio of 80.5 on the thyroid gland lesion data across different patches.Using the same GPU,FPM-INR is 24.7×, 17.9×, 7.3×, and 5.0× faster than the FPM algorithm at patch sizes of 2^7, 2^8, 2^9, and 2^10.Figures (a1,b1) present the all-in-focus images reconstructed by FPM and FPM-INR, respectively. The white dashed lines are associated with xz-plane and yz-plane sub-figures. The sub-figures along the z-axis demonstrate that FPM-INR digital refocusing quality is slightly better than the FPM algorithm. Figures (a2,b2) are the zoom-in section of the yellow box. The red arrows in Figures (a3-a5,b3-b5) are examples of cells focused at various depths. The white arrows in Fig. 4 point out the artifacts in the FPM images, while the FPM-INR does not have any such artifacts in the corresponding regions. This observation is consistent with the experiment using the human blood smear slide in Section <ref>. Additional experiments were provided in the supplementary document.§ DISCUSSIONThe central challenges in high-throughput, high-resolution pathological imaging using FPM lie in the computational, storage, and bandwidth demands associated with reconstructing and transferring z-stacks.While deep learning methods, in principle, could address these challenges, existing approaches generalize poorly to new data and can produce hallucinations that violate physical constraints.In this study, we sidestep these issues by introducing FPM-INR, a compact, fast, and physics-informed FPM image stack reconstruction framework. In our demonstrated experiments with validation data including human blood smear and thyroid gland lesion pap smear specimens,the FPM-INR framework speeds up FPM reconstruction by up to 25× and compresses FPM z-stack data by 80×. 
Importantly, the image stack quality is also enhanced both qualitatively and quantitatively with fewer artifacts than the conventional FPM algorithm.While the FPM-INR framework draws inspiration from research on neural networks and deep learning, FPM-INR is physics-based, fully respects the physical model underlying the FPM measurement process, and only changes how we represent the z-stack data. FPM-INR does not merely treat the neural network as a black-box predictor, but rather leverages neural network's unique strengths in learning useful features and non-linear interpolation strategies based on the gradient-based feedback from data. Moreover, unlike deep learning methods which often require pre-training on external datasets with specific discrete z-planes,FPM-INR is broadly adaptable to any FPM setup, regardless of the hardware specifics like objective lens, LED numbers, camera pixel pitch, or image patch size. FPM-INR sidesteps the generalization issues that often plague purely data-driven deep learning approaches, especially in critical applications like healthcare.The key innovations behind FPM-INR hold numerous advantages: (a) The physics-based pipeline using INR significantly improves upon conventional methods, while avoiding artifacts and generalization issues commonly associated with deep learning methods. (b) The proposed method moves away from operating solely on the domain of physical space, which often involves anisotropic optical resolution, to operating on a feature space with efficient representation. This new paradigm allows for complex, high-resolution signals in the physical domain to be efficiently represented and recovered in the feature space, in conjunction with a physics-based inference process involving a compact neural network. (c) INR enables continuous representations compactly and efficiently, which is leveraged by FPM-INR to offer high-resolution sample visualization, and can further enable more streamlined pipelines for downstream tasks.Funding H.Z., S.L., M.L., and C.Y. would like to thank the Heritage Research Institute for the Advancement of Medicine and Science at Caltech (HMRI-15-09-01). B.Y.F. and C.A.M. were supported in part by the AFOSR Young Investigator Program Award no. FA9550-22-1-0208. Disclosure The authors declare no conflict of interest.Data Availability Statement Code will be available through https://github.com/hwzhou2020/FPM_INRGitHub. Data is available at https://doi.org/10.22002/7aer7-qhf77CaltechData.Supplemental Document See the supplement document and videos for supporting content and additional discussion.
Words, Subwords, and Morphemes: What Really Matters in the Surprisal-Reading Time Relationship?
==========================================================================
An important assumption that comes with using LLMs on psycholinguistic data has gone unverified. LLM-based predictions are based on subword tokenization, not decomposition of words into morphemes. Does that matter? We carefully test this by comparing surprisal estimates using orthographic, morphological, and BPE tokenization against reading time data. Our results replicate previous findings and provide evidence that in the aggregate, predictions using BPE tokenization do not suffer relative to morphological and orthographic segmentation. However, a finer-grained analysis points to potential issues with relying on BPE-based tokenization, as well as providing promising results involving morphologically-aware surprisal estimates and suggesting a new method for evaluating morphological prediction. § INTRODUCTION There is widespread consensus that human sentence processing includes word-level prediction <cit.>; see <cit.> for a review. A growing body of research is making use of language models as computational proxies for human prediction at the word level, including traditional n-gram models <cit.>, syntax-based models <cit.>, and more recent work that makes use of neural language models <cit.>. As an overall paradigm, research in this area generally correlates surprisal, computed using corpus-based probability estimates (-log P(word | context)), against measurable indices of human processing effort. These include measurement of processing activity using fMRI <cit.>, MEG <cit.>, EEG <cit.>, and reading times <cit.>. This correlation paradigm has produced useful insights about the role of prediction in language comprehension. In addition, correlations between language model surprisal and human processing activity have been taken to be an indication that language models are capturing relevant aspects of how human language works <cit.>.[These two kinds of results are important to distinguish. In the first case, human beings and their cognitive systems are the object of study. The second is an instance of what <cit.> referred to as “science of the artificial”, where in this case the constructed computational system is itself the object of study.] Lost in the shuffle of this research progress, however, is the question of what, exactly, “prediction at the word level” is supposed to mean. Within current linguistic theory, the very construct of “word” is receiving critical attention: albeit controversially, convincing arguments exist that the term word lacks a consistent, scientifically valid definition that holds consistently across the full range of human languages <cit.>; see <cit.> for an overview connecting these theoretical claims to psycholinguistics and neurolinguistics.
Even setting aside that theoretical debate, however, the move to psycholinguistics and neurolinguistics using large language models has generated a mismatch between the units analyzed in human studies — typically orthographic words — and the subword tokens over which LLMs operate via approaches like WordPiece and Byte-Pair Encoding <cit.>.To take an example, a typical LLM tokenization of the word decomposition using a widely used BPE tokenization <cit.> yields subword units dec, om, position.[We use the GPT2 implementation of BPE throughout.]In a typical human subjects experiment, measurements such as first-pass reading time would typically involve a region from the beginning of the whole word to its end. More generally, the measurement of activity for a word w in the human experimentation is related to language model predictions using w's subword units . How is a correlation computed between measurements over different units?The solution adopted by most researchers <cit.> is to compute surprisal separately for the s_i, and then approximate the model's surprisal for w as the sum of those individual subword surprisals. The logic behind this choice is that, if the linking hypothesis behind the work connects surprisal with cognitive effort, the effort for the entire word should be the sum of the effort on each of the parts <cit.>.[In principle one could be more sophisticated by calculating the uniqueness point within the word in a given lexicon <cit.>and only summing up assigned complexity values for wordpieces that precede that point.]This, however, leads to another question: is that a reasonable approximation, given that the subwords produced by LLM tokenization bear no clear correspondence to the subword decompositions in human processing?Consider again the word decomposition.A large body of theoretical and empirical work would suggest that to the extent subword effort takes place, it would involve morphological units, in this case de, compos(e), and (i)tion <cit.>.[We discuss another illustrative example in Appendix <ref> and Section <ref> provides details on the morphological segmenter we use in our experimentation.]The question we set out to answer in this paper, then, is whether the divergence between LLM subword tokenization and human morphological decomposition is something to worry about in computational psycholinguistics and computational neurolinguistics research. Operationalizing the question, would the sum-of-surprisals approach with morphologically valid units yield a better correspondence with human measurements than the standard approach using LLM subword tokens?We would argue that the result is important regardless of which way the experimentation goes.If morphological units turn out to be a significantlybetter fit for human measurements, then cognitive researchers using LLMs should be using them — which could potentially raise real challenges given the widespread use of off-the-shelf pretrained LLMs.If statistically-driven subword units work just as well, then we have checked an important, previously unchecked box in terms of validating their use.This work is very much in the spirit of <cit.>, who evaluated a “fleet” of language models across architectures, plus orthographic n-gram models, against eyetracking and self-paced reading data. However to our knowledge, this study is the first to consider the assumption that LLM-subwords can be used in lieu of morphological units. 
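To fix ideas, the sum-of-surprisals approximation can be written down in a few lines. The sketch below obtains GPT-2 BPE subword units with the Hugging Face tokenizer and accumulates their surprisals; the token_logprob function stands in for whatever language model is being evaluated and is an assumption of this illustration, as is the example morphological segmentation in the final comment.

```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

def word_surprisal(word, context, token_logprob):
    """Approximate the surprisal of `word` as the sum of its subword surprisals.

    token_logprob(piece, context) -> log2 P(piece | context) is assumed to be
    supplied by the model under evaluation (n-gram or neural).
    """
    pieces = tokenizer.tokenize(" " + word)     # BPE subword units
    total, ctx = 0.0, list(context)
    for piece in pieces:
        total += -token_logprob(piece, ctx)     # surprisal of one subword
        ctx.append(piece)                       # later pieces condition on earlier ones
    return total

# The same accumulation could instead run over morphological units
# (e.g., something like ["de", "compose", "tion"]), which is the
# comparison pursued in this paper.
```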
§ METHODSWe trained three n-gram models under different word segmentation methods and evaluated them against reading time data from psycholinguistic experiments conducted in English. Our choice of evaluation corpora and metrics are consistent with previous literature, such that we were only examining the effect of word segmentation without model architecture as a confound. Our implementation, along with instructions for accessing the associated data, is available at <https://github.com/sathvikn/dl-psych-tokenization/>.§.§ Language Models Each n-gram model was a 5-gram model trained on the publicly available section of the Corpus of Contemporary American English (COCA, <cit.>). The models were trained under KenLM <cit.> using modified Kneser-Ney smoothing <cit.>.As a control, we used a model trained on COCA which was trained to predict the next orthographic word without any subword tokenization.To test BPE-based tokenization, we used the Huggingface implementation of the GPT2 tokenizer <cit.> for each sentence in the corpus, and trained the n-gram model on individual tokens rather than words. Most current psycholinguistic work uses off-the-shelf GPT2 implementations and GPT2 (and variants) have been shown to be better fits to reading time data than larger models <cit.>. Finally, for a more linguistically informed approach to word segmentation, we trained an n-gram model based on the output of a morphological transducer <cit.> that is far more accurate than WordPiece tokenization at word and sentence-level morpheme segmentation tasks in a variety of languages, including English <cit.>.[This analyzer came in second in the SIGMORPHON competition. We chose it because the first place system's implementation was not publicly available.] The morphological transducer was based on an encoder-decoder architecture, which used a two-layer stacked LSTM as the encoder and performed greedy decoding.We judged the corpus too small for retraining a GPT-style model, and we did not train an LSTM because <cit.> conclusively showed 5-gram models are stronger predictors of results in broad-coverage psycholinguistic experiments.[In this study we used a publicly available subset of the COCA corpus for replicability, and it is less than 2/3 the size of the training dataset for the smallest orthographic GPT2 model in <cit.>. A near-term aim for future work is to replicate this experimentation with the full COCA corpus, which requires a license.] §.§ Psycholinguistic EvaluationOnce we trained the models, we computed their surprisal estimates for words in eyetracking and self-paced reading corpora and fit regression models evaluating surprisal as a predictor of the reading times from these corpora. §.§.§ CorporaWe used averaged eye movement data from the Dundee corpus <cit.> and self-paced reading times from the Natural Stories corpus <cit.> made available by <cit.>.Both corpora are representative of the material in COCA. In the Dundee corpus, each word's fixation time in milliseconds was averaged across 10 English-speaking participants reading newspaper articles. The Natural Stories corpus consists of sentences from narrative texts that were edited to include syntactic constructions that are rare in spoken and written English for psycholinguistic analysis. The self-paced reading times were recorded from 181 English speakers who read the texts as they were presented word-by-word. 
We used the per-word presentation times that were averaged across participants.§.§.§ Measuring Predictive Power of SurprisalTo compare the reaction times to the models, we computed the surprisal for each word under each n-gram model. The model trained on orthographic words generated a surprisal for each word, but since the BPE tokenizer and the morphological transducer used subword information, we tokenized the texts from the behavioral experiments and computed each token's surprisal. If a word was split into multiple tokens, its surprisal under the other two models was the sum of the tokens' individual surprisals.[We excluded data for certain words from our analysis following <cit.>. These were words preceding and following punctuation, words that contained non-alphabetical characters, and words that were out of vocabulary for the language models. If any token under a language model was not in its vocabulary, the entire word was excluded.] We then fit regression models predicting reading time based on surprisal.For each word segmentation method, we compared the per-token log likelihood (Δ LogLik) under two multiple linear regression models, following previous literature to quantify how much surprisal contributes to reading time prediction, independent of other predictors. One model used the control features of word length and log unigram frequency to predict reading times as a baseline model, and the other used these factors in conjunction with surprisal. If the regression model with surprisal-based features generated more accurate predictions, the difference between the log likelihoods would be above zero.Although Δ LogLik as the measure of predictive power is standard for this literature, we highlight two specific methodological details from <cit.>.First, the predictive power normalizes the regression models' aggregate log likelihood since we are comparing this metric on corpora with different sizes. Second, we used 10-fold cross validation to report Δ LogLik on a held-out test set. The training and testing data were consistent across all models for each fold. Reporting the value of a cross-validated regression model is important to ensure that the predictive power measures computed on the complete dataset are not the result of overfitting <cit.>. Due to spillover effects from previous words <cit.>, we also included the surprisal, length, and log frequency of the previous word as predictors in the the regression models for the Dundee corpus, and similarly for the previous three words for the Natural Stories corpus.[This difference arises because the corpora were used with different psycholinguistic tasks<cit.>; we report results for each corpus separately for this reason, and because the corpora have different sizes and material.]§ RESULTSAll our results show surprisal improves predictions of reading time relative to the control features, comparably to the 5-gram models' results in <cit.> and <cit.>.reports the difference in per-token log likelihood between the linear regression models predicting words' reading times using surprisal as a feature and the models which simply used length and log frequency as features. We also report a more conventional measure of effect size using Cohen's f^2 in Table <ref>. 
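For concreteness, the per-token ΔLogLik comparison reported here can be reproduced with standard regression tooling. The sketch below assumes a pandas data frame with one row per analyzed word and hypothetical column names; it shows the single-word spillover structure used for the eyetracking corpus, and the self-paced reading analysis simply adds further lagged predictors.

```python
import statsmodels.formula.api as smf

def delta_loglik_per_token(df):
    """Per-token log-likelihood gain from adding surprisal predictors.

    `df` is assumed to contain the reading-time measure ("rt"), control
    predictors ("length", "logfreq", plus spillover counterparts), and the
    summed surprisals ("surprisal", "prev_surprisal"). Column names are
    illustrative, not those of the released corpora.
    """
    base = smf.ols("rt ~ length + logfreq + prev_length + prev_logfreq",
                   data=df).fit()
    full = smf.ols("rt ~ length + logfreq + prev_length + prev_logfreq"
                   " + surprisal + prev_surprisal", data=df).fit()
    return (full.llf - base.llf) / len(df)
```

Under 10-fold cross validation, the same pair of models is fit on each training fold and the normalized difference is evaluated on the held-out fold.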
The surprisals of the previous and current word were statistically significant predictors of reading time for all models on all corpora (p < 0.001).[For Natural Stories, the tokens at w_i-3 and w_i-2 also had some predictive power (p < 0.01 and p < 0.1, respectively).]We also report Δ LogLik on a held-out test set using 10-fold cross validation (Figure <ref>). The training and testing data were consistent across all models for each fold. Reporting the value of a cross-validated regression model is important to ensure that the predictive power measures computed on the complete dataset are not the result of overfitting. We compared the distribution of these values under a Wilcoxon rank-sum test (Table <ref>). The predictive power of surprisal under the models using BPE and the morphological transducer's output did not show a statistically significant difference from the model using orthographic words. For the Natural Stories corpus, the predictive power of surprisal was lower than the Dundee corpus, which is expected since the Natural Stories corpus contained rare syntactic constructions.On the face of it, these results seem to show that LLM-style tokenization may not be an issue in psycholinguistic modeling. However, finer-grained analyses suggest otherwise. First, few words in the psycholinguistic corpora were split by the BPE tokenizer in the first place.As it turns out, the BPE tokenizer only split 5% of the tokens in the psycholinguistic corpora (11% when ignoring stopwords), compared to  25% and  44% respectively for the morphological analyzer (complete results in Table <ref>).Moreover, the standard linking theory for surprisal suggests that effort for the entire word should be the sum of the subword efforts <cit.>, and therefore that processing effort should increase incrementally with the number of units a word is segmented into.But this appears to be true only for the morphological tokenization: for BPE tokenization there is a sharp jump from low surprisal with unsplit words to essentially equal surprisal for words split into 2, 3, and 4 tokens (Figure <ref>).The data therefore suggest that surprisal based on BPE tokenization is less cognitively realistic than surprisal over morphological units. In addition, replicating the entire analysis separately for non-segmented and segmented words, we find that the predictive power of the BPE-based model is significantly worse for words that do get split by the tokenizer, and this is not true for the morpheme-based model (Figure <ref>).We conclude that, despite the aggregate results in Figure <ref>, LLM-style tokenization should be viewed with caution in cognitive investigations.§ CONCLUSIONS This study was a small, focused contribution that tackled an essential question previously unaddressed in psycholinguistics research that uses LLMs and their subword tokenizations. Previous work has demonstrated a linear relationship between LLM surprisal and human processing as measured by reading time, but there is good evidence that aspects of human processing are mediated by morphological units. 
For cognitive research, mightn't it be important to model surprisal using morphological units, rather than distributionally derived subword tokens that often deviate drastically from a word's morphological decomposition?On the one hand, our work offers the first comparison demonstrating that using distributionally derived rather than morphological subwords does not affect aggregate correlations between surprisal and reading times, a widely used behavioral measurement of human processing. On the other hand, given that psycholinguistics work is increasingly using proprietary pre-trained models with non-morphological subword tokenization, far beyond the scale available for academic model training, our finer-grained analysis indicates that a degree of caution is warranted for more fine-grained studies. Perhaps our most important take-away is that cognitive investigations require a careful look at the cognitive plausibility of the models' units of analysis.Our results also suggest new directions for more cognitively realistic models of prediction in language comprehension. We view the results from our finer-grained analysis as a step in this direction, and they also suggest going more deeply into the role of morphological segmentation on an item-by-item basis; for example, training an LLM on morphological units and evaluating it on diagnostics for morphosyntactic generalization.Our work here also introduces morphological surprisal computed automatically using a morphological segmenter, and validates its predictive power. This would be a natural fit for further work on morphological prediction at the neural level <cit.>, including looking at the role of morphemes in phoneme prediction <cit.>, and as a new representational level within integrative processing models that take phonological, word-level, and sentence-level contexts into account <cit.>. This implementation could be refined through inferring hierarchical structure over morphological units in the style of <cit.> to conduct larger-scale analyses.Finally, regarding broader theoretical discussion, we note that surprisal (as operationalized using an LLM or any other probability estimates) generally contributes to explanations at Marr’s “computational” level <cit.>.[As an interesting and promising exception, <cit.> take a step in the direction of the algorithmic/representational level by bringing memory considerations into the model.] Moreover, LLM predictions represent a black-box combination of categories of information that both theoretical and experimental considerations suggest are processed in distinct ways <cit.>.We would therefore argue that, despite their undeniable convenience and power, the widespread use of LLMs as probabilistic predictors deserves drastically more careful consideration than it has received if the field is to move in the direction of deeper insights into human representations and mechanisms in language processing. § LIMITATIONSThe study was limited to English, and it is possible different results might obtain in languages with other morphological structures. 
However, the morphological transducer can be trained on any language for which a suitable morphologically segmented corpus is available, and it has already been evaluated on a multilingual test suite <cit.>, so this is a promising topic for future work.This is an active area of research, especially because not all languages have the same notion of what counts as a “word.” Existing work <cit.> has evaluated predictions on a multilingual eyetracking corpus with some typological diversity, but still trains transformer language models on subword tokenization. More work is also needed to see if results vary across mono- and multi-morphemic words; see Appendix <ref> for an indication that LLM subword tokenizations can still be problematic at the level of individual predictions, even for words that do not include much morphological complexity. We also note that the publicly available version of COCA we used was preprocessed by <cit.>, and this may have led to some small discrepancies with the results from previous studies trained on other academically feasible datasets.Perhaps our most significant limitation was in using n-gram versus state-of-the-art LLM architectures for our comparison, which in principle may not generalize to the best models. We would strongly encourage those who are able to train LLMs at scale to consider offering models with morphologically valid segmentation, both to facilitate more extensive language model comparisons, and to support scientific studies involving morphological representations as articulated in Section <ref>. § ETHICAL CONSIDERATIONSAll data we used are publicly available. Human experimentation was approved by the institutions who conducted the research, including our own. The software we used was publicly available, and the trained morphological segmenter was distributed by the authors of the paper, so our implementation and data analyses do not require specialized computing hardware.§ ACKNOWLEDGEMENTSThis material is based upon work supported by ONR MURI Award N00014–18–1–2670.We would like to thank Silvan Wehrli for providing the English morphological segmentation model. John Hale, Tal Linzen, Brian Dillon, and Allyson Ettinger provided invaluable discussion as we formulated our research question, and Marine Carpuat, Utku Turk, and other members of the Computational Linguistics & Information Processing and Psycholinguistics groups at UMD provided insightful feedback on this work during various stages. Last, we thank the reviewers for suggesting improvements and clarifications to the paper, particulary comments and questions that motivated our finer-grained analysis. acl_natbib§ COMPARING WORD SEGMENTATION METHODSIn this illustrative example, the BPE-based tokenizer fails to split up most multimorphemic words. When it succeeds, the words are not segmented by morpheme (relegated, fringes). The morphological transducer is able to make cognitively plausible choices involving tense (relegates), possessives (its), and plurals (fringes). It also splits up more complex words into roots, prefixes, and suffixes (coverage, reporters, journalistic, community). 
Both tokenizers marked word boundaries in their output, although they are not shown in this example.In Tables <ref> and <ref> we report the number of words that were split by both the BPE tokenizer and the morphological segmenter in the psycholinguistic corpora.§ EXAMPLES OF SURPRISAL DIFFERENCES FOR MORPHEMES AND BPE TOKENS Figures <ref> and <ref> provide two illustrations of surprisal differences between subword segmentations.Note the major difference in the surprisal of bulb when summed over BPE tokens when compared to morphological units . In the sentence in Figure <ref>, the GPT2 tokenizer split tulips intoand did not split bulbs. It is reasonable for a human comprehender to expect the word bulb immediately after tulip since they would cooccur frequently in text, but it is less predictable after . This is reflected in the higher surprisal under the BPE-based model. To take a more morphologically complex example, this difference also occurs for carefully in Figure <ref>, which is not split up by the BPE tokenizer.The morphological transducer, on the other hand, split carefully into , producing a much lower surprisal. This suggests that further item-wise comparisons involving words with more morphologically relevant units may be worth investigating.§ STATISTICAL TESTING FOR PREDICTIVE POWER OF SURPRISAL§.§ Effect SizesIn Table <ref> we report effect sizes for the regression models trained on the full psycholinguistic corpora via the widely used Cohen's f^2. However, since our work was replicating a subliterature that almost exclusively uses Δ LogLik, we used that measurement in the main body of the paper.We refer readers to <cit.> for further statistical justification of that choice.§.§ Statistical Significance TestingFor the aggregate analysis (Figure  <ref>, we used a Wilcoxon rank-sum test to compute significance. We find no statistically significant difference between the Δ LogLik estimated for the folds for the two subword tokenization methods relative to predictions over orthographic words.
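The fold-level comparison reported just above can be carried out with a standard rank-sum test; the sketch below assumes two arrays holding the ten cross-validated ΔLogLik values for the tokenized and orthographic models.

```python
from scipy.stats import ranksums

def compare_predictive_power(dll_tokenized, dll_orthographic):
    """Wilcoxon rank-sum test on per-fold ΔLogLik values (one value per fold)."""
    statistic, p_value = ranksums(dll_tokenized, dll_orthographic)
    return statistic, p_value
```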
[Corresponding author : ][email protected] Institut Pprime, CNRS - ISAE-ENSMA - Université de Poitiers, 11 Bd. Marie et Pierre Curie, Site du Futuroscope, TSA 41123, 86073 Poitiers Cedex 9, France Institut Carnot IFPEN Transports Energie, IFP Energies nouvelles, 1-4 avenue de Bois-Préau, 92852 Rueil-Malmaison, France Univ. Lille, CNRS, ONERA, Arts et Métiers ParisTech, Centrale Lille, UMR 9014- LMFL- Laboratoire de Mécanique des fluides de Lille - Kampé de Feriet, F-59000 Lille, FrancePreprint submitted to Physical Review FluidsAn online Data Assimilation strategy based on the Ensemble Kalman Filter (EnKF) is used to improve the predictive capabilities of Large Eddy Simulation (LES) for the analysis of the turbulent flow in a plane channel, Re_τ≈ 550. The algorithm sequentially combines the LES prediction with high-fidelity, sparse instantaneous data obtained from a Direct Numerical Simulation (DNS). It is shown that the procedure provides an augmented state which exhibits higher accuracy than the LES model and it synchronizes with the time evolution of the high-fidelity DNS data if the hyperparameters governing the EnKF are properly chosen. In addition, the data-driven algorithm is able to improve the accuracy of the subgrid-scale model included in the LES, the Smagorinsky model, via the optimization of a free coefficient. However, while the online EnKF strategy is able to reduce the global error of the LES prediction, a discrepancy with the reference DNS data is still observed because of structural flaws of the subgrid-scale model used. Synchronization and optimization of Large Eddy Simulation using an online Ensemble Kalman Filter M. Meldi October 2023 ================================================================================================§ INTRODUCTIONAmong the state-of-the-art tools in Computational Fluid Dynamics (CFD) for the analysis of complex flow configurations, the Large Eddy Simulation (LES) <cit.> is arguably the most investigated strategy in the last decades. LES relies on the application of statistical hypothesis related to turbulence theory to filter out the smallest physical scales of motion, so that the number of degrees of freedom to be simulated is drastically reduced when compared with Direct Numerical Simulation. The effects of such filtered eddies and their interactions with the resolved flow are taken into account by a specific SubGrid-Scale (SGS) closure. One of the most interesting features of LES is that it can naturally represent the unstationary, three-dimensional features of the flow. This key property, which is not obtained by most of the closures used to simulate turbulent flows, is essential for example for the prediction of extreme events. These rare occurrences must be fully taken into account in industrial applications and they are observed in a large spectrum of applications, such as internal flows for the study of combustion cyclic variability <cit.>, non-cyclic phenomena <cit.> or direct spray injection and aerodynamics in transient combustion engines <cit.> and external flows for wind / urban engineering <cit.>. The representation of instantaneous features of the flow also exhibits a great potential for LES applications in the framework of Industry 4.0 <cit.>. Within this digital revolution, envisioned applications predict and control real configurations, usually referred to as physical twin, using a numerical counterpart, the digital twin <cit.>. 
Most studies in the literature for fluid mechanics couple the physical system with reduced-order models or low-rank CFD <cit.> and thus the communication and control is limited to statistical macro-features of the flow. Applications of LES in this context are potentially groundbreaking because the real-time coupling of a real flow with LES is consistent in terms of physical representation. Successful implementation of a LES-based digital twin could potentially anticipate extreme events via numerical simulation and prevent catastrophic occurrences for the physical twin. However, three barriers must be lifted to see the fruition of this futuristic application. First, computational resources required to perform LES are orders of magnitude larger than the real time of physical applications of industrial interest. While this barrier seems unbeatable, new technologies such as quantum computing <cit.> may provide a needed breakthrough in terms of power needed for extended digital twin applications. Second, low-rank CFD is affected by a bias associated with the turbulence / SGS closures which often interact with the discretization error as well as explicit/implicit filtering for LES. These non-linear interactions between error sources may severely impact the accuracy of the results as they are often very sensitive to the test case of investigation. Therefore, general guidelines for applications are elusive. Third, CFD and in particular LES is extremely sensitive to perturbations and uncertainty in the initial and boundary conditions. Such perturbations, which also interact with the discretization error and the SGS modeling, may produce significant instantaneous decorrelation of initially identical fields in very short times. The second and third barriers listed, namely the accuracy of turbulence closures and the possibility for scale-resolved CFD to follow with good correlation a physical flow, have been recently investigated using data-driven methods. Uncertainty Quantification techniques have been extensively used to improve the predictive features of LES <cit.> and, more recently, works in Data Assimilation <cit.> optimized the behavior of SGS modeling in different numerical solvers <cit.>. In particular, Mons et al. <cit.> have performed an advanced optimization of the Smagorinsky model <cit.>, one of the most used SGS closures in the literature, for the test case of the plane channel flow. In their work, the DA procedure relies on statistical features of the flow for optimization. While the results obtained significantly increase the global accuracy of the LES solver, this procedure is not fit for on-the-fly optimization in the framework of a digital twin. A number of DA works have also targeted numerical synchronization and reconstruction of turbulent instantaneous flows from limited data. Using DA formalism, this procedure can be referred to as state augmentation. Such studies have been relying on DNS <cit.> as well as LES <cit.>. The main conclusions that can be drawn by these studies is that the efficiency in the synchronization of the flow depends on the number and positioning of sensors, as well as on the DA technique used. Among the proposals in the literature, the Ensemble Kalman Filter <cit.>, which relies on an ensemble of numerical realizations to perform optimization and state reconstruction, appears to be a perfect candidate for this task. 
Thanks to its sequential features which allow to perform an instantaneous, on-the-fly update of the physical field, this tool shows potential for future integration in digital twins.The present work proposes an extensive analysis of an EnKF-based tool application to LES in terms of i) optimization of the SGS model and ii) state augmentation. The test case of investigation is the turbulent channel flow, which has already been analyzed using DA techniques <cit.>. The novel point here is that both the optimization and the state augmentation are performed on-the-fly, progressively informing the LES ensemble members with time-resolved DNS data which are sampled at a limited amount of sensors near the wall. The objective here is to assess the robustness of the procedure, both in terms of optimization as well as flow reconstruction, when spatial-temporal sparse data are used. The on-the-fly coupling of LES simulation and DNS data is performed via CONES <cit.>, a library developed by the team to perform online coupling between different solvers.The article is structured as follows. In section <ref>, the numerical tools used for the analysis are going to be presented and discussed. This includes the numerical LES solver, the EnKF methodology, and the platform CONES. In section <ref>, the test case and the set-up of the DA runs are going to be introduced. In section <ref>, the results of the optimization of the SGS model are discussed. In section <ref>, the global impact of the DA methodology over the instantaneous flow predicted and the correlation with the DNS data available is investigated. Finally, in section <ref> concluding remarks are drawn and future perspectives are investigated. § NUMERICAL TOOLSAll the numerical ingredients used to perform the present analysis are presented in this section. These tools include a description of the dynamic equations and the numerical solver used, details about the EnKF, and information about the platform CONES used to perform online DA. §.§ Dynamic equations and numerical solverThe Navier–Stokes equations for incompressible flows and Newtonian fluid can be formulated as: ∂ u_j/∂ x_j =0∂ u_i/∂ t + ∂ u_iu_j/∂ x_j =- 1/ρ∂ p/∂ x_i + ν∂^2u_i/∂ x_j ∂ x_j + f_i where 𝐮=[u_1, u_2, u_3] = [u_x, u_y, u_z] is the velocity field, ρ is the density, p is the pressure, ν is the kinematic viscosity and 𝐟=[f_1, f_2, f_3] is a volume forcing. Repetition over the index j is employed for the sake of conciseness. In the LES formalism, equations <ref> and <ref> are filtered to obtain a global reduction of the degrees of freedom of the physical system: ∂u_j/∂ x_j =0∂ ũ_i/∂ t + ∂ u_iu_j/∂ x_j =- 1/ρ∂ p/∂ x_i + ν∂^2u_i/∂ x_j ∂ x_j - ∂τ_ij/∂ x_j + f_i The tilde symbol stands for filtered variables and τ_ij=u_i u_j - u_i u_j is the subgrid scale stress tensor. In the Smagorinsky model <cit.>, the deviatoric part of τ_ij is modelled as an eddy viscosity effect: τ_ij - 1/3τ_kkδ_ij = -2 ν_sgsS_ij, ν_sgs = (C_S Δ)^2 √(2 S_ijS_ij) where S_ij = 1/2( ∂u_i/∂ x_j + ∂u_j/∂ x_i) is the rate-of-strain tensor of the resolved velocity field, Δ is the filter width and C_S is a model coefficient that can be selected by the user. Classical values found in the literature are C_S ∈ [0.1, 0.2]. This formulation, which is derived from the asymptotic turbulence theory by Kolmogorov, fails to provide an accurate prediction of the interactions between the resolved and filtered physical variables. 
The reason is that the SGS stress tensor in equation <ref> is inherently dissipative and affects all the simulated scales of the flow <cit.>. Despite these negative features, the direct and simple implementation of such a model made it a popular choice for most solvers.The numerical simulation of equations <ref> - <ref> is performed using the open-source code OpenFOAM <cit.>. This C++ library provides a finite volume <cit.> discretization of the dynamic equations and modules for turbulence / SGS closure are already implemented. The equations are resolved using a PISO loop <cit.> which employs a Poisson equation to iteratively obtain a solenoidal condition for the velocity field, starting from the prediction obtained by the resolution of the momentum equation <ref>. Second-order centered schemes have been used for the discretization of spatial derivatives. A second-order backward scheme has been used for the time advancement of the solution. The LES equations are closed using the classical Smagorinsky model previously introduced. The implementation in OpenFOAM relies on two model constants, the parameter C_k and the normalized dissipation parameter C_ε. The latter usually exhibit high sensitivity to turbulence production effects and lack of homogeneity of the flow <cit.>. In the case of turbulent equilibrium, such as in Kolmogorov theory, C_ε=const, and its value can be set by the user. OpenFOAM suggests a default value of C_ε=1.048, which is in the range of experimental and numerical findings. Within this framework, the connection between C_S and C_k is: C_S^2 = C_k √(C_k/C_ε) The LES filtering is performed implicitly using the grid resolution. The filter width Δ is thus locally proportional to the volume of each cell V_c (cube-root volume filter option in OpenFOAM) and more precisely Δ =√(V_c).§.§ Data Assimilation Data Assimilation <cit.> includes a large spectrum of data-driven techniques whose main goal is to obtain an augmented prediction of a random process investigated, combining different sources of information. The tools are usually grouped in two main categories. The variational approaches perform the DA strategy via an optimization problem. The sequential approaches usually rely on probabilistic approaches which are based on Bayes' theorem. This work will be performed using the Ensemble Kalman Filter <cit.>. This tool, which has been extensively used in meteorological applications in the last decades, has seen numerous recent applications for problems in fluid mechanics <cit.>. The most interesting feature of the present work is that the EnKF operates sequentially i.e. it can combine data in-streaming obtained from different sources. This key feature will be exploited for on-the-fly coupling of high-precision, localized DNS data with running LES calculations. §.§.§ Ensemble Kalman Filter (EnKF)The Kalman Filter (KF) is a well-known DA tool first introduced in 1960 by R.E. Kalman <cit.> to estimate an augmented system state from sparse external data, or observations. Both sources of information are affected by uncertainties, which are approximated to be Gaussian random variables. The augmented state is obtained by combining a set of observations and a state vector obtained via a model. In the present work, the physical quantity updated is the velocity field 𝐮, which is obtained via LES (the model). Observation is sampled at specific locations from a high-resolution simulation (DNS) and indicated as α. 
Corresponding sampled quantities at the same locations for the state vector are indicated as 𝐬 = 𝐇𝐮. 𝐇 is a projection matrix that maps the values of the model state to the observation space. Let us consider the time advancement of the model from the time step k to k+1 in the case observation is available for the latter time. The augmented state is obtained as: 𝐮_k+1^a = 𝐮_k+1^f + 𝐊_k+1(α_k+1-𝐬_k+1) The superscript f (forecast) represents the time advancement of the physical quantities by the model from time k to k+1. The superscripta (analysis) represents the final augmented state of the algorithm. The Kalman gain 𝐊 is obtained from manipulation of the error covariance matrix 𝐏=𝔼((𝐮-𝔼(𝐮))(𝐮-𝔼(𝐮))^T), which measures the correlations between the state vector and the observations. It takes into account the level of confidence in the model and in the observation, respectively, which is measured by the variance of the uncertainties affecting the two sources of information. More precisely, the model and observation uncertainties can be described by an unbiased Gaussian distribution with variances 𝐐_𝐤 and 𝐑_𝐤, respectively. The main drawback of the classical KF resides in the costly manipulations of the matrix 𝐏 and also the necessity to use linear models. The Ensemble Kalman Filter (EnKF) <cit.>, which is an advanced DA tool based on the KF, is extensively used in weather sciences <cit.>. It overcomes the aforementioned drawbacks by using the Monte Carlo method to estimate the error covariance matrix 𝐏 through the use of an ensemble of pseudo-random realizations.An ensemble of N_e physical states 𝐮, each of them described by N degrees of freedom, is advanced in time using a model ℳ, which can in this case be non-linear. A state matrix 𝐔 of size [N, N_e] is assembled at each analysis phase. Each column i = 1, ⋯, N_e of the state matrix represents a physical state 𝐮_i obtained by the i^th ensemble member. Considering the time advancement of the solution from the instant k to k+1 such as in equation <ref> for the KF, the EnKF provides an ensemble estimation of the error covariance matrix 𝐏 using the hypothesis of statistical independence of the members :𝐏 = Γ(Γ)^Twhere Γ is the anomaly matrix, which is derived from the state matrix 𝐔 of the ensemble members. It quantifies the deviation of the state vectors fromtheir ensemble means:Γ_k+1 = 𝐮_i,k+1^f-⟨𝐮⟩_k+1^f/√(N_e-1), ⟨𝐮⟩_k+1^f = 1/N_e∑_i = 1^N_e𝐮_i,k+1^f In order to obtain a well-posed mathematical problem, the array of N_o available observations is artificially perturbed to obtain N_e sets of values. To do so, a Gaussian noise based on the covariance matrix of the measurement error 𝐑_k+1 is added to the observation vector:α_i,k+1 = α_k+1 + 𝐞_i,k+1, with 𝐞_i,k+1𝒩(0, 𝐑_k+1)The model realizations and the observations are combined over the observation space using the projection matrix 𝐇:𝐬_i,k+1 = 𝐇𝐮_i,k+1^f These elements provide a closed form for the Kalman gain: 𝐊_k+1 = Γ_k+1(𝐒_k+1)^T [𝐒_k+1(𝐒_k+1)^T + 𝐑_k+1]^-1with𝐒_k+1= 𝐬_i,k+1-⟨𝐬⟩_k+1/√(N_e-1), ⟨𝐬⟩_k+1 = 1/N_e∑_i = 1^N_e𝐬_i,k+1In a limited ensemble size, 𝐑_k+1 is preferred to the anomaly matrices product of the errors 𝐄_k+1(𝐄_k+1)^T in equation <ref>. It provides a simplified algorithm and reduced computational cost <cit.>. 
Finally, the physical state predicted by each ensemble member is updated using the Kalman gain: 𝐮_i,k+1^a = 𝐮_i,k+1^f + 𝐊_k+1(α_i,k+1-𝐬_i,k+1) The approaches based on the EnKF can also simultaneously optimize the free parameters of the model to minimize the discrepancy between the model and the observation during the analysis phase. These parameters are usually assembled in an array referred to as θ. A straightforward strategy to perform such an optimization is the so-called extended state <cit.>. Here the EnKF problem is solved for a state vector 𝐮_𝐞𝐱𝐭 defined as:𝐮_𝐞𝐱𝐭 = [ 𝐮; θ ]The size of the extended state is now equal to N_ext=N + N_θ, where N_θ is the number of parameters to be optimized. This modification brings a negligible increase in computational costs if N_θ << N, and it simultaneously provides an updated state estimation and an optimized parametric description of the model at the end of the analysis phase.§.§.§ Inflation One of the major drawbacks of the Ensemble Kalman Filter is the fast collapse of the state matrix variability. The consequence of this unwanted reduction of the variability is the convergence of the state matrix towards a localized optimum, which is strongly tied to the prior state provided. If the latter is not accurate, the precision of the optimization via EnKF can be severely impacted. One can increase the global variability of the system and decrease the sampling errors by using a higher number of members in the ensemble, gaining accuracy in the prediction of the EnKF. However, this strategy is not conceivable for fluid dynamics applications, where computational costs preclude the usage of large ensembles. In fact, the number of members generally used for three-dimensional runs is around N_e ∈ [40,100] <cit.>, which is far from classical Monte Carlo convergence. This problem is usually mitigated by inflating the variance of the ensemble after the analysis phase. This can be easily obtained by increasing the discrepancy between each state vector 𝐮_i,k+1^a and the ensemble mean ⟨𝐮^a⟩ through algebraic operations driven by a coefficient λ. This procedure is referred to as multiplicative inflation. The way this procedure is performed can be deterministic or stochastic:deterministic 𝐮_i^a ⟶⟨𝐮^a⟩ + λ_i(𝐮_i^a-⟨𝐮^a⟩)withλ_i > 1stochastic 𝐮_i^a ⟶ (1+λ_i) 𝐮_i^awithλ_i ∼𝒩(0,σ) The deterministic implementation can be very efficient during the initial analysis phases of the calculation. Since it is applied to the discrepancy from the mean values of the ensemble, the process is quite stable, and higher values of λ can be used. Nonetheless, it is less efficient when the ensemble exhibits a strong collapse of the physical solution (𝐮_i^a-⟨𝐮^a⟩≈ 0). On the other hand, stochastic inflation is very useful to mitigate a fast collapse of the state matrix, allowing the algorithm to target a global optimum. The Gaussian distribution used to determine λ_i is usually truncated to avoid the generation of outliers, which could lead to the divergence of the EnKF.
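To make the analysis step concrete, a minimal Python sketch is reported below. It is purely illustrative: the array shapes, variable names and the use of NumPy are assumptions and do not correspond to the CONES implementation. It reproduces the analysis phase of equations <ref>-<ref>, followed by the stochastic multiplicative inflation just described.

import numpy as np

def enkf_analysis(U_f, obs, H, R, rng, lam_std=0.0):
    # U_f: forecast state matrix [N, Ne]; obs: observation vector [No]
    # H: projection matrix [No, N]; R: observation-error covariance [No, No]
    N, Ne = U_f.shape
    Gamma = (U_f - U_f.mean(axis=1, keepdims=True)) / np.sqrt(Ne - 1)   # anomaly matrix
    S_f = H @ U_f                                                       # members projected in observation space
    S = (S_f - S_f.mean(axis=1, keepdims=True)) / np.sqrt(Ne - 1)       # observation-space anomalies
    K = Gamma @ S.T @ np.linalg.inv(S @ S.T + R)                        # ensemble Kalman gain
    E = rng.multivariate_normal(np.zeros(len(obs)), R, size=Ne).T       # perturbations of the observation
    U_a = U_f + K @ (obs[:, None] + E - S_f)                            # analysis states
    if lam_std > 0.0:                                                   # stochastic multiplicative inflation
        lam = np.clip(rng.normal(0.0, lam_std, size=Ne), -2 * lam_std, 2 * lam_std)
        U_a = U_a * (1.0 + lam)[None, :]
    return U_a

In the extended-state formulation, the rows of U_f are simply augmented with the N_θ model parameters before calling the same routine, so that both the state and the parameters are updated during the analysis.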
Considering that physical correlations naturally decay with distance in continuous systems, the approximations used to determine an ensemble Kalman gain can lead to spurious effects on the analyzed state matrix for large domains. These effects can be responsible for critical problems such as unphysical solutions, which can lead to the divergence of the calculations. Again, these problems can be reduced by increasing the number of ensemble members, which is not a cost-efficient solution for applications involving CFD. Therefore, different strategies need to be employed to mitigate the effects of spurious correlations. The most common strategy is to operate on the coefficients of the EnKF that correlate variables calculated at points far from each other, where one would expect the physical phenomena to be completely decorrelated. Two possible strategies may be adopted to obtain this result <cit.>. Covariance localization directly operates on the coefficients of the error covariance matrix 𝐏_k+1^f, pre-multiplying them with a term that tends to zero as the physical distance between observation sensors and elements of the state increases. This process is mathematically performed using a coefficient-wise multiplication between the covariance matrix and a correction matrix referred to as 𝐋. This expression can be directly added to the algorithm without any structural modification. The localized Kalman gain becomes:[𝐏^f_k+1]_i,j[𝐋]_i,j⟶𝐊_k+1^loc = [𝐋]_i,j[𝐊_k+1]_i,jThe structure of the matrix 𝐋 must be set by the user. In fluid systems, and in particular for turbulence, the correlation decreases fast in space. Therefore, a commonly used structure for the localization matrix is an exponential decay form: 𝐋(i,j) = e^-Δ^2_i,j/lwhere Δ_i,j is the distance between the given observation sensor and the point of evaluation of the model (the center of the mesh element in CFD). l is a correlation length scale that can be tuned according to the local characteristics of the test case. Another way to localize the Kalman gain is to use physical localization. The principle is quite straightforward: instead of performing the EnKF on the entire physical domain, one can restrict the calculation to a clipped domain. The reduced space must contain the observation sensors. This strategy also has the advantage of reducing the number of degrees of freedom operating in the DA procedure, which can produce a significant gain in terms of computational resources required. Covariance localization is commonly used together with physical localization to avoid discontinuities of the updated physical state, in particular at the interface of the clipped domain. This strategy prevents potential divergence of the model runs. This method is very efficient in speeding up the calculation while simultaneously improving the stability of the calculation and the accuracy of the prediction for reduced ensemble sizes such as those currently usable for CFD-based studies <cit.>.The DA procedure used in this study is qualitatively shown in Fig. <ref> and a detailed algorithm of the EnKF (including state-of-the-art modifications) is provided in Alg. <ref>.§.§.§ CONESCoupling OpenFOAM with Numerical EnvironmentS (CONES) is a C++ library add-on to the open-source CFD software OpenFOAM. CONES allows OpenFOAM to exchange field data through MPI communications <cit.>.
The coupling of OpenFOAM with other numerical environments is operated by CWIPI (Coupling With Interpolation Parallel Interface), developed by CERFACS and ONERA <cit.>. CONES has been developed by the team in order to perform on-the-fly DA with OpenFOAM, which has been coupled with a tailored EnKF code for this purpose.The main advantages CONES provides to perform DA with OpenFOAM are:* Data Assimilation is performed online without stopping the CFD runs, which represent the ensemble members. The computational resources required to restart the simulations after an analysis phase are large, usually more than the total computational cost for the DA run if several analysis steps have to be performed.* Communication of large physical fields (arrays of millions of elements such as the velocity field) is performed rapidly and efficiently.* Compilation of additional functions is performed via the wmake routine in the user-dedicated library of OpenFOAM. * Coupling between codes is performed while preserving the original structure of the existing CFD solvers. Every CONES-related function is contained in a modified Pstream library (part of OpenFOAM); hence, data exchange is done at the end of the solver loop by calling specific functions, and the calculation loop remains unmodified.* Direct HPC communications are established between multiple processors, which handle partitions of the numerical simulations and the DA process. Data flow and exchanges between codes are summarized in Fig. <ref>. As CWIPI is based on the MPI library, both MPI and CWIPI environments have to be initialized when launching the calculation. Similarly, they have to be finalized at the end. Once the forecast step(s) of the EnKF algorithm are performed, the sampled data s_k+1 are interpolated for each member and transferred to the EnKF code for the analysis step. The entire velocity field and the studied parameters are also sent in order to perform the EnKF algorithm. CWIPI exchanges data through coincident meshes in CONES. However, in case the meshes are not coincident, the field data are interpolated automatically. This is an important feature for the potential use of multigrid-based DA algorithms in the future <cit.>. The observation is uploaded just before the analysis step. After the state vectors 𝐮_i,k+1^a have been updated, the information is sent back to each member to resume the forecast steps with the updated physical states and/or values of the model constants. The state matrix contains the velocity fields of all the members and the constant C_k of the turbulence model optimized in this study. Details about the optimization of this parameter will be provided in Sec. <ref>. The observation, containing velocities of the reference data for all available times, is stored in a single .txt file that is read at each analysis phase. The related computational cost is negligible compared to the calculation of the Kalman gain when performing the EnKF algorithm, as shown in appendix <ref>.§ TEST CASE AND SET-UP OF THE DA ANALYSIS §.§ Turbulent plane channel flow, Re_τ≈550The test case chosen to perform the DA analysis is the turbulent plane channel flow at Re_τ = u_τ h / ν = 546. Here u_τ = √(τ_w / ρ) is the friction velocity and τ_w is the shear stress at the wall. h is the half-height of the channel and ν is the kinematic viscosity.
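As a quick numerical illustration of the wall-unit normalization used in the remainder of this section, the following sketch computes the friction velocity and the viscous wall unit from the target friction Reynolds number and converts a grid spacing to wall units; the values of h and ν below are placeholders, not the actual simulation inputs.

# friction velocity and viscous wall unit from the target friction Reynolds number
Re_tau = 546.0
h = 1.0            # half-height of the channel (placeholder value)
nu = 1.0e-4        # kinematic viscosity (placeholder value)
u_tau = Re_tau * nu / h      # from Re_tau = u_tau * h / nu
delta_nu = nu / u_tau        # viscous wall unit, equal to h / Re_tau
dx = 0.05 * h                # example streamwise grid spacing
print(dx / delta_nu)         # the same spacing expressed in wall units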
This academic test case, which is driven by shear mechanisms at the wall and naturally excludes complex aspects associated with favorable/adverse mean pressure gradients <cit.>, is nonetheless problematic for LES <cit.>. Complex non-linear interactions occur between two main error sources, namely those associated with the numerical discretization and the SGS closure. These mechanisms are responsible for a very high sensitivity to relatively small variations in the grid discretization and in the SGS closure selected. Therefore, this test case is an excellent candidate to study the objectives presented in the introduction. Results obtained from the large-eddy simulations performed in this work will be compared with DNS data on the same test case previously produced by the research team <cit.>.The geometric features are shown in Fig. <ref>. The size of the domain investigated is 3 π h × 2h ×π h. x is the streamwise direction, y the wall-normal direction and z the spanwise direction.The top and bottom boundaries are no-slip walls. A periodic boundary condition is applied on the four lateral sides. A source term, already integrated within the solver of OpenFOAM, is included in the dynamic equations to preserve the global mass flow rate in time. More precisely, the source term targets the conservation of the bulk streamwise velocity u_b = ∭_V_D u_x dV^' / V_D, where V_D is the volume of the physical domain investigated. The targeted criterion used for all simulations is u_b = 0.899 u_c, where u_c is the mean streamwise velocity at the center of the channel obtained by the DNS. The kinematic viscosity ν is the same for the DNS and LES calculations. The bulk Reynolds number obtained by the DNS is equal to Re = 2hu_b/ν = 20124.A baseline LES is performed using the well-known Smagorinsky subgrid-scale model <cit.> (see Sec. <ref>). This simulation is run with the pimpleFoam solver of the OpenFOAM CFD library, a solver tailored for the simulation of incompressible turbulent flows using the PIMPLE algorithm. The grid is composed of 350 000 cells, whose details are reported in Tab. <ref> along with those of the reference DNS. The size of the grid elements is non-dimensionalized with respect to the viscous wall unit δ_ν = ν / u_τ. The superscript ⋆ is used when normalizations are performed using the u_τ calculated by the DNS. On the other hand, the superscript + is used when u_τ is obtained by each LES simulation. Δ x^⋆ and Δ z^⋆ are obtained using a uniform distribution. A geometric expansion is used to control the size of the elements Δ y in the wall-normal direction, in order to provide higher resolution at the wall. The sizes of the smallest element Δ y^⋆_1 at the wall and of the largest element Δ y^⋆_c at the centerline are reported. The size of the mesh elements used for the calculation of the baseline simulation is larger than typical values observed in LES for this case, which are Δ x^⋆≈ 50, Δ y^⋆_1 ≈ 1 and Δ z^⋆≈ 20 <cit.>. This choice was made in order to i) assess the capabilities of the DA method to provide an accurate state estimation and parametric inference even in under-resolved conditions and ii) obtain faster runs of the DA algorithm using a sufficiently large ensemble of simulations. The initial conditions for the baseline LES case were set using an interpolated field from a DNS solution. The simulation was carried out for a duration of 50 advective times, calculated as t_A = h / u_c, in order to dissipate the memory of the initial field. Then, averaged quantities have been calculated over a time window of 900 t_A.
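The mass-flow-rate forcing mentioned above can be summarized by the following hedged sketch; the function name, the explicit update and its application as a single uniform acceleration are illustrative assumptions, while OpenFOAM applies an equivalent source term internally within the solver.

import numpy as np

def bulk_velocity_forcing(u_x, cell_volumes, u_b_target, dt):
    # current bulk velocity: volume average of the streamwise component over the domain
    V_D = cell_volumes.sum()
    u_b = (u_x * cell_volumes).sum() / V_D
    # uniform acceleration that restores the target bulk velocity at the end of the step
    f_x = (u_b_target - u_b) / dt
    return u_x + dt * f_x, f_x

The returned f_x plays the role of a spatially uniform driving pressure gradient, adjusted at every time step so that u_b stays at 0.899 u_c.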
The time step for the advancement of the solution is constant and equal to Δ t = 0.02 t_A.Results from the baseline LES are now compared with the DNS and with additional reference DNS results freely available online for a very similar Re_τ <cit.>. Fig. <ref> shows the normalized mean streamwise velocity profile u^+ = u_x / u_τ. Averages (indicated with the overline) are performed in time as well as in the streamwise and spanwise directions, in order to obtain improved statistical convergence. One can see that the discrepancy between the LES prediction and the DNS results is significant. One of the key elements responsible for this lack of accuracy is the erroneous prediction of the shear stress at the wall τ_w and thus of the friction velocity u_τ. For this parameter, a discrepancy of 28 % with the DNS results is observed. The main source of error for the prediction of this quantity is related to the SGS closure used, for which ν_sgs does not correctly scale to zero approaching the wall, as shown in Fig. <ref>. A large discrepancy is also observed in the accuracy of the prediction of the components of the resolved Reynolds stress tensor u_i^' u_j^'. The quantities u_x^' u_x^'^+ = u_x^' u_x^' / u_τ^2, u_y^' u_y^'^+ = u_y^' u_y^' / u_τ^2, u_z^' u_z^'^+ = u_z^' u_z^' / u_τ^2 and u_x^' u_y^'^+ = u_x^' u_y^' / u_τ^2 are shown in Fig. <ref>. One can see that both the magnitude and the position of the peak are not accurately predicted. The results obtained via the baseline LES indicate that, using the numerical set-up described combined with the Smagorinsky SGS closure, an accurate prediction of the statistical moments of the flow field is not obtained. In subsection <ref>, the DA procedure used to improve the flow prediction using this LES setup is detailed. §.§ Data Assimilation strategyThe DA simulations performed in this work aim to provide instantaneous augmented states of the test case investigated. This objective will be achieved by coupling on-the-fly the numerical prediction of the LES solver with localized information sampled from the DNS reference. This strategy relies on three main ingredients:* The model, which provides a quasi-continuous description of the physical phenomenon investigated. In this analysis, the model is the LES setup presented in section <ref>.* The observation. Time-resolved samples of the instantaneous velocity field from the reference DNS are used for this purpose. The samples are collected over 10800 sensors in the physical domain for 0.48 ≤ y^+ ≤ 56.4, i.e. in the viscous sublayer, in the buffer region, and in the inertial range. Sampling in time is performed at a constant rate of Δ t_DA = 0.04 t_A.* The coupler. CONES is used to couple the incompressible OpenFOAM solver pimpleFoam with an EnKF algorithm, as presented in Sec. <ref>.The setup of the EnKF procedure is now detailed. The size [N_ext, N_e] of the state matrix 𝐔 is given by N_e=40 (number of ensemble members) and N_ext = N + N_θ. Here N_θ is the number of parameters optimized by the EnKF, and it is different for the two DA runs which will be presented in the following. N = 3 n_cells is three times the number of grid elements that are used in the DA procedure, because the degrees of freedom considered in the DA procedure are the three components of the velocity field for each of the n_cells mesh elements. The value of n_cells is directly tied to the physical localization performed by clipping the numerical domain analyzed. This procedure, which is illustrated in Fig.
<ref>, consists in excluding from the DA calculation the grid elements for 0.18 < y/h < 1.82. These elements are relatively far from the sensors, and therefore the risk of spurious correlations affecting the stability of the DA algorithm is high. In addition, the excluded domain represents around 56 % of the total number of cells used by the LES model. The computational gain for the calculation of the Kalman gain is also approximately 56%, as shown in Tab. <ref>. Covariance localization is applied as well, so that discontinuities associated with the physical clipping are smoothed out. The structure of the matrix L used for covariance localization is the one presented in equation <ref>, where the parameter l=0.175 in the streamwise and spanwise directions and l=0.000985 in the wall-normal direction.Observations are obtained from 408 sensors selected among the 10800 available. The constraint x ∈ [0.6π, 2.4 π], z ∈ [0.25π, 0.75 π] has been applied in the selection to take into account the different domain sizes for the LES and the DNS and to exclude potential problems emerging with the periodic boundary conditions. The location of the probes, which are indicated as red dots, is shown in Fig. <ref>. As previously discussed, the three components of the instantaneous velocity field are sampled. However, in the following configurations, the observation array is composed of 408 samples of the streamwise velocity only. The confidence in the DNS data is driven by the matrix R presented in Sec. <ref>. The matrix is diagonal and expressed as R = σ_m^2I, where σ_m quantifies the uncertainty of the measurements. A relative accuracy of 20% is applied to the value of each observation. This implies that the variance of the velocity observations ranges between 0.003 and 0.188 depending on the distance of the sensor from the wall, so that the relative weight given to each observation is the same. A specific DA run (DA-LESA), presented in appendix <ref>, has been performed taking as observation the three components of the velocity field for each sensor.The general algorithm for the DA run is now presented. The ensemble members are initialized with a prior state in terms of the initial physical field and of the parametric description of the SGS model. The former is the field of the converged Smagorinsky simulation shown in Sec. <ref> and is the same for every member of the ensemble.The initial conditions for the parametric description of the SGS model are different for the two DA simulations performed, and they will be described in sections <ref> and <ref>. Once the initial state is provided, the DA procedure advances the LES ensemble members in time for a total of 300 t_A, performing an analysis phase every 0.12 t_A. This choice, which implies that only one out of every three observation samples is integrated within the DA scheme, results in a total of 2500 analysis phases. If one considers that the time step for the LES simulations is Δ t = 0.02 t_A, this indicates that one analysis is performed every six forecast steps. No state inflation is used in the DA runs. However, a time-varying parametric stochastic inflation is included to improve the efficiency of the DA optimization. No inflation was used from t_A = 0 to t_A = 12. Then, a relatively strong inflation was included for t_A = [12; 24] with λ = 10%, followed by λ = 5% for t_A = [24;36]. Finally, λ = 1% was used while carrying out the averaging for the calculation of the statistical moments.
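The time-varying parametric inflation schedule described above can be written compactly as follows; this is an illustrative sketch only, in which λ is interpreted as the standard deviation of the stochastic inflation coefficient applied to the SGS parameters.

def parametric_inflation_level(t_over_tA):
    # standard deviation of the stochastic inflation applied to the SGS parameters only
    if t_over_tA < 12.0:
        return 0.00   # no inflation during the first analysis phases
    elif t_over_tA < 24.0:
        return 0.10   # strong inflation to explore the parametric space
    elif t_over_tA < 36.0:
        return 0.05   # intermediate inflation
    else:
        return 0.01   # weak inflation while statistics are accumulated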
Statistical averages are calculated in the range t ∈ [50, 300] t_A, in order to safely dissipate the high levels of variance previously used for the convergence of the parametric description.The two main DA simulations are now presented in detail, highlighting the differences between the procedures.§.§.§ DA run 1 (DA-LES1): optimization of the coefficient C_kIn this first DA run (referred to as DA-LES1) the vector of the parameters to be optimized consists of one element, which is the model constant C_k of Smagorinsky's SGS closure. This is equivalent to optimizing the well-known coefficient C_S, which has been studied in the literature, in particular in the framework of UQ analyses <cit.>. As previously stated, the value of this global constant is updated at each analysis phase. Initial values for the N_e = 40 ensemble simulations are determined using a bounded Gaussian distribution 𝒩(μ_u,σ_u^2). Considering data in the literature <cit.>, μ_u = 0.094 and σ_u = 0.03 were chosen to investigate a suitably large parametric space. The Gaussian distribution is constrained to values in the range μ_u ± 2σ_u, in order to avoid initial nonphysical parametrizations which could lead to the divergence of the algorithm. §.§.§ DA run 2 (DA-LES2): model spatial expansion for C_k Following the results of DA-LES1, a more complex optimization is targeted to improve the predictive capabilities of the LES solver. Exploiting the homogeneity features of the test case in the streamwise direction x and the spanwise direction z, the optimization of this second run (referred to as DA-LES2) targets the behavior of a functional expression for C_k(y). More precisely, the free coefficients in a Gaussian expansion of C_k are considered as the variables to be optimized: C_k = ∑_i=1^n exp(a_i-(y-y_i)^2/σ_i^2) For each of the n Gaussian functions used in the decomposition, the free parameters to be determined are a_i (intensity of the peak), σ_i (width of the function), and y_i (position of the peak). The functions are considered to be symmetric with respect to the half channel height, owing to the statistical symmetry in the wall-normal direction y. The decomposition is performed using n=5 Gaussian functions. This adds up to 15 parameters in the control vector θ to be optimized via the EnKF. The average prior distribution for these functions is shown in Fig. <ref>. This initial distribution is chosen so that the peaks of three of the functions are closer to the wall, in order to provide a suitable representation of ν_sgs in this region. For each ensemble member, the values of the 15 free coefficients are determined using truncated Gaussian (± 2σ) perturbations so that a_i ∼𝒩(-4.5,0.3^2), σ_i ∼𝒩(0.18,0.04^2) and y_i ∼𝒩(0.15,0.05^2). § PREDICTION OF THE STATISTICAL FEATURES USING ON-THE-FLY DA The previous discussion stressed how the DA tools provide an update of the physical state as well as an optimization of the model. In this section, particular attention is focused on the latter aspect. Results from DA-LES1 and DA-LES2 are investigated to observe how the DA procedure dynamically affects the value of the parameter C_k, as well as to assess the effects of the parametric optimization on the statistical behavior of the flow. One important point that must be stressed is that such statistical moments are not directly observed by the DA algorithms.
In fact, unlike recent analyses in the literature <cit.>, the DA procedure relies on instantaneous flow fields obtained from the model and sampled as observation.First, the optimized behavior of the parameter C_k is investigated. For classical simulations using the values prescribed by the numerical code, one has C_k=0.094, C_ε=1.048, which corresponds to C_S ≈ 0.17.Once the convergence of the parametric description is obtained, the coefficients exhibit a very weak time evolution. Results obtained from the run DA-LES1, which targets a global C_k optimization, show that the time-averaged optimized value is C_k ≈ 0.014. This result, which corresponds to C_S ≈ 0.04, is about 7 times smaller than the default C_k value provided by the code. The uncertainty associated with the limited number of ensemble members has been assessed by repeating the initial DA phases using different random distributions for C_k. These results indicated that the optimized values fall in the range C_k ∈ [0.012, 0.017]. Within this range, variations in the predicted physical quantities are very small and fall within the confidence threshold (i.e. the values of the matrix R) provided for this study.The results for the run DA-LES2 are shown in Fig. <ref>, where the profile of C_k and each function of the Gaussian spatial distribution are shown. The final C_k profile in red is again significantly lower than the distribution used as prior. The values range from 0.008 in the core flow region to a maximum value around 0.012 reached close to the wall, around y^+ = 60. In addition, the contributions of the modes of the Gaussian expansion to the description of the augmented C_k profile appear to be very different. Two main modes govern the shape of C_k. The first mode exhibits a slow, quasi-linear decrease moving from the centerline towards the wall. On the other hand, the second one exhibits a maximum in the near-wall region (y^⋆≈ 30). The magnitude of the other three modes is significantly lower, and they mainly smooth out the profile of C_k. Despite the higher complexity of this strategy, one can see that the distribution of C_k in the y direction is quasi-constant with C_k ≈ 0.01, i.e. very similar to the global value obtained in the DA-LES1 run. The enhancement of the predictive capabilities via DA optimization is now assessed by comparing the statistical moments of the velocity field with the available DNS results as well as with the baseline LES. First of all, the prediction of the friction velocity u_τ, which is one of the key features of this test case, is significantly improved for all DA runs. In fact, the targeted DNS friction velocity is u_τ = 0.048 and the baseline LES simulation predicts u_τ = 0.0614, which represents an over-prediction of 28%. DA-LES1 and DA-LES2 predict almost the same friction velocity, with u_τ^DL1 = 0.04617 and u_τ^DL2 = 0.04619. In this case, the friction velocity is under-predicted when compared with the DNS, but the discrepancy is only 4%. This increase in accuracy comes with a significant reduction of the subgrid-scale viscosity in the near-wall region, which does not scale correctly for the Smagorinsky LES. Considering that the values obtained for C_k in the near-wall region with the two DA procedures are almost identical, it is not surprising to observe minimal variations in the prediction of u_τ. Similar conclusions can be drawn from the analysis of Fig. <ref>, where the normalized mean velocity profile u^⋆ against y^⋆ is shown. Averages for the DA procedures are performed so that u^⋆ = ⟨u_x⟩/ u_τ, where ⟨ ·
⟩ is the ensemble average operator and the overline denotes the time average. The results obtained via the two DA procedures show a global improvement in the prediction of the velocity field, reducing on average the discrepancy with the DNS data. This observation is a direct consequence of the improved prediction of u_τ. The apparently more accurate behavior of the baseline LES close to the center of the channel is actually a compensation of errors between the local numerical error and the erroneous prediction of u_τ, which can be observed in Fig. <ref>. In fact, with a more accurate prediction of u_τ, the baseline LES would almost exactly collapse on the results obtained by DA. Minor discrepancies can be observed between the runs DA-LES1 and DA-LES2, which are arguably associated with the rate of convergence of the EnKF using 40 ensemble members.The normalized components of the resolved Reynolds stress tensor are shown in Fig. <ref>. Again, for the DA runs, u_i^' u_j^'^⋆ = ⟨u_i^' u_j^'⟩ / u_τ^2. A global improvement in the accuracy of the prediction of these quantities is observed. For all the components, the location of the peak is accurately predicted. The magnitude of the components also exhibits a general improvement, which is however dependent on the component considered. In fact, while a very good agreement with DNS data is observed for u_z^' u_z^'^⋆, a slight decrease in accuracy is instead obtained for u_y^' u_y^'^⋆. The almost identical results obtained with the two runs DA-LES1 and DA-LES2 suggest that the variations of C_k in the y direction for the latter do not affect the flow prediction. One could argue that the present optimization reached the best performance obtainable with the Smagorinsky LES, whose subgrid-scale representation is affected by strong, intrinsic limitations <cit.>. Another possibility is that the combination of prior state and inflation employed in the present analysis for the model coefficients was not sufficient to perform a complete exploration of the parametric space, and that the final solution for DA-LES2 was drawn towards the same locally optimized state obtained for DA-LES1. The analysis of the physical quantities normalized by the u_τ calculated by each simulation (superscript +) leads to similar conclusions. The mean streamwise velocity profiles, which are shown in Fig. <ref>, confirm the global lack of accuracy of the baseline simulation, which is now even more magnified by the significant error in the prediction of u_τ. The components of the resolved Reynolds stress tensor, which are reported in Fig. <ref>, also provide very similar indications.Finally, a spectral analysis of the velocity field 𝐮 is performed in Fig. <ref>. This flow variable has been sampled in time at four probes located at y^⋆∈ [1.45,46.74]. Power spectra have been obtained using a Morlet transform <cit.> for the baseline LES, the run DA-LES2, and the DNS. The spectra are plotted over the dimensionless wave number κ^+ = κν/u_τ with κ = 2π f/u_c, where f is the set of frequencies used for the Morlet transform. On the first line, data for the streamwise component u_x are shown at locations where observation is available and data from that sensor are used in the DA analysis phase (indicated as U-DA in the legend). Comparing the baseline simulation and the DA run, one can see that the accuracy of the spectra has been improved for every y^⋆ investigated. The best result is observed in the proximity of the wall, as shown in Fig. <ref>.
For this location, the prediction of the energy amplitude is improved by approximately one order of magnitude.The comparison of the spectra from the DNS and the run DA-LES2 also indicates an offset of the wavenumber at which the spectral density starts to decrease rapidly. This offset, which is around one octave, is very close to the ratio of the mesh resolutions in the streamwise direction (see Tab. <ref>). For the baseline LES, this drop in energy begins at lower wavenumbers. This observation can be justified by the discrepancy in the prediction of u_τ (which is used to obtain κ^+) as well as by the Smagorinsky closure, which provides an unwanted dissipative effect at the large scales. Results on the second line of Figs. <ref> and <ref> are obtained at a location where a DNS sensor is available and used for DA analysis, but the information assimilated (streamwise velocity) is not the quantity investigated here. More precisely, the power spectra for the spanwise and wall-normal components are shown. One can see that, similarly to what was observed for the spectra of the streamwise velocity, a global improvement is obtained for the DA-LES2 run. This result confirms the globally beneficial effect of DA on the complete flow field, and not just on the variables for which observation is available. The analysis is completed by the results in Fig. <ref>, where the spectrum of the streamwise velocity sampled at a location not used in the DA analysis (N-DA) is shown. Again, one can see that the spectrum shows an improvement similar to what was observed at sensors actively used in the DA procedure, indicating that the optimization of the SGS closure is globally beneficial, in particular in reducing the dissipation of the resolved energy at large scales.In summary, on-the-fly DA using instantaneous measurements is able to improve the accuracy of LES via calibration of the SGS closure. An interesting point is that the present results are similar to the findings by Mons et al. <cit.>, which were however obtained via observation of the physical quantities used to evaluate the performance of the LES simulations. In this case, the optimization process is more complex, because of the instantaneous nature of the observation as well as its sparsity in space and time. Thus, the present findings open perspectives for real-time optimization of scale-resolving CFD using tools based on the EnKF, once the computational architectures are powerful enough to do so. However, similarly to what was observed by Mons et al. <cit.>, the parametric optimization can mitigate but not eliminate the discrepancy between Smagorinsky LES and DNS, due to the intrinsic limitations of the structural form of the SGS model. While this problem is difficult to overcome, one can argue that on-the-fly DA has a higher potential than offline EnKF approaches to determine, in real time, SGS model structural forms and corrections for a specific case. Lastly, both strategies used in this analysis indicate that the best accuracy is obtained for very low values of the model constant C_k. Although the run DA-LES2 provides a more sophisticated spatial distribution of this parameter, the values are low enough to consider that the dynamic effect of the SGS closure becomes globally and locally minor, as also shown by the profiles obtained for DA-LES1. These results are consistent with recent works presenting extensive comparisons between explicit and implicit SGS closures <cit.>. § SYNCHRONIZATION OF THE FLOW FIELDThe synchronization capabilities of the DA algorithm are now investigated.
By synchronization, we indicate the capability of the DA algorithm to progressively reduce the discrepancy between the instantaneous model solution and the observation, both close to and far from the sensors. If successful, the only state corrections applied by the analysis phase are those due to the accumulation of error in the forecast step(s), caused by the limited accuracy of the model.Even though synchronization is not necessary for the analysis of statistical moments, such as the ones investigated in Sec. <ref>, it is of crucial importance for the analysis of instantaneous features of unsteady flows. In fact, in a digital twin system, efficient synchronization enables the model to identify extreme events and thus prevent critical occurrences for the physical counterpart.Tools based on the EnKF can naturally act on the synchronization of the instantaneous flow. Thanks to the flexibility in the quantity observed and to the local correlations captured between the physical variables, their efficiency in this task is supposedly higher than that of classical nudging. However, during the DA calculations, the variability of the ensemble tends to diminish relatively fast, potentially precluding an efficient synchronization. To avoid this issue, the hyperparameter known as inflation must be properly optimized. In this section, a number of DA runs are performed to study the effects of inflation on the rate of synchronization. In this case, the attention is focused on the very first analysis phases, and results are investigated over two advective times t_A. The DA analyses are now performed every two time steps, i.e. every 0.04 t_A, which corresponds to a total of 50 DA state updates over the time window of investigation. Such a high update frequency has been imposed to ensure that errors due to the sparsity in time of the data are negligible <cit.>. The state estimation is obtained via the flow prediction of 40 members, which are initialized using different velocity fields but share the same C_k for the SGS model, obtained in the DA-LES2 procedure. The 40 velocity fields used as prior states have been generated by running a single simulation with the same optimized SGS model obtained in Sec. <ref> and sampling complete flow fields every 10 t_A. The inflation is here applied only to the state estimation, via the stochastic approach described in Sec. <ref>. The inflation applied to the parametric SGS description is here set to zero in order to exclude effects due to different behaviors of the LES closure. In addition, the covariance matrix ℛ is also the same for each DA run and it is set to R = σ_m^2 I with σ_m = 5%. More details about these last two hyperparameters are provided in appendix <ref>, where parametric inflation is shown to have a negligible effect for the purpose of this analysis. The effectiveness of the synchronization is evaluated using the following information: * The velocity field obtained by the ensemble members is sampled at three sensors, which are selected among the 10800 sensors previously used in the reference DNS. Details about the sensors are given in Tab. <ref>. One can see that two of the probes are used in the DA algorithm, while the last one is not directly used. Still, the data obtained for the latter can be used for comparison.* A global estimation of the normalized root mean square deviation (indicated as Φ) for the velocity field is performed considering data from the 408 sensors used within the DA algorithm and from 408 sensors that were not used in the EnKF.
The definition of Φ is given below, for an instant k: Φ_k = √(∑_j=1^N_o(⟨ s_j,k⟩-α_j,k)^2/N_o)/α_k^mean with ⟨ s_k ⟩ = 1/N_e∑_i=1^N_e𝐇u_i,k, α_k^mean = ∑_j=1^N_oα_j,k/N_o and N_o the number of observationsThe evolution of the instantaneous streamwise velocity u_x, normalized by the centerline average velocity u_c, is shown in Fig. <ref> for the three probes. The velocity sampled from the DNS, which is used as observation, is shown in blue. Data sampled at the same location from the ensemble members of the DA procedure are shown in black. In this case, the black line corresponds to an ensemble average. Shaded areas visually represent the confidence level/variability in the data. More precisely, the blue area is associated with the values included in the covariance matrix R and has a thickness of 2σ_m. On the other hand, the grey area represents the 95% confidence interval for the model representation. This quantity is driven by the distribution of the prior states at t_A = 0, and it is progressively affected by the inflation applied to the physical state as more analysis phases are performed.The three probes have been selected to highlight different features of the flow field. Probes 1 and 2 correspond to sensors used in the DA procedures, but they are located at different distances from the wall (y^⋆ = 1.45 and y^⋆ = 19.84, respectively). On the other hand, probe 3 is located at y^⋆ = 26.85 and the corresponding sensor is not used in the DA analyses. One can see in the first line of Fig. <ref> that, if no state inflation is used, the initial model variability due to the prior states collapses very rapidly, with a drastic shrinking of the grey area. The grey and blue areas exhibit a very limited overlap, which prevents the model realizations from synchronizing with the observation. In the second, third, and fourth lines of Fig. <ref>, progressively more state inflation is used during the analysis phases. One can distinctly see an increase in the grey area associated with model variability, which does not decrease for larger simulation times. The analysis of the results for probes 1 and 2 clearly indicates that a threshold level of 15% Gaussian state inflation appears to be enough to obtain a convincing synchronization of the velocity field at the sensors. This threshold could potentially be even lower if more sophisticated algorithms for state inflation are used. Significant improvements with increasing inflation are observed as well for probe 3, even if synchronization is not completely achieved there. Therefore, these results confirm that the effect of the EnKF is not just local but that, thanks to the scale interactions captured by the underlying LES model, a global improvement in the instantaneous flow prediction is obtained. One conclusion that can be drawn is that, once a sufficiently large overlap of the confidence areas is maintained for a sufficiently long time, good synchronization is achieved. Similar behavior was also observed by Tandeo et al. <cit.>, but for a one-dimensional model.The normalized root mean square deviation defined in equation <ref> is now used to provide a global assessment of the capabilities of the DA algorithm to synchronize the LES model with the available DNS data. Results are shown in Fig. <ref> for the four cases previously analyzed, i.e. 0%, 5%, 15%, and 25% state inflation. The red line corresponds to a limit Φ_lim calculated by comparing the values of the non-inflated simulations against 1000 observation times.
Therefore, for an infinite number of observations and locations, Φ_lim for Fig. <ref> (a) and (b) should be the same. It is different here due to the limited number of probes and the heterogeneous set of probe coordinates. Results in Fig. <ref> (a) correspond to the average discrepancy observed over the 408 locations where sensors are used for DA. The DA runs perform significantly better in the first stages, thanks to the variability initially provided by the choice of the prior states. However, results tend to degrade rather rapidly for the DA run without state inflation. This run could be expected to show errors very similar to Φ_lim after a sufficiently long time. On the other hand, the three DA experiments with non-zero state inflation behave very similarly. Their error magnitude is significantly smaller than Φ_lim and does not appear to deteriorate in the time window analyzed. Fig. <ref> (b) shows the results for the normalized root mean square deviation at sensors where DNS data are available but not used in the DA procedure. Results are qualitatively similar to what was previously discussed for Fig. <ref> (a), even if, in this case, the results for the DA runs are closer to Φ_lim. This observation is due to the lack of a correct representation of the correlation between variables, which results from the limited number of ensemble members (sampling error). In this case, results seem to be more sensitive to the value of the state inflation, as very strong inflation seems to perform worse than moderate state inflation. One could expect in this case that the perturbations might be strong enough to introduce an unwanted noise effect on the flow prediction, degrading the global accuracy. Another potential issue with hyperparameters, which is not studied in the present work, is associated with the characteristic length used for covariance localization. If the selected length is large, spurious correlations may appear because of sampling errors. On the other hand, a short length may preclude an accurate representation of the correlation between the variables, acting as a filter over the multi-scale non-local interactions observed in turbulent flows. In summary, the analysis of the global quantity Φ stresses how important DA can be in providing a successful instantaneous state estimation, which could be even more important than an accurate parametric optimization for the prediction of rare extreme events and for the optimization of unsteady flows exhibiting a strong time evolution of their features. § CONCLUSIONAn online DA strategy based on state-of-the-art techniques for the Ensemble Kalman Filter has been used to improve the predictive capabilities of Large-Eddy Simulation. The attention of the work is mostly devoted to the correct representation of instantaneous features, which can be essential to predict and anticipate extreme events affecting industrial applications. To perform the analysis, an on-the-fly coupling has been established via the platform CONES, combining LES solver runs using OpenFOAM and localized instantaneous high-fidelity information obtained from a DNS. First, the DA runs used instantaneous values of the velocity field to optimize the parametric behavior of the Smagorinsky model used for subgrid closure. Two strategies have been proposed to obtain an optimized value of the model constant C_k. Despite the difference in complexity, both strategies provide a similar result, namely a significant reduction of the value of C_k and thus of the intensity of the SGS model.
These conclusions support recent discussions in the LES community about the usage of explicit and implicit SGS modeling <cit.>. This optimization reduces the discrepancy between the statistical moments of the flow field and the DNS data, but it does not eliminate it, as observed by Mons et al. <cit.>. The reason behind this observation is associated with the structural limitations of the Smagorinsky model, whose intrinsically dissipative nature is not able to fully take into account the effects of the filtered scales and their interactions with the resolved flow field.The DA model has then been used to analyze its efficiency in flow reconstruction and synchronization with the sparse high-fidelity data available. It was shown that DA is able to significantly improve the correlation between model results and observation, but the efficiency of such synchronization is governed by the state inflation applied. This hyperparameter is an essential feature of the DA algorithm which deserves more specific studies in the future. Similarly, the effects of physical and covariance localization, whose sensitivity was not investigated in the present analysis, will be extensively studied in future research on online DA strategies.Our research activities are supported by the funding of the French Agence Nationale de la Recherche (ANR) through project PRC 2020 ALEKCIA.§ USAGE OF MULTIPLE PHYSICAL INFORMATION FOR EACH SENSOR IN THE DA PROCEDURE In order to test the sensitivity of the DA algorithm to multiple physical quantities being available at one sensor, an additional DA run has been performed. This test, referred to as DA-LESA, is almost identical to DA-LES1. The only difference is that, for each sensor, the three components of the velocity field are provided here. We recall that for the runs DA-LES1 and DA-LES2 only the streamwise component of the velocity field was used as observation in the analysis phase. Therefore, for DA-LESA, the observation array is composed of 408 × 3=1224 values at each analysis phase. The covariance matrix of the measurement error is expressed as R = σ_m^2I, where σ_m quantifies the uncertainty of the measurements. In this case, σ_m is the same for every sensor and it is calculated accounting for a 5% uncertainty on the maximum velocity observed in the DNS, to mimic the accuracy of experimental measurements. Therefore, σ_m ≈ 0.045 in this case. This choice implies that the confidence in the DNS results is lower approaching the wall. This decision is beneficial for obtaining a robust behavior of the EnKF, because large discrepancies between DNS and LES can be observed very close to the wall. The results of the optimization are similar to those of the other DA runs, indicating that the EnKF procedure is robust. The optimized value of the model constant is C_k ≈ 0.025, which is 3.7 times smaller than the baseline LES value and corresponds to C_S ≈ 0.06. During the DA run, the values exhibit oscillations in the range C_k ∈ [0.020, 0.030]. DA-LESA also shows a good improvement in the prediction of the friction velocity, u_τ= 0.052, with an over-prediction of the friction velocity of 8.3%, compared with the 28% of the baseline LES. The normalized mean velocity u^+ over y^+ and the normalized resolved shear stress of DA-LES1 and DA-LESA are shown in Fig. <ref>. Other statistical moments of the velocity field are not shown here for the sake of conciseness, as they provide similar information.
Differences between the DA runs in the prediction of the statistical moments are noticeable and mainly associated with the different prediction of the friction velocity, which is less accurate for DA-LESA. One possible reason is associated with the level of confidence in the observation, which was set at the same level for the three components of the velocity field. In the near-wall region, the streamwise component is around one order of magnitude larger than the other two components, and the uncertainties propagated in the observation vector act as random noise on u_y and u_z. The problem of determining an optimized hyperparametric description of the confidence level of the observations, which degraded the global accuracy of the DA run in this case, deserves future investigation for situations where such a quantity is not directly quantifiable.§ COMPUTATIONAL RESOURCES REQUIRED TO PERFORM THE DA RUN The computational resources required to perform the DA runs are now discussed. Tab. <ref> shows information about preliminary tests performed by varying a number of key parameters, such as the number of mesh elements used for the LES model N, the number of sensors/observations N_o and the size of the ensemble N_e. In particular, the values investigated for N (350000 and 154000) correspond to the numbers of mesh elements for the complete and clipped physical domains used in the present work. Comparing the completion time between lines 1 and 2 of Tab. <ref>, one can see that the reduction of the degrees of freedom of the model is beneficial in terms of computational cost, dividing the completion time by 2.25. However, the most important parameter is the number of observations. The comparison of lines 2, 4, and 5 shows a dramatic reduction of the computational resources required with fewer sensors. This point stresses the importance of the quality of the observations used in the DA rather than their quantity, as previously shown in <cit.>. Lastly, one can see that the comparison of results in lines 2 and 3, where a different number of ensemble members N_e is used, shows a lower impact on the computational cost when compared with the previous parameters of investigation.§ SUPPLEMENTARY DETAILS ABOUT SYNCHRONIZATIONThe sensitivity of synchronization to inflation in the parametric description of the model and in the variance of the observation is discussed here. Fig. <ref> shows the normalized root mean square deviation for different combinations of parametric inflation and state inflation. Three levels of parameter inflation are used, from light to dark color: 0%, 2%, and 5%. State inflation is set to 0% in blue, 5% in green, and 15% in orange. Inflation of the model parameters appears to have a negligible effect on the synchronization obtained via DA when compared with state inflation. Fig. <ref> shows four levels of the prescribed variance for the observation, for 5% state inflation. Again, synchronization does not seem to be affected by the levels of confidence in the observations tested here, which are in the range of recommendations for robust application of the EnKF. Very low or very high confidence in the observation can lead to poor synchronization as well as to inaccurate parametric optimization, as shown by Tandeo et al. <cit.>.
Mingyang Xu^1, Yujie Tan^1 (corresponding author, [email protected]), Yurong Liang^1 (corresponding author, [email protected]), Jiawen Zhi^1, Xiaoyang Guo^1, Dan Luo^1, Panpan Wang^1 (corresponding author, [email protected]), Hanzhong Wu^1,2 (corresponding author, [email protected]), Chenggang Shao^1 (corresponding author, [email protected])

[1] MOE Key Laboratory of Fundamental Physical Quantities Measurements, Hubei Key Laboratory of Gravitation and Quantum Physics, PGMF and School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China
[2] State Key Laboratory of Applied Optics, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China

In this work, we have built an experimental setup to simulate the clock noise transmission with two spacecraft and two optical links, and have further demonstrated the extraction of a picometer-level signal drowned by large laser frequency noise and clock noise with a data post-processing method. Laser frequency noise is almost eliminated by using the idea of time-delay interferometry (TDI) to construct an equal-arm interferometer. Clock asynchronism and clock jitter noise are significantly suppressed by transmitting the clock noise on a laser sideband generated with an electro-optic modulator (EOM). Experimental results show a reduction in laser frequency noise by approximately 10^5 and in clock noise by 10^2, recovering a weak displacement signal with an average amplitude of about 60 picometers and a period of 1 second. This work achieves, to some extent, a proof-of-principle verification of the noise-reduction function of the TDI technique, serving the data-processing research of space-borne gravitational wave detection. Keywords: Space-borne gravitational wave detection; Time-delay interferometry; Clock noise reduction; Clock synchronization § INTRODUCTIONWith the successful direct detection of gravitational waves (GWs) by the Laser Interferometer Gravitational-Wave Observatory (LIGO) in 2016 <cit.>, people can study the universe in a unique and entirely new way. Ground-based GW detection is mainly sensitive to GWs in the frequency band of 10-10^4 Hz, being limited below 10 Hz by ground vibration noise and gravity gradient noise. In order to find more abundant GW sources, space-borne GW detection missions have been proposed, including LISA <cit.>, DECIGO <cit.>, Tianqin <cit.>, Taiji <cit.>, etc.A space-borne GW detector, such as LISA, consists of three spacecraft. Two drag-free test masses are housed in each spacecraft, serving as inertial references. When GWs pass by, the distance between the test masses in two adjacent spacecraft changes slightly. Heterodyne laser interferometers are used to measure this tiny distance change, in which a clock is required to trigger the digital sampling process. The GW signals are so weak that various noise sources can overwhelm them. Typically, the laser frequency noise and the clock noise are the two main noise sources, which are about 7 and 3 orders of magnitude higher than the typical GW signals, respectively. Suppressing these noises in hardware is costly and difficult. Thus, a post-processing technique named time-delay interferometry (TDI) has been proposed <cit.>. TDI uses different combinations of the data streams to construct a virtual equal-arm interferometer, which can reduce the laser frequency noise. To suppress the clock noise, an EOM is used to transfer the local clock noise to the remote spacecraft, which constructs a clock comparison chain <cit.>.
Another clock noise reduction strategy uses an optical frequency comb to link the laser and the clock, and then one can simultaneously remove the laser and clock noises by modifying the TDI combination <cit.>. Theoretical studies have shown the importance and necessity of the TDI post-processing algorithm in space-borne GW detection. To demonstrate aspects of the signal processing chain, several laboratory experiments have been designed. The first demonstration experiment <cit.>, using a Sagnac interferometer, showed that the laser frequency noise can be eliminated by using the laser round-trip data streams, and that the clock noise can be eliminated by using EOM sideband modulation. Other experiments have demonstrated the Michelson combination to suppress laser frequency noise using a long fiber delay <cit.> or an electronic-phase-delay unit <cit.>, and recently researchers have used a hexagonal optical bench to verify a clock synchronization scheme between three satellites down to LISA performance levels <cit.>. Optical-frequency-comb-based TDI experiments <cit.> also demonstrate post-processing algorithms to suppress laser frequency noise and clock noise. Researchers have made great efforts to suppress the interferometer noises and electronic noises, and these experimental results demonstrated the excellent performance of the TDI suppression technique. The previous works focused on the noise floor of the system <cit.> or on concept demonstration <cit.>; however, they did not introduce a real displacement signal, and did not check whether a real picometer (pm) level signal could be extracted after the laser frequency noise and clock noise were suppressed.In this work, we present a TDI-like system for extracting a pm-level displacement signal under large laser frequency noise and clock noise. The results show that the post-processing technique reduces the laser frequency noise and the clock noise by 10^5 and 10^2, respectively, recovering a real displacement signal with an average amplitude of 60 pm driven by a piezoelectric (PZT) ceramic. This confirms that weak signals can be restored by post-processing even if they are drowned by large laser frequency noise and clock noise. § EXPERIMENT SETUPFig. <ref> shows the experimental setup. This experiment is set up to resemble laser interference measurements between two spacecraft, and each spacecraft carries one laser, one independent clock, one phasemeter, and one clock transfer chain based on an EOM. A 1064 nm fiber laser (RIO) is split into two paths: one path passes through a fiber acousto-optic modulator (AOM1, AAoptics-80 MHz) with a 75 MHz driving frequency, and the other passes through another AOM (AOM2) with an 85 MHz driving frequency. In AOM1, a frequency noise is added to the driving frequency <cit.>. Thus, the laser frequency noises from AOM1 and AOM2 are independent and different. A square loop consisting of four beam splitters is designed to resemble the laser interference between two satellites, and a PZT-driven mirror is used to introduce a real, weak displacement signal. The two laser interference signals are then received by the photodetectors (KEYANG), and the phase or frequency information is obtained from homemade FPGA phasemeters <cit.>. Each phasemeter is triggered by an independent clock, so clock noise is introduced during the heterodyne measurements. To eliminate the clock noise, a high-frequency source (Rigol DSG821) is used to up-convert the 10 MHz clock to the GHz range, and the up-converted clock noise is then modulated onto a sideband of the laser by the EOM (iXblue NIR-10G).
To suppress interferometer noise, the fiber section and the free-space optical path are placed in two separate thermal-insulation boxes to minimize environmental disturbances (airflow and temperature fluctuations).

The carrier data stream and the lower-sideband data stream measured by phasemeter 1 can be written as

s_1^c(t_1) = p_2(t_1) - p_1(t_1) - a_1 q_1(t_1),
s_1^sb(t_1) = p_2(t_1) - p_1(t_1) - m_2 q_2(t_1) + m_1 q_1(t_1) - (a_1 - m_2 + m_1) q_1(t_1).

Here, p(t) is the laser frequency noise, q(t) is the dimensionless relative clock noise, and a is the heterodyne beat frequency, with a_1 = v_2 - v_1 = 85 MHz - 75 MHz = 10 MHz, where v_1 and v_2 are the center frequencies of the laser after passing through AOM1 and AOM2, respectively; m is the EOM modulation frequency. By combining carrier and sideband, one can extract an expression that mainly contains clock noise:

r_1(t_1) = s_1^c(t_1) - s_1^sb(t_1) = m_2 q_2(t_1) - m_2 q_1(t_1).

Similarly, the carrier data stream and the lower-sideband data stream measured by phasemeter 2 can be written as

s_2^c(t_2) = p_1(t_2) - p_2(t_2) - a_2 q_2(t_2) + h(t_2),
s_2^sb(t_2) = p_1(t_2) - p_2(t_2) - m_1 q_1(t_2) + m_2 q_2(t_2) - (a_2 - m_1 + m_2) q_2(t_2) + h(t_2),

where a_2 = -a_1 and h(t) is the displacement signal driven by the PZT; the combination of carrier 2 and sideband 2 is

r_2(t_2) = s_2^c(t_2) - s_2^sb(t_2) = m_1 q_1(t_2) - m_1 q_2(t_2).

It can be seen that r_1 and r_2 contain the same dominant noise, but clock asynchronism leads to residual noise when the two are subtracted. The clock asynchronism can be written as <cit.>

t_1(τ) = t_2(τ) + δτ_2,0 + δτ_2,

where δτ_2,0 is the constant initial time offset between clock 1 and clock 2, and δτ_2 is the timing noise of clock 2 relative to clock 1. Therefore, the sampling times of the two clocks must be synchronized. Taking clock 1 as the primary clock, the timing noise of clock 2 relative to clock 1 can be obtained from the data stream r_2 collected with clock 2:

δτ_2 ≈ ∫_0^τ r_2/m_1 dτ.

The constant initial time offset between clock 1 and clock 2 can be obtained by pseudo-random code ranging (PRNR) <cit.> or time-delay interferometry ranging (TDIR) <cit.>. In our experiment we use TDIR, which is itself a post-processing technique:

γ(Λ) = a_1/m_2 r_1(t_1) + a_1/m_1 r_2(t_2 + Λ + δτ_2).

Using dynamic fractional-delay interpolation <cit.>, we can shift r_2 by an arbitrary amount of time without introducing additional data streams. The constant initial time offset δτ_2,0 is then determined by finding the translation time Λ that minimizes γ(Λ). After clock synchronization, combining the two carrier data streams eliminates the laser frequency noise:

s_1^c(t) + s_2^c(t) = h(t) - a_1(q_1(t) - q_2(t)).

At this point the clock noise dominates; by further combining the carrier and sideband data streams, we can eliminate the clock noise while the signal remains:

η = s_1^c(t) + s_2^c(t) - a_1/m_2 r_1(t) = h(t).

§ EXPERIMENT RESULTS

Fig. <ref> shows that the clock offset grows to 2 ms over 16000 s according to Eq. (<ref>) (blue line), while the orange line shows the detrended clock timing noise. Additionally, Fig. <ref> shows the amplitude spectral density (ASD) of Eq. (<ref>) for different translation times. As the clock-synchronization accuracy improves, the residual noise in γ decreases, ultimately being limited by the noise floor of the transmission link.
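To make the algebra of these combinations concrete, the following toy Python sketch (our own illustration: white noises with arbitrary levels, and the two clocks assumed already synchronized, so the TDIR search and fractional-delay interpolation are omitted) builds the four data streams defined above and verifies that laser frequency noise and clock noise cancel in η while the PZT signal h(t) survives.

import numpy as np

# Toy illustration of the carrier/sideband combinations above. All quantities are
# frequency fluctuations in Hz; noise levels are arbitrary and the two clocks are
# taken as already synchronized.
rng = np.random.default_rng(0)
fs, T = 100.0, 200.0                                   # sampling rate [Hz], duration [s]
t = np.arange(0.0, T, 1.0 / fs)
p1, p2 = rng.normal(0, 1e3, t.size), rng.normal(0, 1e3, t.size)    # laser frequency noises
q1, q2 = rng.normal(0, 1e-7, t.size), rng.normal(0, 1e-7, t.size)  # relative clock noises
a1, a2 = 10e6, -10e6                                   # heterodyne beat frequencies, a_2 = -a_1
m1, m2 = 2.400e9, 2.401e9                              # up-converted clock (EOM) frequencies
h = 2.0 * np.sin(2 * np.pi * 1.0 * t)                  # PZT displacement signal, period 1 s

# Carrier and sideband data streams of the two phasemeters
s1_c  = p2 - p1 - a1 * q1
s1_sb = p2 - p1 - m2 * q2 + m1 * q1 - (a1 - m2 + m1) * q1
s2_c  = p1 - p2 - a2 * q2 + h
s2_sb = p1 - p2 - m1 * q1 + m2 * q2 - (a2 - m1 + m2) * q2 + h

r1 = s1_c - s1_sb        # = m2 (q2 - q1): clock-noise monitor of phasemeter 1
r2 = s2_c - s2_sb        # = m1 (q1 - q2): would drive the clock synchronization (unused here)
eta = s1_c + s2_c - (a1 / m2) * r1      # laser and clock noise cancel, h(t) remains

print("rms(eta - h) =", np.std(eta - h), " rms(h) =", np.std(h))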
In our experiment, the transmission-link noise floor is dominated by the high-frequency signal sources. Finally, the initial offset can be determined to sub-microsecond accuracy, with δτ_2,0 ≈ 0.0065505 s. The collected data are in frequency units; to obtain the displacement, the frequency data are integrated to obtain the phase, which is then multiplied by λ/2π, where λ is the laser wavelength, to convert phase into displacement.

Fig. <ref> shows the frequency-domain results. Line A shows the raw measurement data s_1^c, which are dominated by laser frequency noise, with 1×10^-7 m/Hz^1/2 at 1 Hz and 2×10^-3 m/Hz^1/2 at 1 mHz. Line B shows the result of eliminating laser frequency noise before clock synchronization, s_1^c(t_1)-s_2^c(t_2), which is dominated by clock noise and by residual laser frequency noise caused by the clock asynchronism. Line D shows the result of eliminating laser frequency noise after clock synchronization, as in Eq. (<ref>), with 3×10^-9 m/Hz^1/2 at 1 Hz and 1×10^-5 m/Hz^1/2 at 1 mHz. Below 10 mHz, lines B and D are almost identical, both being limited mainly by clock noise. Above 10 mHz, line B lies above line D because of the residual laser frequency noise caused by clock asynchronism. Since the clock asynchronism is dominated by the constant initial offset, and this offset is small, it has little effect at long time scales and mainly affects the high-frequency band. Line C is related to the clock noise multiplied up by the signal sources, as shown in Eq. (<ref>). Lines C and D follow the same trend, indicating that they are limited by the same noise, i.e., the clock noise. Then, using Eq. (<ref>), we can suppress the clock noise below the interferometer noise floor, which is 1×10^-11 m/Hz^1/2 at 1 Hz and 2×10^-8 m/Hz^1/2 at 1 mHz. The interferometer noise is mainly caused by optical-path instability and temperature fluctuations. Finally, the laser frequency noise and clock noise are suppressed by about 5 and 2 orders of magnitude, respectively, and the weak displacement signal at 1 Hz becomes visible, as shown by line E. After 16000 s of data accumulation, the ASD of this displacement signal reaches about 4×10^-9 m/Hz^1/2 at 1 Hz. This corresponds to a displacement signal with an average amplitude of 60 pm and a period of 1 s.

Fig. <ref> shows the time-domain raw data s_1^c and the processed data η. The blue line shows the random laser frequency drift and clock drift, which drown out the signal. After data processing, the noise in the blue line is largely suppressed, as shown by the red line. The inset in Fig. <ref> enlarges a small section of the red line, where a displacement signal with an average amplitude of about 60 pm and a period of 1 s can be clearly seen.

§ CONCLUSION

In future space-borne GW detection, the science signal is extremely weak and can easily be drowned out by noise. Laser frequency noise and clock noise are the dominant noise sources, introduced by the unequal arm lengths and by the digital sampling of the heterodyne interference signal; TDI post-processing techniques are used to eliminate them. In our experiment, although no arm delay is introduced, we use the TDI idea of constructing equal arms and the idea of clock-sideband transfer and comparison.
The laser frequency noise is suppressed by 5 orders of magnitude and the clock noise by 2 orders of magnitude, and a 60 pm signal is finally recovered, limited by the noise floor of the interferometer. The experimental results show that, once the dominant noises are suppressed, a weak signal that is genuinely present can be recovered. At present, the system is mainly limited by interferometer noise. In the future, we will build an integrated optical bench to suppress optical noise. In addition, the signal sources that multiply the clock frequency up to the GHz range also introduce additional noise, and we will address this electrical noise as well.

§ CREDIT AUTHORSHIP CONTRIBUTION STATEMENT

Mingyang Xu: Investigation, Methodology, Formal analysis, Writing – original draft, Writing – review & editing. Yujie Tan: Conceptualization, Funding acquisition, Writing – review & editing. Yurong Liang: Methodology, Funding acquisition, Writing – review & editing. Jiawen Zhi: Formal analysis. Xiaoyang Guo: Methodology. Dan Luo: Formal analysis. Panpan Wang: Investigation, Validation, Writing – review & editing. Hanzhong Wu: Investigation, Methodology, Funding acquisition, Writing – review & editing. Cheng-Gang Shao: Validation, Writing – review & editing, Funding acquisition, Project administration, Supervision.

§ DECLARATION OF COMPETING INTEREST

The authors declare no conflicts of interest.

§ DATA AVAILABILITY

Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.

§ ACKNOWLEDGMENTS

This work is supported by the National Key Research and Development Program of China (2022YFC2204601, 2022YFC2203903), the National Natural Science Foundation of China (11925503, 12275093 and 12175076), the Natural Science Foundation of Hubei Province (2021CFB019), and the State Key Laboratory of Applied Optics (SKLAO2022001A10).
http://arxiv.org/abs/2310.18380v1
{ "authors": [ "Mingyang Xu", "Yujie Tan", "Yurong Liang", "Jiawen Zhi", "Xiaoyang Guo", "Dan Luo", "Panpan Wang", "Hanzhong Wu", "Chenggang Shao" ], "categories": [ "astro-ph.IM", "gr-qc", "physics.ins-det" ], "primary_category": "astro-ph.IM", "published": "20231027022344", "title": "Experimental demonstration of picometer level signal extraction with time-delay interferometry technique" }
Recently, <cit.> established a lower bound on the iteration complexity of first-order optimization under an L-smooth condition and a bounded-noise-variance assumption. However, a thorough review of the existing literature on Adam's convergence reveals a noticeable gap: none of the known upper bounds meets this lower bound. In this paper, we close the gap by deriving a new convergence guarantee for Adam that assumes only an L-smooth condition and a bounded-noise-variance assumption. Our results remain valid across a broad range of hyperparameters. In particular, with properly chosen hyperparameters, we derive an upper bound on the iteration complexity of Adam and show that it meets the lower bound for first-order optimizers. To the best of our knowledge, this is the first work to establish such a tight upper bound for Adam's convergence. Our proof uses novel techniques to handle the entanglement between the momentum and the adaptive learning rate and to convert the first-order term in the descent lemma into the gradient norm, which may be of independent interest.

§ INTRODUCTION

First-order optimizers, also known as gradient-based methods, make use of gradient (first-order derivative) information to find the minimum of a function. They have become a cornerstone of many machine learning algorithms owing to their efficiency, since only gradient information is required, and their flexibility, since gradients can be computed easily for any function represented as a directed acyclic computational graph via auto-differentiation <cit.>. It is therefore fundamental to understand the theoretical properties of these first-order methods. Recently, <cit.> established a lower bound on the iteration complexity of stochastic first-order methods. Formally, in the well-studied setting where the objective is L-smooth and a stochastic oracle returns unbiased gradient estimates with bounded variance (see Assumptions <ref> and <ref>), any stochastic first-order algorithm requires at least ε^-4 queries (in the worst case) to find an ε-stationary point, i.e., a point with gradient norm at most ε. <cit.> further show that this lower bound is tight, as it matches the existing upper bound on the iteration complexity of SGD <cit.>.

On the other hand, among first-order optimizers, Adam <cit.> has become dominant in training state-of-the-art machine learning models <cit.>. Compared with vanilla stochastic gradient descent (SGD), Adam adds two key components: (i) momentum, which accumulates historical gradient information, and (ii) an adaptive learning rate, which rectifies coordinate-wise step sizes. A minimal sketch of the resulting update is given below.
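As a concrete reference for the analysis in this paper, here is a minimal Python sketch of the Adam variant studied here (no bias correction and λ = 0 in the adaptive step); the toy objective, the default hyperparameters, and the initialization ν_0 = 10^-8 are our own illustrative choices rather than part of the algorithm specification.

import numpy as np

def adam_simplified(grad_fn, x0, eta=1e-3, beta1=0.9, beta2=0.999, T=5000, seed=0):
    """Adam as analyzed in this paper: no bias correction, step eta * m_t / sqrt(nu_t)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    m = np.zeros_like(x)                    # first-order momentum m_t
    nu = np.full_like(x, 1e-8)              # second-order momentum nu_t, with nu_0 > 0
    iterates = []
    for _ in range(T):
        g = grad_fn(x, rng)                 # unbiased stochastic gradient
        nu = beta2 * nu + (1 - beta2) * g ** 2
        m = beta1 * m + (1 - beta1) * g
        x = x - eta * m / np.sqrt(nu)
        iterates.append(x.copy())
    return iterates[rng.integers(len(iterates))]   # output a uniformly sampled iterate

# Toy example: f(x) = 0.5 * ||x||^2 with Gaussian gradient noise of unit variance
x_out = adam_simplified(lambda x, rng: x + rng.normal(0.0, 1.0, x.shape), x0=np.ones(5))
print(x_out)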
The pseudo-code of Adam is given as Algorithm <ref>. While Adam's sophisticated design underlies its empirical superiority, it poses great challenges for theoretical analysis. After examining a series of theoretical works on the upper bound of Adam's iteration complexity <cit.>, we find that none of them matches the lower bound for first-order optimizers: they not only require more queries than the lower bound to reach ε-stationary iterates but also rely on additional assumptions (see Section <ref> for a detailed discussion). This theoretical mismatch is all the more unnatural given Adam's great empirical advantage over SGD, and it prompts the question: is the gap between the upper and lower bounds for Adam a result of the inherent complexity of Adam's design, or can it be attributed to proof techniques that are not sharp enough?

This paper answers this question in favor of the latter hypothesis by establishing a new upper bound on the iteration complexity of Adam for a wide range of hyperparameters covering typical choices. Specifically, our contributions can be summarized as follows:

* We examine existing works that analyze the iteration complexity of Adam and find that none of them meets the lower bound for first-order optimization algorithms;
* We derive a new convergence guarantee for Adam assuming only the L-smooth condition and the bounded-variance assumption (Theorem <ref>); it holds for a wide range of hyperparameters covering typical choices;
* With properly chosen hyperparameters, we further tighten Theorem <ref> and show that the upper bound on the iteration complexity of Adam meets the lower bound, closing the gap (Theorem <ref>). Our upper bound is tighter than existing results by a logarithmic factor, despite using weaker assumptions.

To the best of our knowledge, this work provides the first upper bound on the iteration complexity of Adam that requires no assumptions beyond the L-smooth condition and the bounded-variance assumption. It is also the first upper bound that matches the lower bound for first-order optimizers.

Organization of this paper. The rest of the paper is organized as follows: in Section <ref>, we present the notation and setup of our analysis; in Section <ref>, we revisit existing works on the iteration complexity of Adam; in Section <ref>, we present a convergence analysis of Adam with general hyperparameters (Theorem <ref>); in Section <ref>, we tighten Theorem <ref> with specific hyperparameters and derive an upper bound on Adam's iteration complexity that meets the lower bound; and in Section <ref>, we discuss the limitations of our results.

§ PRELIMINARY

The Adam algorithm is restated as Algorithm <ref> for convenient reference. Note that, compared with the original version of Adam in <cit.>, the bias-correction terms are omitted to simplify the analysis; our analysis extends immediately to the original version of Adam because the effect of the bias-correction terms decays exponentially. Also, in the original version of Adam the adaptive learning rate is η/(√(ν_t)+λ1_d) rather than η/√(ν_t). Our setting is the more challenging one, and our results extend easily to the original version, since the λ term upper-bounds the adaptive learning rate and thereby eases the analysis.

Notation. For a,b∈ℤ^≥0 with a≤b, denote [a,b]={a,a+1,⋯,b-1,b}. For any two vectors u,v∈ℝ^d, denote by u⊙v the Hadamard product (i.e., coordinate-wise multiplication) of u and v.
When analyzing Adam, we denote the true gradient at iteration t as _t = ∇ f(_t), and the sigma algebra before iteration t as _t= σ(_1,⋯,_t-1). We denote conditional expectation as ^|_t[∗]=[∗|_t]. We also use asymptotic notations o, 𝒪, Ω, and Θ, where h_2(x)=o_x→ x_0(h_1(x)) means that lim_x→ x_0h_2(x)/h_1(x)=0 (when the context is clear, we abbreviate x→ x_0 and only use o(h_1(x))); h_2(x)=𝒪(h_1(x)) means that there exists constant γ independent of x such that h_2(x)≤γ h_1(x); h_2(x)=Ω(h_1(x)) means that h_1(x)=𝒪(h_2(x)); and h_2(x)=Θ(h_1(x)) means that h_2(x)=𝒪(h_1(x)) and h_2(x)=Ω(h_1(x)). Objective function. In this paper, we consider solving the following optimization problem: min_∈ℝ^d f(). We make the following assumption on the objective function f. [On objective function]We assume f to be non-negative. We further assume that f satisfies L-smooth condition, i.e., f is differentiable, and the gradient of f is L-Lipschitz.We denote the set of all objective functions satisfying Assumption <ref> as ℱ(L).Stochastic oracle. As f is differentiable, we can utilize the gradient of f (i.e., ∇ f) to solve the above optimization problem. However, the ∇ f is usually expensive to compute. Instead, we query a stochastic estimation of ∇ f through a stochastic oracle O. Specifically, thestochastic oracleconsists of a distribution 𝒫 over a measurable space 𝒵 and a mapping O_f: ℝ^d×𝒵→ℝ^d. We make the following asssumption on .[Onstochastic oracle]We assume thatis unbiased, i.e., ∀∈ℝ^d,𝔼_z∼𝒫_f(,z)=∇ f(). We further assumehas bounded variance, i.e., ∀∈ℝ^d,𝔼_z∼𝒫 [‖_f(,z)-∇ f() ‖^2 ] ≤σ^2.We denote the set of all stochastic oracles satisfying Assumption <ref> with variance bound σ^2 as 𝔒(σ^2).Algorithm. Adam belongs to first-order optimization algorithms, which is defined as follows:An algorithmis called a first-order optimization algorithm, if it takes an input _1 and hyperparameter θ, and produces a sequence of parameters as follows: first sample a random seed r from some distribution 𝒫_r[Such a random seed allows sampling from all iterations to generate the final output of the optimization algorithm. As an example, Algorithm <ref> sets 𝒫_r as a uniform distribution over [T].], set ^(θ)_1=_1 and then update the parameters as^(θ)_t+1= _θ^t(r,^(θ)_1, _f(^(θ)_1,z_1),⋯, _f(^(θ)_t,z_t)),where z_1,z_2,⋯,z_t are sampled i.i.d. from 𝒫.Iteration complexity. Denote the set of all first-order optimization algorithms as 𝒜_first. We next introduce iteration complexity to measure the convergence rate of optimization algorithms.The iteration complexity of first-order optimization algorithmis defined as𝒞_ε(,Δ,L,σ^2)=sup_∈𝔒(σ^2)sup_f∈ℱ(L)sup__1:f(_1)=Δinf_θ{T: ‖∇ f(^(θ)_T)‖≤ε}.Furthermore, the iteration complexity of the family of first-order optimization algorithms 𝒜_first is𝒞_ε(Δ,L,σ^2)=sup_∈𝔒(σ^2)sup_f∈ℱ(L)sup__1:f(_1)=Δinf_∈𝒜_firstinf_θ{T: ‖∇ f(^(θ)_T)‖≤ε}. It should be noticed that the iteration complexity of the family of first-order optimization algorithms is a lower bound of the iteration complexity of a specific first-order optimization algorithm, i.e., ∀∈𝒜_first,𝒞_ε(,Δ,L,σ^2)≥𝒞_ε(Δ,L,σ^2). § RELATED WORKS: NONE OF EXISTING UPPER BOUNDSMATCH THE LOWER BOUND In this section, we examine existing works that study the iteration complexity of Adam, and defer a discussion of other related works to Appendix <ref>. Specifically, we find that none of them match the lower bound for first-order algorithms provided in <cit.> (restated as follows). 
∀ L,Δ,σ^2>0, we have 𝒞_ε(Δ,L,σ^2)=Ω (1/ε^4).

Note that in the above bound we omit the dependence on Δ, L, and σ^2; this is standard practice in existing works (see <cit.> for examples), because the dependence on the accuracy ε determines how many additional iterations are required for a smaller target accuracy and is thus of greater interest. In this paper, when we say that an upper bound "matches the lower bound", we always mean that it has the same order in ε as the lower bound.

Generally speaking, existing works on the iteration complexity of Adam fall into two categories: they either (i) assume that the gradient is universally bounded or (ii) make stronger smoothness assumptions. Below we explain why neither category matches the lower bound in <cit.>.

The first line of works, including <cit.>, assumes that the gradient norm of f is universally bounded, i.e., ‖∇ f(x)‖≤ G for all x∈ℝ^d. In other words, they consider a different iteration complexity, defined as follows:

𝒞_ε(𝒜,Δ,L,σ^2,G) ≜ sup_O∈𝔒(σ^2) sup_f∈ℱ(L),‖∇ f‖≤ G sup_x_1:f(x_1)=Δ inf_θ{T: ‖∇ f(x^(θ)_T)‖≤ε}.

This line of works does not match the lower bound for two reasons. First, the upper bound they derive is O(log(1/ε)/ε^4), which carries an additional log(1/ε) factor compared with the lower bound. Second, the bound they derive is for 𝒞_ε(𝒜,Δ,L,σ^2,G). Note that ℱ(L)∩{f:‖∇ f‖≤ G} is a proper subset of ℱ(L) for any G; a simple example in ℱ(L) without a bounded gradient is the quadratic function f(x)=‖x‖^2. Therefore, we have

𝒞_ε(𝒜,Δ,L,σ^2) ≥ 𝒞_ε(𝒜,Δ,L,σ^2,G), ∀ G≥0,

and an upper bound on 𝒞_ε(𝒜,Δ,L,σ^2,G) does not apply to 𝒞_ε(𝒜,Δ,L,σ^2). Moreover, their upper bound on 𝒞_ε(𝒜,Δ,L,σ^2,G) tends to ∞ as G→∞, which indicates that, following their analysis, the resulting upper bound on 𝒞_ε(𝒜,Δ,L,σ^2) would be infinite by Eq. (<ref>).

The second line of works includes <cit.>, which additionally assumes a mean-squared smoothness property beyond Assumptions <ref> and <ref>, i.e., 𝔼_z∼𝒫‖O_f(x,z)-O_f(v,z)‖^2 ≤ L‖x-v‖^2. Denote 𝔒̃(σ^2,L) ≜ {O: 𝔼_z∼𝒫‖O_f(x,z)-O_f(v,z)‖^2 ≤ L‖x-v‖^2, ∀x,v∈ℝ^d}∩𝔒(σ^2). The iteration complexity they consider is defined as follows:

𝒞̃_ε(𝒜,Δ,L,σ^2) = sup_O∈𝔒̃(σ^2,L) sup_f∈ℱ(L) sup_x_1:f(x_1)=Δ inf_θ{T: ‖∇ f(x^(θ)_T)‖≤ε}.

The rate derived in <cit.> is O(log(1/ε)/ε^6), obtained by minimizing the upper bounds in <cit.> with respect to the hyperparameter controlling the adaptive learning rate. According to <cit.>, the lower bound on 𝒞̃_ε(𝒜,Δ,L,σ^2) is Ω(1/ε^3), smaller than the original lower bound Ω(1/ε^4), which results in an even larger gap between the upper and lower bounds.

Recently, a concurrent work <cit.> removes the bounded-gradient assumption and the mean-squared smoothness property but imposes a stronger assumption on the stochastic oracle: the set of stochastic oracles it considers is 𝔒̃̃̃ = {O: ∀ x∈ℝ^d, 𝔼_z∼𝒫 O_f(x,z)=∇ f(x), ℙ(‖O_f(x,z)-∇ f(x)‖^2≤σ^2)=1}. 𝔒̃̃̃ is a proper subset of 𝔒(σ^2): a simple example of an oracle in 𝔒(σ^2) but not in 𝔒̃̃̃ is O_f(x,z)=∇ f(x)+z with z a standard Gaussian variable. Therefore, their result does not provide a valid upper bound on 𝒞_ε(𝒜,Δ,L,σ^2).

§ CONVERGENCE ANALYSIS OF ADAM WITH ONLY ASSUMPTIONS <REF> AND <REF>

As discussed in Section <ref>, existing analyses of Adam require additional assumptions beyond Assumptions <ref> and <ref>. In this section, we provide the first convergence analysis of Adam under only Assumptions <ref> and <ref>, which naturally yields an upper bound on the iteration complexity 𝒞_ε(𝒜,Δ,L,σ^2).
In fact, our analysis even holds when the stochastic oracle satisfies the following more general assumption.[Coordinate-wise affine noise variance]We assume thatis unbiased, i.e., ∀∈ℝ^d,𝔼_z∼𝒫_f(,z)=∇ f(). We further assumehas coordinate-wise affine variance, i.e., ∀∈ℝ^d and ∀ i∈ [d],𝔼_z∼𝒫 [|(_f(,z))_i| ^2 ] ≤σ_0^2+σ_1^2∂_i f()^2. One can easily observe that Assumption <ref> is more general than Assumption <ref> since Assumption <ref> immediately indicatesAssumption <ref> with σ_0=σ and σ_1=1. We consider Assumption <ref> not only because it is more general but also because it allows the noise to grow with the norm of the true gradient, which is usually the case in machine learning practice <cit.>.Our analysis under Assumptions <ref> and Assumption <ref> is then given as follows. Letbe by Adam (Algorithm <ref>) and θ=(η,β_1,β_2) are the hyperparameters of . Let Assumption <ref> and <ref> hold. Then, if 0≤≤√()-8σ_1^2(1-)^-2 and <1, we have∑_t=1^T ‖∇ f(_t)‖≤ √(C_2+2C_1 ∑_i=1^d(ln( 2(T+1) ∑_i=1^d√(_0,i +σ_0^2)+24dσ_1^2 C_1/√()ln dσ_1^2 C_1/√()+ 12σ_1^2/√()C_2) ))×√(2(T+1) ∑_i=1^d√(_0,i +σ_0^2)+24dσ_1^2 C_1/√()ln dσ_1^2 C_1/√()+ 12σ_1^2/√()C_2).where _0,i is the i-th coordinate of _0,C_1= 32Lη(1+/√())^3/(1-)(1-/√())^3+16^2σ_0(1-)/√(1-)(1-/√())^3+64(1+σ_1^2)σ_1^2L^2η^2 d/^2(1-/√())^4σ_0(1-)^3/2, C_2=1-/√()/1-8/η f(_1)+32/(1-/√())^2∑_i=1^d _1,i^2/√(_1,i)+ 2C_1 ∑_i=1^d(ln(1/√(_0,i)) - T ln). A proof sketch is given in Section <ref> and thefull proof is deferred to Appendix. The right-hand side in Eq. (<ref>) looks messy at the first glance. We next explain Theorem <ref> in detail and make the upper bound's dependence over hyperparameters crystally clear. §.§ Discussion on Theorem <ref> Required assumptions and conditions. As mentioned previously, Theorem <ref> only requires Assumption <ref> and <ref>, which aligns with the setting of the lower bound (Proposition <ref>). To our best knowledge, this is the first analysis of Adam without additional assumptions.As for the range ofand , one can immediately see that the condition β_1 ≤√()-8σ_1^2(1-)^-2 degenerates to β_1≤√() in the bounded gradient case (i.e., σ_1=0), the weakest condition required in existing literature <cit.>. When σ_10, such a condition is stronger than β_1≤√(). We point out that this is not due to technical limitations but instead agrees with existing counterexamples for Adam <cit.>: <cit.> shows that when σ_1 0, there exists a counterexample satisfying Assumption <ref> and Assumption <ref> and a pair of (, ) with <√() and Adam with (, ) diverges over such a counterexample.Dependence over , η, and T. Here we consider the influence of , η, and T while fixing constant (we will discuss the effect ofin Section <ref>). With logarithmic factors ignored and coefficients hidden, C_1, C_2 andthe right-hand-side of Eq. (<ref>) can be rewritten with asymptotic notations as C_1=(1/√(1-)+η^2/√((1-)^3)),C_2=𝒪̃(1/√(1-)+η^2/√((1-)^3)+1/η+T√(1-)+η^2/√(1-)T),∑_t=1^T ‖∇ f(_t)‖=𝒪̃(C_1+C_2+√(TC_1)+√(TC_2)),where 𝒪̃denotes 𝒪 with logarithmic terms ignored.Consequently, the dependence of Eq. (<ref>)over ,η and T becomes ∑_t=1^T ‖∇ f(_t)‖= 𝒪̃(1/√(1-)+η^2/√((1-)^3)+1/η+T√(1-)+η^2/√(1-)T) +(√( T)/√(1-)+η√(T)/√((1-)^3)+√(T)/√(η)+T√(1-)+η/√(1-)T).Here we consider two cases: (i).and η are independent over T, and (ii).and η are dependent over T. For case (i), based on the above equation, one can easily observe that the averaged gradient norm 1/T∑_t=1^T ‖∇ f(_t)‖ will converge to the threshold 𝒪(η^2/√(1-)+√(1-)+η/√(1-)) with rate 𝒪(1/√(T)). 
This aligns with the observation in <cit.> that Adam will not converge to the stationary point with constant .For case (ii), in order to ensure convergence, i.e.,min_t∈ [T]‖_t‖_1→ 0 asT→∞, a sufficient condition is thatthe right-hand-side of the above equation is o(T). Specifically, by choosing η=Θ(T^-a) and 1- =Θ(T^-b), we obtain that 1/T∑_t=1^T ‖∇ f(_t)‖= (T^b/2-1+T^-2a+3b/2-1+T^a-1+T^-b/2+T^-2a+b/2)+(T^-1/2+b/4+T^-1/2-a+3b/4+T^-1/2+a/2+T^-b/4+T^-a+b/4).By simple calculation, we obtain that the right-hand side of the above inequality is o(1) as T→∞ if and only if b>0, 1>a>0 and b-a<1. Moreover, the minimum of the right-hand side of the above inequality is (1/T^1/4), which is achieved at a=1/2 and b=1. Such a minimum implies an upper bound of the iteration complexity which at most differs from the lower bound by logarithmic factors as solving (1/T^1/4)=ε gives T=(1/ε^4). In Theorem <ref>, we will further remove the logarithmic factor by giving a refined proof when a=1/2 and b=1 and close the gap between the upper and lower bounds.Dependence over λ. Our analysis allows λ = 0 in the adaptive learning rate η1/√(_t)+λ1_d. In contrast, some existing works <cit.> require non-zero λ and their iteration complexity has polynomial dependence over 1/λ, which is less desired as λ can be as small as 10^-8 in practice (e.g., in PyTorch's default setting). Furthermore, compared to their setting, our setting is more challenging as non-zero λ immediately provides an upper bound of the adaptive learning rate.§.§ Proof Sketch of Theorem <ref> In this section, we demonstrate the proof idea of Theorem <ref>. Generally speaking, our proof is inspired by (i). the construction of the Lyapunov function for SGDM <cit.> and (ii) the construction of auxiliary function and the conversion from regret bound to gradient bound for AdaGrad <cit.>, but the adaptation of these techniques to Adam is highly non-trivial, as SGDM does not hold an adaptive learning rate, and the adaptive learning rate of AdaGrad is monotonously decreasing. Below we sketch the proof by identifying three key challenges in the proof and provide our solutions respectively.Challenge I: Disentangle the stochasticity in stochastic gradient and adaptive learning rate. For simplicity, let us first consider the case where β_1=0, i.e., where the momentum _t degenerates to the stochastic gradient _t. According to the standard descent lemma, we have thatf() ≤ f()+[⟨ , -⟩+L/2-^2 ] ≤𝔼f()+[⟨, -η1/√(_t)⊙_t⟩]_First Order+L/2η^21/√(_t)⊙_t^2_Second OrderThe first challenge arises from bounding the "First Order" term above. To facilitate the understanding of the difficulty, we compare the "First Order" term of Adam to the corresponding "First Order" term of SGD, i.e., -η⟨_t,_t⟩. By directly applying ^|_t g_t =_t, we obtain that the "First-Order" term of SGD equals to -η‖_t‖^2. However, as for Adam, we do not even know what ^|_t1/√(_t)⊙_t is given that the stochasticity in _t and _t entangles. A common practice is to use a surrogate adaptive learning rate _t measurable with respect to _t, to approximate the real adaptive learning rate _t. This leads to the following equation:[⟨, -η1/√(_t)⊙_t⟩]_First Order = [⟨, -η1/√(_t)⊙_t⟩]_First Order Main+[⟨, -η(1/√(_t)-1/√(_t)) ⊙_t⟩]_Error.One can immediately see that "First Order Main" terms equals to [⟨, -η1/√(_t)⊙_t⟩]<0, but now we need to handle the "Error" term. 
In existing literature, such a term is mostly bypassed by applying the bounded gradient assumption <cit.>, which, however, we do not assume.Solution to Challenge I.Inspired by recent advance in the analysis of AdaGrad <cit.>, we consider the auxiliary function ξ_t =[ η⟨, -1/√(_t+1)⊙_t⟩], where we choose _t = _t-1+(1-)σ_0^2 1_d. In the following lemma, we show that the error term can be controlled using ξ_t, parallel to (Lemma 4. <cit.>). Let all conditions in Theorem <ref> hold. Then, Error≤5/8[ η⟨, -1/√(_t)⊙_t⟩]+ 𝒪(1/√()ξ_t-1-ξ_t)+Small Error. In the right-hand-side of inequality (<ref>), one can easily observe that the first term can be controlled by "First Order Main" term, and the third term is as small as the "Second Order" term. However, the second term seems annoying – in the analysis of AdaGrad <cit.>, there is no 1/√() factor, making the corresponding term a telescoping, but this is no longer true due to the existence of the 1/√() factor. We resolve this difficulty by looking at the sum of 1/√()ξ_t-1-ξ_t over t from 1 to T, which gives 𝒪((1-) ∑_t=1^T-1ξ_t). By further noticing that _t+1≥_t, we have∑_t=1^T(1/√()ξ_t-1-ξ_t) ≤𝒪((1-) ∑_t=1^T-1[ η⟨, -1/√(_t)⊙_t⟩]).The right-hand-side term can thus be controlled by the "First Order Main" term whenis close to 1. Compared to the analysisof AdaGrad in <cit.>, our proof technique has two-fold novelties. First, our auxiliary function has an additional (1-)σ_0^21_d term, which is necessary for the analysis of Adam as it makes _t lower bounded from 0 (AdaGrad does not need this, as _t-1 of AdaGrad itself is lower bounded). Secondly, as discussed above, the "AdaGrad version" ofsecond term inthe right-hand-side of inequality (<ref>) is a telescoping, the sum of which can be bounded straightforwardly. Challenge II: Handle the mismatch between stochastic gradient and momentum. In the analysis above, we assume β_1=0. Additional challenges arise when we move to the case where 0. Specifically, following the same routine, the "First Order Main" term now becomes [⟨_t,-η1/√(_t)⊙_t]. It is hard to even estimate whether such a term is negative or not, given that _t and _t still has entangled stochasticity, and the conditional expectation of _t also differs from _t, both due to the existence of historical gradient.Solution to Challenge II. Inspired by the state-of-art analysis of SGDM <cit.>, which leverage the potential function f(v_t) with v_t=_t-β_t-1/1-β, we propose to use the potential function f(_t) with _t=_t-/√()_t-1/1-/√(). Applying descent lemma to f(_t), we obtain that [f(_t+1)]≤𝔼f(_t)+[ ⟨∇ f(_t) , _t+1-_t ⟩]_First Order+L/2_t+1-_t ^2_Second Order.We again focus on the "First Order" term, which can be written as[ ⟨∇ f(_t) , _t+1-_t ⟩] = [ ⟨∇ f(_t),_t+1-_t/1-/√()-/√()_t-_t-1/1-/√()⟩](*)≈ [ ⟨∇ f(_t),-η/1-/√()1/√(_t)⊙_t+ η/1-/√()/√(_t-1)⊙_t-1⟩](∘)≈ [ ⟨∇ f(_t),-η/1-/√()1/√(_t)⊙_t+ η/1-/√()/√(_t)⊙_t-1⟩] = [ ⟨_t,-η(1-)/1-/√()1/√(_t)⊙_t⟩]=[ ⟨_t,-η(1-)/1-/√()1/√(_t)⊙_t⟩].Here approximate equation (*) is due to Assumption <ref> and that _t is close to _t, and approximate equation (∘) is due to Lemma <ref> and _t=_t-1+(1-)σ_0^2 ≈_t-1 (of course, these are informal statements. Please refer to Appendix <ref> for the detailed proof). With the above methodology, we arrive at the following lemma. Let all conditions in Theorem <ref> holds. 
Then,f(_t+1) ≤ f(_t)-Ω([ η⟨, -1/√(_t)⊙_t⟩])+ 𝒪(1/√()ξ_t-1-ξ_t)+Small Error.Summing the above lemma over t from 1 to T, we obtain∑_t=1^T[ ‖1/√(_t)⊙_t‖^2] ≤𝒪(1)+∑_l=1^d 𝒪(ln(_t,i/_0,l) - T ln).We then encounter the second challenge. Challenge III: Convert Eq. (<ref>) to a bound of gradient norm. Although we have derived a regret bound, i.e., a bound of∑_t=1^T[ ‖1/√(_t)⊙_t‖^2], we need to convert it into a bound of [ ‖_t‖^2]. In existing works <cit.> which assumes bounded gradient, such a conversion is straightforward because (their version of) _t is upper bounded. However, we do not assume bounded gradient and _t can be aribitrarily large, making [ ‖1/√(_t)⊙_t‖^2] arbitrarily small than[ ‖_t‖^2].Solution to Challenge III. As this part involves coordinate-wise analysis, we define _t,i, _t,i, _t,i, and ^1_t,i respectively as the l-th coordinate of _t, _t, _t, and ^1_t. To begin with, note that due to Cauchy's inequality and Hölder's inequality,(∑_t=1^T ‖_t‖)^2 ≤(∑_t=1^T[ ‖1/√(_t)⊙_t‖^2]) (∑_t=1^T[ ‖√(_t)‖^2]).Therefore, we only need to derive an upper bound of ∑_t=1^T[ ‖√(_t)‖^2], which is achieved by the following divide-and-conque methodology. Firstly, when |_t,i|≥σ_0/σ_1, we can show 2^|_t|_t,i|^2 ≥ 2|_t,i|^2 ≥^|_t|_t,i|^2. Then,through a direct calculation, we obtain that[|_t,i|^2/√(_t,i)1_| G_t,i|≥σ_0/σ_1] ≥√()/3(1-)σ_1^2[(√(_t+1,i)-√(_t,i))1_| G_t,i|≥σ_0/σ_1],and thus∑_t=1^T[|_t,i|^2/√(_t,i)]≥√()/3(1-)σ_1^2∑_t=1^T [(√(_t+1,i)-√(_t,i))1_| G_t,i|≥σ_0/σ_1].Secondly, when |_t,i|< σ_0/σ_1, define {_t,i}_t=0^∞ as _0,l= _0,l, _t,i= _t-1,i+| g_t,i|^21_| G_t,i| < σ_0/σ_1. One can easily observe that _t,i≤_t,i, and thus ∑_t=1^T [(√(_t+1,i)-√(_t,i))1_|_t,i|< σ_0^2/σ_1^2]≤ ∑_t=1^T (√(_t,i + (1- )σ_0^2)-√((_t-1,i+ (1- )σ_0^2))) = √(_t,i + (1- )σ_0^2)+(1-√())∑_t=1^T-1√(_t,i + (1- )σ_0^2) - √((_0,i+ (1- )σ_0^2)).Putting the above two estimations together, we derive that(1-√())∑_t=1^T+1√(_t,i)≤3(1-)σ_1^2/√()∑_t=2^T[|_t,i|^2/√(_t,i)]+ (1-√())(T+1)√(σ_0^2+_0,i).The above methodology can be summarized as the following lemma. Let all conditions in Theorem <ref> hold. Then,∑_t=1^T+1∑_i=1^d √(_t,i)≤ 2(T+1) ∑_i=1^d√(_0,i +σ_0^2)+24dσ_1^2 C_1/√()ln dσ_1^2 C_1/√()+C_2. Based on Lemma <ref>, we can derive the estimation of ∑_t=1^T[ ‖√(_t)‖^2] since _t is close to _t.The proof is then completed by combining the estimation of ∑_t=1^T[ ‖√(_t)‖^2] (Eq. (<ref>)) and Eq. (<ref>). § GAP-CLOSING UPPER BOUND ON THE ITERATION COMPLEXITY OF ADAM In this section, based on a refined proof of Stage II of Theorem <ref> (see Appendix <ref>) under the specific case η= Θ(1/√(T)) and =1 -Θ(1/T), we show that the logarithmic factor in Theorem <ref> can be removed and the lower bound can be achieved. Specifically, we have the following theorem. Let Assumption <ref> and Assumption <ref> hold. Then, select the hyperparameters of Adam as η=a/√(T), =1-b/T and =c√(), where a,b>0 and 0≤ c<1 are independent of T. 
Then, let _τ be the output of Adam in Algorithm <ref>, and we have𝔼‖∇ f(_r) ‖≤√(2∑_i=1^d√(_0,i+3bσ_0^2)+4D_2σ_1^2b/√(T)+256σ_1^2b/(1-c)^2T∑_i=1^d 𝔼_1,i^2/√(_1,i) +16 D_1σ_1^2b/√(T)ln(e+4 D̃σ_1^2b/√(T)))×√( 2D_1/√(T)∑_i=1^dln( 2∑_i=1^d√(_0,i+3bσ_0^2)+4D_2σ_1^2b/√(T)+256σ_1^2b/(1-c)^2T∑_i=1^d 𝔼_1,i^2/√(_1,i) +16 D_1σ_1^2b/√(T)ln(e+4 D̃σ_1^2b/√(T))) + 64/(1-c)^2T∑_i=1^d 𝔼_1,i^2/√(_1,i)+D_2/√(T)),whereD_1≜32La/b(1+c)^3/(1-c)^3+32σ_0/√(b)(1-c)^3+(1+σ_1^2)σ_1^2L^2da^2/(1-c)^4σ_0√(b^3), D_2≜8/af(_1)+ D_1(bd-∑_i=1^d ln_0,i).As a result, letbe Adam in Algorithm <ref>, we have 𝒞_ε (,Δ,L,σ^2) =𝒪(1/ε^4).The proof of Theorem <ref> is based on a refined solution of Challenge II in the proof of Theorem <ref> under the specific hyperparameter settings, and we defer the concrete proof to Appendix <ref>. Below we discuss on Theorem <ref>, comparing it with practice, with Theorem <ref> and existing convergence rate of Adam, and with the convergence rate of AdaGrad.Alignment with the practical hyperparameter choice. The hyperparameter setting in Theorem <ref> indicates that to achieve the lower bound of iteration complexity, we need to select small η and close-to-1 β_2, with less requirement over β_1. This agrees with the hyperparameter setting in deep learning libaries, for example, η=10^-3, =0.999, and =0.9 in PyTorch.Comparison with Theorem <ref> and existing works.To our best knowledge, Theorem <ref> is the first to derive the iteration complexity 𝒪(1/ε^4). Previously, the state-of-art iteration complexity is 𝒪(log 1/ε/ε^4) <cit.> where they additionally assume bounded gradient. Theorem <ref> is also tight than Theorem <ref> (while Theorem <ref> holds for more general hyperparameter settings). As discussed in Section <ref>, if applying the hyperparameter setting in Theorem <ref> (i.e., η=a/√(T), =1-b/T and =c√()) to Theorem <ref>, we will obtain that ‖∇ f(_τ) ‖≤𝒪(poly(log T)/√(T)) and 𝒞_ε (,Δ,L,σ^2) =𝒪(log 1/ε/ε^4), which is worse than the upper bound inTheorem <ref> and the lower bound in Proposition <ref> by a logarithmic factor.Comparison with AdaGrad. AdaGrad <cit.> is another popular adaptive optimizer. Under Assumptions <ref> and <ref>, the state-of-art iteration complexity of AdaGrad is 𝒪(log 1/ε/ε^4) <cit.>, which is worse than Adam by a logarithmic factor. Here we show that such a gap may be not due to the limitation of analysis, and can be explained by analogizing AdaGrad to Adam without momentum as SGD with diminishing learning rate to SGD with constant learning rate. To start with, the update rule of AdaGrad is given as _t=_t-1+ _t^⊙ 2, _t+1= _t-η1/√(_t)⊙_t.We first show that in Algorithm <ref>, if we allow the hyperparameters to be dynamical, i.e., _t=β_2,t_t-1+ (1-β_2,t)_t^⊙ 2,_t=β_1,t_t-1+ (1-β_1,t)_t, _t+1= _t-η_t 1/√(_t)⊙_t,then Adam is equivalent to AdaGrad by setting η_t=η/√(t), β_1,t=0, and β_2,t=1-1/t. Specifically, by setting μ_t=t_t in Eq. (<ref>), we have Eq. (<ref>) is equivalent to with Eq. (<ref>) (by replacing _t by μ_t in Eq. (<ref>)). Comparing the above hyperparameter setting with that in Theorem <ref>, we see that the above hyperparameter setting can be obtained by changing T to t and setting c=0 in Theorem <ref>. This is similar to the relationship between SGD with diminishing learning rate Θ(1/√(t)) and SGD with diminishing learning rate Θ(1/√(T)). 
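The reparameterization described above can be checked numerically. The following sketch (the gradient stream, step size η, and dimension are arbitrary choices of ours) confirms that Adam with dynamical hyperparameters η_t = η/√t, β_{1,t} = 0, and β_{2,t} = 1 - 1/t takes exactly the same steps as AdaGrad started from μ_0 = 0.

import numpy as np

rng = np.random.default_rng(1)
grads = rng.normal(size=(50, 3))            # an arbitrary stream of stochastic gradients
eta = 0.1

# AdaGrad: mu_t = mu_{t-1} + g_t^2,  x_{t+1} = x_t - eta * g_t / sqrt(mu_t),  mu_0 = 0
x_ada, mu = np.zeros(3), np.zeros(3)
for g in grads:
    mu = mu + g ** 2
    x_ada = x_ada - eta * g / np.sqrt(mu)

# Adam with eta_t = eta / sqrt(t), beta1_t = 0, beta2_t = 1 - 1/t;
# by induction nu_t = mu_t / t, so the two updates coincide exactly.
x_adam, nu = np.zeros(3), np.zeros(3)
for t, g in enumerate(grads, start=1):
    beta2_t = 1.0 - 1.0 / t
    nu = beta2_t * nu + (1.0 - beta2_t) * g ** 2
    x_adam = x_adam - (eta / np.sqrt(t)) * g / np.sqrt(nu)

print("max |x_adagrad - x_adam| =", np.max(np.abs(x_ada - x_adam)))   # ~ 0 up to round-off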
Recall that the iteration complexity of SGD with a diminishing learning rate Θ(1/√t) also carries an additional logarithmic factor compared with SGD with a constant learning rate, which may explain the gap between AdaGrad and Adam.

§ LIMITATIONS

Although our work provides the first result closing the gap between the upper and lower bounds of Adam's iteration complexity, it has several limitations, listed as follows:

Dependence on the dimension d. The bounds in Theorems <ref> and <ref> are monotonically increasing in d. This is undesirable, since the upper bound on the iteration complexity of SGD is independent of d. Nevertheless, removing this dependence on d is technically hard: the coordinate-wise learning rate forces us to treat every coordinate separately, while the descent lemma does not hold for a single coordinate but couples all coordinates together. To the best of our knowledge, all existing works on the convergence of Adam suffer from the same problem. We leave removing the dependence on d as important future work.

No better result with momentum. In Theorems <ref> and <ref>, the tightest bound is achieved when β_1=0 (i.e., no momentum is applied). This contradicts the common wisdom that momentum helps to accelerate optimization. Although the benefit of momentum is not well understood even for the simpler optimizer SGD with momentum, we view this as a limitation of our work and defer proving the benefit of momentum in Adam to future work. Also, our result does not imply that setting β_1 is less critical than setting the other hyperparameters. The primary objective of this paper is to characterize the dependence on ε, and the importance of setting β_1 might be justified through other characterizations. To help readers gain a deeper understanding of this issue, we include experiments illustrating the dependence of performance on β_1 in Appendix <ref>.

This work was funded by the CAS Project for Young Scientists in Basic Research under Grant No. YSBR-034 and the Innovation Funding of ICT, CAS under Grant No. E000000.

§ OTHER RELATED WORKS

Section <ref> provided a detailed discussion of existing convergence analyses of Adam. In this section, we briefly review other related works. Adam was proposed together with a convergence analysis in online optimization <cit.>. The proof, however, was later shown to be flawed in <cit.>, as it requires the adaptive learning rate of Adam to be non-increasing. This has motivated a line of works modifying Adam to ensure convergence. The modifications include enforcing a non-increasing adaptive learning rate <cit.>, imposing upper and lower bounds on the adaptive learning rate <cit.>, and using a different approach to estimate the second-order momentum <cit.>. Recently, <cit.> discovered a new optimizer, Lion, through symbolic discovery; it replaces Adam's adaptive learning rate with a sign operation and achieves performance comparable to Adam with lower memory cost.

§ AUXILIARY LEMMAS

The following two lemmas are useful when bounding the second-order term. Assume we have 0<β_2<1 and a sequence of real numbers (a_n)_n=1^∞. Let b_0>0 and b_n=β_2 b_n-1+(1-β_2) a_n^2. Then, we have ∑_n=1^T a_n^2/b_n ≤ 1/(1-β_2)( ln(b_T/b_0) - T lnβ_2). Assume we have 0<β_1^2<β_2<1 and a sequence of real numbers (a_n)_n=1^∞. Let b_0>0, b_n=β_2 b_n-1+(1-β_2) a_n^2, c_0=0, and c_n=β_1 c_n-1+(1-β_1) a_n.
Then, we have∑_n=1^T | c_n |^2/b_n≤(1-)^2/(1-/√())^2(1-)(ln(b_T/b_0) - T ln).To begin with,| c_n |/√(b_n)≤ (1-) ∑_i=1^n^n-i| a_i|/√(b_n)≤ (1-) ∑_i=1^n^n-i| a_i|/√(b_n)≤ (1-) ∑_i=1^n(/√())^n-i| a_i|/√(b_i). Applying Cauchy's inequality, we obtain| c_n |^2/b_n≤ (1-)^2 (∑_i=1^n(/√())^n-i| a_i|/√(b_i))^2 ≤(1-)^2 (∑_i=1^n(/√())^n-i)(∑_i=1^n(/√())^n-i| a_i|^2 /b_i) ≤(1-)^2/1-/√()(∑_i=1^n(/√())^n-i| a_i|^2 /b_i). Summing the above inequality over n from 1 to T then leads to∑_n=1^T | c_n |^2/b_n≤ (1-)^2/1-/√()∑_n=1^T(∑_i=1^n(/√())^n-i| a_i|^2 /b_i)= (1-)^2/1-/√()∑_n=1^T| a_n|^2 /b_n(∑_i=0^T-n(/√())^i)≤ (1-)^2/(1-/√())^2∑_n=1^T| a_n|^2 /b_n≤(1-)^2/(1-/√())^2(1-)(ln(b_T/b_0) - T ln).The proof is completed. The following lemma bound the update norm of Adam. We have ∀ t≥ 1, |_t+1,i-_t,i|≤η1-/√(1-)√(1-^2/)≤η1-/√(1-)√(1-/√()). We have that |_t+1,i-_t,i| =η|_t,i/√(_t,i)|≤η∑_i=0^t-1 (1-) ^i|_t-i,l|/√(∑_i=0^t-1 (1-) ^i|_t-i,l|^2+^t _0,i) ≤ η1-/√(1-)√(∑_i=0^t-1^i|_t-i,l|^2)√(∑_i=0^t-1^2i/^i)/√(∑_i=0^t-1^i|_t-i,l|^2)≤η1-/√(1-)√(1-^2/).Here the second inequality is due to Cauchy's inequality. The proof is completed. § PROOF OF THEOREM <REF> This section collects the proof of Theorem <ref>. As a part of the proof, we first provide formal descriptions of Lemma <ref>, Lemma <ref>, and Lemma <ref>, and their corresponding proofs. We then proceed to prove Theorem <ref> leveraging these lemmas. §.§ Formal description of Lemma <ref>, Lemma <ref>, and Lemma <ref> and their proofLet all conditions in Theorem <ref> hold. Then, we have [ ⟨_t,-η/1-/√()(1/√(_t)-1/√(_t))⊙_t⟩] ≤5/8∑_i=1^dη1-/1-/√()|_t,i|^2/√(_t,i)+2η√(1-)σ_0/(1-^2/)^2∑_i=1^d_t,i^2/_t,i+ η4(1-)/(1-/√())^2 √()σ_1^2∑_i=1^d( _t-1,i^2/√(_t,i)- _t,i^2/√(_t+1,i))+∑_i=1^d 2η√(1-β_2)σ_0/(1-β_1)(1-/√())[ (|_t,i|^2/_t,i)]+ 64(1+σ_1^2)σ_1^2L^2η^3d/^2(1-/√())^3(1-)σ_0√( 1-)‖1/√(_t-1)⊙_t-1‖^2. To start with,^|_t[ ⟨_t,-η/1-/√()(1/√(_t)-1/√(_t))⊙_t⟩]= ^|_t[ ⟨_t,-η/1-/√()((1-β_2)(σ_0^21_d-_t^⊙ 2)/√(_t)√(_t)(√(_t)+√(_t)))⊙_t⟩]≤ ∑_i=1^d η/1-/√()^|_t[ |_t,i|((1-β_2)(σ_0^2+_t,i^ 2)/√(_t,i)√(_t,i)(√(_t,i)+√(_t,i)))|_t,i|] = ∑_i=1^d η/1-/√()^|_t[ |_t,i|((1-β_2)_t,i^ 2/√(_t,i)√(_t,i)(√(_t,i)+√(_t,i)))|_t,i|]_I.1.1+∑_i=1^d η/1-/√()^|_t[ |_t,i|((1-β_2)σ_0^2/√(_t,i)√(_t,i)(√(_t,i)+√(_t,i)))|_t,i|]_I.1.2.As for I.1.1, we have ∑_i=1^d η/1-β_1/√(β_2)^|_t[ |_t,i|((1-β_2)_t,i^ 2/√(_t,i)√(_t,i)(√(_t,i)+√(_t,i)))|_t,i|](*)≤ ∑_i=1^d η(1-)/(√(1-/√()))^3^|_t[ |_t,i|(√(1-β_2)_t,i ^2 /√(_t,i)(√(_t,i)+√(_t,i)))](∘)≤ ∑_i=1^d η(1-)/(√(1-/√()))^3|_t,i|/√(_t,i)√(^|_t_t,i^2)√(^|_t_t,i^2/(√(_t,i)+√(_t,i))^2) (∙)≤ ∑_i=1^dη(1-)√(1-)/(√(1-/√()))^3|_t,i|/√(_t,i)√(σ_0^2+σ_1^2 _t,i^2 )√(^|_t_t,i^2/(√(_t,i)+√(_t,i))^2) ≤ ∑_i=1^d η(1-)√(1-)/(√(1-/√()))^3|_t,i|/√(_t,i)(σ_0+σ_1 |_t,i|)√(^|_t_t,i^2/(√(_t,i)+√(_t,i))^2),where inequality (*) uses Lemma <ref>, inequality (∘) is due to Holder's inequality, and inequality (∙) is due to Assumption <ref>. 
Applying mean-value inequality respectively to∑_i=1^d η(1-)√(1-)/(√(1-/√()))^3^|_t|_t,i|/√(_t,i)σ_0√(^|_t_t,i^2/(√(_t,i)+√(_t,i))^2) and ∑_i=1^d η(1-)√(1-)/(√(1-/√()))^3^|_t|_t,i|/√(_t,i)σ_1 |_t,i|√(^|_t_t,i^2/(√(_t,i)+√(_t,i))^2), we obtain that the right-hand-side of the above inequality can be bounded by 1/8∑_i=1^d η1-/1-/√()√(1-)σ_0|_t,i|^2/_t,i+2η√(1-)σ_0/(1-/√())^2∑_i=1^d^|_t_t,i^2/(√(_t,i)+√(_t,i))^2+ 1/8∑_i=1^dη1-/1-/√()|_t,i|^2/√(_t,i)+2η(1-)(1-)/(1-/√())^2σ_1^2|_t,i|^2/√(_t,i)^|_t∑_i=1^d_t,i^2/(√(_t,i)+√(_t,i))^2 ≤ 1/8∑_i=1^d η1-/1-/√()|_t,i|^2/√(_t,i)+2η√(1-)σ_0/(1-/√())^2∑_i=1^d^|_t_t,i^2/_t,i+ 1/8∑_i=1^dη1-/1-/√()|_t,i|^2/√(_t,i)+2η(1-)(1-)/(1-/√())^2σ_1^2|_t,i|^2/√(_t,i)^|_t∑_i=1^d_t,i^2/(√(_t,i)+√(_t,i))^2.Here the inequality is due to _t,i=(1-) σ_0^2+_t-1,i≥ (1-) σ_0^2. Meanwhile, we have( 1/√(_t,i)- 1/√(_t+1,i)) _t,i^2 = _t,i^2((1-)^2σ_0^2+(1-)_t,i^2)/√(_t,i)√(_t+1,i) (√(_t,i)+√(_t+1,i))≥_t,i^2(1-)_t,i^2/√(_t,i)√(_t+1,i) (√(_t,i)+√(_t+1,i)) ≥ √()/2_t,i^2(1-)_t,i^2/√(_t,i) (√(_t,i)+√(_t,i))^2. Applying the above inequality back to Eq. (<ref>), we obtain that∑_i=1^d η/1-β_1^|_t[ |_t,i|((1-β_2)_t,i^ 2/√(_t,i)√(_t,i)(√(_t,i)+√(_t,i)))|_t,i|]≤ 1/4∑_i=1^dη1-/1-/√()|_t,i|^2/√(_t,i)+2η√(1-)σ_0/(1-^2/)^2∑_i=1^d^|_t_t,i^2/_t,i+ η4(1-)/(1-/√())^2 √()σ_1^2∑_i=1^d^|_t( 1/√(_t,i)- 1/√(_t+1,i)) _t,i^2. Furthermore, due to Assumption <ref>, we have (we define G_0≜ G_1)_t,i^2≤ _t-1,i^2+2|_t,i||_t,i-_t-1,i|+ 2(_t,i-_t-1,i)^2≤ _t-1,i^2+2L|_t,i|‖_t-_t-1‖+ 2L^2 ‖_t-_t-1‖^2,which further leads to1/√(_t,i)_t,i^2≤ 1/√(_t,i)(_t-1,i^2+2L|_t,i|‖_t-_t-1‖+ 2L^2 ‖_t-_t-1‖^2)(∘)≤ 1/√(_t,i)_t-1,i^2+(1-/√())(1-) √()/16σ_1^2|_t,i|^2/√(_t,i) + 16L^2σ_1^2/^3/2(1-/√())(1-)√(_t,i)‖_t-_t-1‖^2 + 2L^2 /√(_t,i)‖_t-_t-1‖^2≤ 1/√(_t,i)_t-1,i^2+(1-/√())(1-) √()/16σ_1^2|_t,i|^2/√(_t,i) + 16L^2σ_1^2η^2/^3/2(1-/√())(1-)σ_0√( 1-)‖1/√(_t-1)⊙_t-1‖^2 + 2L^2η^2/σ_0√( (1-))‖1/√(_t-1)⊙_t-1‖^2≤ 1/√(_t,i)_t-1,i^2+(1-/√())(1-) √()/16σ_1^2|_t,i|^2/√(_t,i) + 16(1+σ_1^2)L^2η^2/^3/2(1-/√())(1-)σ_0√( 1-)‖1/√(_t-1)⊙_t-1‖^2 . Applying the above inequality back to Eq. (<ref>) leads to thatI.1.1= ∑_i=1^d η/1-β_1^|_t[ |_t,i|((1-β_2)_t,i^ 2/√(_t,i)√(_t,i)(√(_t,i)+√(_t,i)))|_t,i|] ≤ 1/2∑_i=1^dη1-/1-/√()|_t,i|^2/√(_t,i)+2η√(1-)σ_0/(1-^2/)^2∑_i=1^d^|_t_t,i^2/_t,i+ η4(1-)/(1-/√())^2 √()σ_1^2∑_i=1^d^|_t( _t-1,i^2/√(_t,i)- _t,i^2/√(_t+1,i))+ 64d(1+σ_1^2)σ_1^2L^2η^3/^2(1-/√())^3(1-)σ_0√( 1-)‖1/√(_t-1)⊙_t-1‖^2.As for I.1.2, we haveI.1.2= ∑_i=1^d η/1-/√()^|_t[ |_t,i|((1-β_2)σ_0^2/√(_t,i)√(_t,i)(√(_t,i)+√(_t,i)))|_t,i|] ≤ ∑_i=1^d η/1-/√()^|_t[ |_t,i|(√(1-β_2)√(σ_0)/√(_t,i)√(_t,i))|_t,i|] ≤ 1-/8(1-/√())∑_i=1^d η|_t,i|^2/√(_t,i)+∑_i=1^d 2η√(1-β_2)σ_0/(1-β_1)(1-/√())^|_t[ (|_t,i|^2/_t,i)].With Inequalities (<ref>) and (<ref>), we conclude thatI.1≤ 5/8∑_i=1^dη1-/1-/√()|_t,i|^2/√(_t,i)+2η√(1-)σ_0/(1-^2/)^2∑_i=1^d_t,i^2/_t,i+ η4(1-)/(1-/√())^2 √()σ_1^2∑_i=1^d( _t-1,i^2/√(_t,i)- _t,i^2/√(_t+1,i))+∑_i=1^d 2η√(1-β_2)σ_0/(1-β_1)(1-/√())[ (|_t,i|^2/_t,i)]+ 64(1+σ_1^2)σ_1^2L^2η^3d/^2(1-/√())^3(1-)σ_0√( 1-)‖1/√(_t-1)⊙_t-1‖^2.Let all conditions in Theorem <ref> holds. 
Then, f(_t+1)≤f(_t) -η/41-/1-/√()[ η⟨, -1/√(_t)⊙_t⟩]+2η√(1-)σ_0/(1-^2/)^2∑_i=1^d_t,i^2/_t,i+ η4/(1-/√())^2 √()σ_1^2∑_i=1^d( 1/√()ξ_t-1- ξ_t)+∑_i=1^d 2η√(1-β_2)σ_0/(1-β_1)(1-/√())[ (|_t,i|^2/_t,i)]+ 64(1+σ_1^2)σ_1^2L^2η^3d/^2(1-/√())^3(1-)σ_0√( 1-)‖1/√(_t-1)⊙_t-1‖^2+2η√(1-)^2σ_0/(1-)(1-/√())∑_i=1^d [ |_t-1,i|^2/_t-1,i] +L[ 4( /√()/1-/√())^2η^2‖1/√(_t-1)⊙_t-1‖^2 +3(1/1-/√())^2η^2‖1/√(_t)⊙_t ‖^2 ].According to the definition of _t, we have_t+1-_t= _t+1-_t/1-/√()-/√()_t-_t-1/1-/√()=-η/1-/√()1/√(_t)⊙_t+η/1-/√()1/√(_t-1)⊙_t-1=-η/1-/√()1/√(_t)⊙_t+η/1-/√()1/√(_t)⊙_t-1 -η/1-/√()(1/√(_t)-1/√(_t))⊙_t+η/1-/√()(1/√(_t-1)-1/√(_t))⊙_t-1 (*)=-η1-/1-/√()1/√(_t)⊙_t-η/1-/√()(1/√(_t)-1/√(_t))⊙_t+η/1-/√()(1/√(_t-1)-1/√(_t))⊙_t-1,where Eq. (*) is due to _t =_t-1+(1-)_t.Applying the above equation to the "First Order" term, we find that it can be decomposed as[ ⟨∇ f(_t) , _t+1-_t ⟩] = [ ⟨_t, _t+1-_t ⟩]+[ ⟨∇ f(_t)-G_t, _t+1-_t ⟩] = [ ⟨_t, -η1/√(_t)⊙_t ⟩]+[ ⟨_t,-η/1-/√()(1/√(_t)-1/√(_t))⊙_t⟩] +[ ⟨_t,η/1-/√()(1/√(_t-1)-1/√(_t))⊙_t-1⟩]+[ ⟨∇ f(_t)-_t, _t+1-_t ⟩] = -η1-/1-/√()‖1/√(_t)⊙_t ‖^2 +[ ⟨_t,-η/1-/√()(1/√(_t)-1/√(_t))⊙_t⟩]_I.1+[ ⟨_t,η/1-/√()(1/√(_t-1)-1/√(_t))⊙_t-1⟩]_I.2+[ ⟨∇ f(_t)-_t, _t+1-_t ⟩]_I.3. Here we apply Lemma <ref> to bound I.1. We proceed by boundingI.2 and I.3 respectively. As for I.2, we haveI.2 = [ ⟨_t,η/1-/√()(1/√(_t-1)-1/√(_t))⊙_t-1⟩]≤ η/1-/√()∑_i=1^d [ |_t,i||1/√(_t-1,i)-1/√(_t,i)||_t-1,i|] = η/1-/√()∑_i=1^d [ |_t,i||(1-) σ_0^2/√(_t-1,i)√(_t,i)(√(_t,i)+√(_t-1,i))||_t-1,i|]= η/1-/√()∑_i=1^d [ |_t,i||√(1-)√(σ_0)/√(_t-1,i)√(_t,i)||_t-1,i|]≤ 1/81-/1-/√()∑_i=1^d η|_t,i|^2/√(_t,i) +2η√(1-)^2σ_0/(1-)(1-/√())∑_i=1^d [ |_t-1,i|^2/_t-1,i]. As for I.3, we directly apply Assumption <ref> and obtainI.3 = [ ⟨∇ f(_t)-_t, _t+1-_t ⟩]≤ [ ‖∇ f(_t)-_t‖‖_t+1-_t ‖]≤ L[ ‖_t-_t‖‖_t+1-_t ‖] =L[/√()/1-/√()‖_t -_t-1‖(/√()/1-/√()‖_t+1-_t ‖+/√()/1-/√()‖_t-_t-1‖) ]≤L[/√()/1-/√()‖_t -_t-1‖(1/1-/√()‖_t+1-_t ‖+/√()/1-/√()‖_t-_t-1‖) ]≤ L[ 2( /√()/1-/√())^2‖_t -_t-1‖^2 +1/4(1/1-/√())^2‖_t+1-_t ‖^2 ]≤ L[ 2( /√()/1-/√())^2η^2‖1/√(_t-1)⊙_t-1‖^2 +1/4(1/1-/√())^2η^2‖1/√(_t)⊙_t ‖^2 ]. All in all, we summarize that the "First Order" term can be bounded by -η/41-/1-/√()‖1/√(_t)⊙_t ‖^2+2η√(1-)σ_0/(1-^2/)^2∑_i=1^d_t,i^2/_t,i+ η4(1-)/(1-/√())^2 √()σ_1^2∑_i=1^d( _t-1,i^2/√(_t,i)- _t,i^2/√(_t+1,i))+∑_i=1^d 2η√(1-β_2)σ_0/(1-β_1)(1-/√())[ (|_t,i|^2/_t,i)]+ 64(1+σ_1^2)σ_1^2L^2η^3d/^2(1-/√())^3(1-)σ_0√( 1-)‖1/√(_t-1)⊙_t-1‖^2+2η√(1-)^2σ_0/(1-)(1-/√())∑_i=1^d [ |_t-1,i|^2/_t-1,i] +L[ 2( /√()/1-/√())^2η^2‖1/√(_t-1)⊙_t-1‖^2 +1/4(1/1-/√())^2η^2‖1/√(_t)⊙_t ‖^2 ]. Furthermore, the "Second Order" term can be directly bounded byL/2‖_t+1-_t ‖^2 = L/2‖_t+1-_t/1-/√()-/√()_t-_t-1/1-/√()‖^2≤2L ‖_t+1-_t/1-/√()‖^2 +2L‖/√()_t-_t-1/1-/√()‖^2. Applying the estimations of the first-order term and the second-order term to the descent lemma then givesf(_t+1)≤f(_t) -η/41-/1-/√()∑_i=1^d _t,i^2/√(_t,i)+2η√(1-)σ_0/(1-^2/)^2∑_i=1^d_t,i^2/_t,i+ η4/(1-/√())^2 √()σ_1^2∑_i=1^d( _t-1,i^2/√(_t,i)- _t,i^2/√(_t+1,i))+∑_i=1^d 2η√(1-β_2)σ_0/(1-β_1)(1-/√())[ (|_t,i|^2/_t,i)]+ 64(1+σ_1^2)σ_1^2L^2η^3d/^2(1-/√())^3(1-)σ_0√( 1-)‖1/√(_t-1)⊙_t-1‖^2+2η√(1-)^2σ_0/(1-)(1-/√())∑_i=1^d [ |_t-1,i|^2/_t-1,i] +L[ 4( /√()/1-/√())^2η^2‖1/√(_t-1)⊙_t-1‖^2 +3(1/1-/√())^2η^2‖1/√(_t)⊙_t ‖^2 ]. The proof is completed.Let all conditions in Theorem <ref> hold. Then,∑_t=1^T+1∑_i=1^d √(_t,i)≤ 2(T+1) ∑_i=1^d√(_0,i +σ_0^2)+24dσ_1^2 C_1/√()ln dσ_1^2 C_1/√()+ 12σ_1^2/√()C_2.To begin with, we have that∑_t=1^T[|_t,i|^2/√(_t,i)1_| G_t,i|≥σ_0/σ_1]≤∑_t=1^T[|_t,i|^2/√(_t,i)]. 
On the other hand, we have that|_t,i|^2/√(_t,i)1_| G_t,i|≥σ_0/σ_1≥2/3|_t,i|^2+1/3σ^2_0/σ_1^2/√(_t,i)1_| G_t,i|≥σ_0/σ_1≥/3σ_1^2𝔼^|_t|_t,i|^2+1-/3σ^2_0/σ_1^2/√(_t,i)1_| G_t,i|≥σ_0/σ_1= ^|_t/3σ_1^2|_t,i|^2+1-/3σ_1^2σ_0^2/√(_t,i)1_| G_t,i|≥σ_0/σ_1≥√()^|_t/3σ_1^2|_t,i|^2+1-/3σ_1^2σ_0^2/√(_t+1,i)+√(_t,i)1_| G_t,i|≥σ_0/σ_1. As a conclusion, ∑_t=1^T[|_t,i|^2/√(_t,i)1_| G_t,i|≥σ_0/σ_1] ≥√()∑_t=1^T[/3σ_1^2|_t,i|^2+1-/3σ_1^2σ_0^2/√(_t+1,i)+√(_t,i)1_| G_t,i|≥σ_0/σ_1]≥ √()/3(1-)σ_1^2∑_t=1^T [(√(_t+1,i)-√(_t,i))1_| G_t,i|≥σ_0/σ_1]. On the other hand, as stated in Section <ref>, we define {_t,i}_t=0^∞ as _0,i= _0,i, _t,i= _t-1,i+(1-)| g_t,i|^21_|_t,i|< σ_0^2/σ_1^2. One can easily observe that _t,i≤_t,i, and thus∑_t=1^T [(√(_t+1,i)-√(_t,i))1_|_t,i|< σ_0^2/σ_1^2] = ∑_t=1^T (√(^2 _t-1,i+ (1-)| g_t,i|^2 + (1- )σ_0^2)-√((_t-1,i+ (1- )σ_0^2)))1_|_t,i|< σ_0^2/σ_1^2 ≤ ∑_t=1^T (√(^2 _t-1,i+ (1-)| g_t,i|^2 + (1- )σ_0^2)-√((_t-1,i+ (1- )σ_0^2)))1_|_t,i|< σ_0^2/σ_1^2 ≤ ∑_t=1^T (√(^2 _t-1,i+ (1-)| g_t,i|^21_|_t,i|< σ_0^2/σ_1^2 + (1- )σ_0^2)-√((_t-1,i+ (1- )σ_0^2))) = ∑_t=1^T (√(_t,i + (1- )σ_0^2)-√((_t-1,i+ (1- )σ_0^2))) = √(_t,i + (1- )σ_0^2)+(1-√())∑_t=1^T-1√(_t,i + (1- )σ_0^2) - √((_0,i+ (1- )σ_0^2)).All in all, summing the above two inequalities together, we obtain that√(_t+1,i)+(1-√())∑_t=2^T√(_t,i) - √(_1,i)= ∑_t=1^T (√(_t,i)-√(_t-1,i))≤ ∑_t=1^T (√(_t,i)-√(_t-1,i))1_| G_t,i|≥σ_0/σ_1 +∑_t=1^T (√(_t,i)-√(_t-1,i))1_|_t,i|< σ_0^2/σ_1^2 ≤ 3(1-)σ_1^2/√()∑_t=1^T[|_t,i|^2/√(_t,i)]+ √(_t,i + (1- )σ_0^2)+(1-√())∑_t=1^T-1√(_t,i + (1- )σ_0^2) - √((_0,i+ (1- )σ_0^2)).Since ∀ t,√(_t,i + (1- )σ_0^2)≤√(_t,i + (1- )σ_0^2)≤√(σ_0^2+_0,i),combining with √(_1,i)=√((_0,i+ (1- )σ_0^2)) and √(_t+1,i) =√(_t,i + (1- )σ_0^2)≥√(_t,i + (1- )σ_0^2), we obtain(1-√())∑_t=2^T+1√(_t,i)≤ 3(1-)σ_1^2/√()∑_t=2^T[|_t,i|^2/√(_t,i)]+ +(1-√())∑_t=1^T√(_t,i + (1- )σ_0^2) ≤ 3(1-)σ_1^2/√()∑_t=2^T[|_t,i|^2/√(_t,i)]+ (1-√())T√(σ_0^2+_0,i) .Leveraging Eq. (<ref>), we then obtain that∑_t=1^T+1∑_i=1^d √(_t,i) ≤ 3(1+√())σ_1^2/√()∑_t=1^T[|_t,i|^2/√(_t,i)]+ (T+1)∑_i=1^d √(_0,i + σ_0^2) ≤ 6σ_1^2/√()(1-/√()/1-8/η f(_1)+32/(1-/√())^2∑_i=1^d _1,i^2/√(_1,i)+ C_1 ∑_i=1^d(ln(_T,i/_0,i) - T ln))+(T+1) ∑_i=1^d√(_0,i + σ_0^2) ≤ 6σ_1^2/√()(1-/√()/1-8/η f(_1)+32/(1-/√())^2∑_i=1^d _1,i^2/√(_1,i)+ 2C_1 ∑_i=1^d(ln(∑_t=1^T+1√(_t,i)/√(_0,i)) - T ln))+(T+1) ∑_i=1^d√(_0,i +σ_0^2) ≤ 6σ_1^2/√()(1-/√()/1-8/η f(_1)+32/(1-/√())^2∑_i=1^d _1,i^2/√(_1,i)+ 2C_1 ∑_i=1^d(ln(∑_t=1^T+1∑_j=1^d√(_t,j)/√(_0,i)) - T ln))+(T+1) ∑_i=1^d√(_0,i +σ_0^2),where in the last inequality we use the concavity of h(x)=ln x. Solving the above inequality with respect to ∑_t=1^T+1∑_i=1^d √(_t,i) then gives ∑_t=1^T+1∑_i=1^d √(_t,i)≤ 2(T+1) ∑_i=1^d√(_0,i +σ_0^2)+24dσ_1^2 C_1/√()ln dσ_1^2 C_1/√()+12σ_1^2/√()(1-/√()/1-8/η f(_1)+32/(1-/√())^2∑_i=1^d _1,i^2/√(_1,i)+ 2C_1 ∑_i=1^d(ln(1/√(_0,i)) - T ln)).The proof is then completed by applying the definition of C_2. §.§ Proof of Theorem <ref>Summing the inequality in Lemma <ref> over t from 1 to T and collecting the terms, we obtainf(_T+1)≤f(_1)- η/41-/1-/√()∑_t=1^T∑_i=1^d _t,i^2/√(_t,i) +η4(1-)/(1-/√())^2 √()σ_1^2∑_t=1^T∑_i=1^d( _t-1,i^2/√(_t,i)- _t,i^2/√(_t+1,i)) + C̃∑_t=1^T‖1/√(_t)⊙_t ‖^2≤ f(_1)- η/41-/1-/√()∑_t=1^T∑_i=1^d _t,i^2/√(_t,i) +η4(1-)/(1-/√())^2 √()σ_1^2 ∑_i=1^d( _1,i^2/√(_1,i). . 
+(1/-1)∑_t=1^T-1_t,i^2/√(_t+1,i))+ C̃∑_t=1^T‖1/√(_t)⊙_t ‖^2(*)≤f(_1)- η/41-/1-/√()∑_t=1^T∑_i=1^d _t,i^2/√(_t,i) +η4(1-)/(1-/√())^2 √()σ_1^2 ∑_i=1^d_1,i^2/√(_1,i) +η/81-/1-/√()∑_t=1^T-1_t,i^2/√(_t,i) + C̃∑_t=1^T‖1/√(_t)⊙_t ‖^2(∘)≤f(_1)- η/81-/1-/√()∑_t=1^T∑_i=1^d _t,i^2/√(_t,i) +η4(1-)/(1-/√())^2 √()σ_1^2 ∑_i=1^d_1,i^2/√(_1,i) + C̃(1-)^2/(1-/√())^2(1-)∑_i=1^d(ln(_T,i/_0,i) - T ln),where we defineC̃≜ 4Lη^2(1+/√()/1-/√())^2+2η√(1-)^2σ_0/(1-)(1-/√())+64(1+σ_1^2)σ_1^2L^2η^3d/^2(1-/√())^3(1-)σ_0√( 1-).to simplify the notations, inequality (*) is due to that _t+1,i≥_t+1,i and β_1 ≤√()-8σ_1^2(1-)^-2, and inequality (∘) is due to Lemma <ref>. Simple rearrangement of the above inequality then gives∑_t=1^T[ ‖1/√(_t)⊙_t‖^2] ≤ 1-/√()/1-8/η f(_1)+32/(1-/√())^2∑_i=1^d _1,i^2/√(_1,i)+C_1 ∑_i=1^d(ln(_T,i/_0,i) - T ln).Then, according to Cauchy's inequality, we have(∑_t=1^T ‖_t‖_1)^2 ≤(∑_t=1^T[ ‖1/√(_t^1)⊙_t‖^2]) (∑_t=1^T[ ‖√(_t^1)‖^2]).Meanwhile, by Lemma <ref>, we have∑_t=1^T∑_i=1^d √(_t,i)≤ 2(T+1) ∑_i=1^d√(_0,i +σ_0^2)+24dσ_1^2 C_1/√()ln dσ_1^2 C_1/√()+ 12σ_1^2/√()C_2.Combining the above inequality and Eq. (<ref>) gives(∑_t=1^T ‖_t‖_1)^2≤ (1-/√()/1-8/η f(_1)+32/(1-/√())^2∑_i=1^d _1,i^2/√(_1,i) +C_1 ∑_i=1^d(ln(_T,i/_0,i) - T ln))×(2(T+1) ∑_i=1^d√(_0,i +σ_0^2)+24dσ_1^2 C_1/√()ln dσ_1^2 C_1/√()+ 12σ_1^2/√()C_2) ≤ (C_2+2C_1 ∑_i=1^d(ln( ∑_t=1^T∑_i=1^d √(_t,i)) )) ×(2(T+1) ∑_i=1^d√(_0,i +σ_0^2)+24dσ_1^2 C_1/√()ln dσ_1^2 C_1/√()+ 12σ_1^2/√()C_2)≤ (C_2+2C_1 ∑_i=1^d(ln( 2(T+1) ∑_i=1^d√(_0,i +σ_0^2)+24dσ_1^2 C_1/√()ln dσ_1^2 C_1/√()+ 12σ_1^2/√()C_2) )) ×(2(T+1) ∑_i=1^d√(_0,i +σ_0^2)+24dσ_1^2 C_1/√()ln dσ_1^2 C_1/√()+ 12σ_1^2/√()C_2) .The proof is then completed. § PROOF OF THEOREM <REF>To start with, we have that|_t,i|^2/√(_t,i)1_| G_t,i|≥σ_0/σ_1≥1/2σ_1^2𝔼^|_t|_t,i|^2/√(_t,i)1_| G_t,i|≥σ_0/σ_1= 1/2σ_1^2𝔼^|_t|_t,i|^2/√(_t-1,i+(1-)σ_0^2)1_| G_t,i|≥σ_0/σ_1 ≥ 1/2σ_1^2√(1-)𝔼^|_t|_t,i|^2/√(_0,i/1-+∑_s=1^T | g_s,i|^2+σ_0^2)1_| G_t,i|≥σ_0/σ_1,where the last inequality is due to that_t-1,i= (1-)∑_s=1^t-1^t-s|_s,i|^2+^t _0,i≤ (1-)∑_s=1^T|_s,i|^2+_0,i.Furthermore, we have σ_0^2+_0,i/1-/√(_0,i/1-+∑_s=1^T | g_s,i|^2+σ_0^2)+∑_t=1^T𝔼|_t,i|^2/√(_0,i/1-+∑_s=1^T | g_s,i|^2+σ_0^2)1_|_t,i|< σ_0/σ_1 ≤ σ_0^2+_0,i/1-/√(_0,i/1-+∑_s=1^T | g_s,i|^21_|_s,i|< σ_0/σ_1+σ_0^2)+∑_t=1^T𝔼|_t,i|^2/√(_0,i/1-+∑_s=1^T | g_s,i|^21_|_s,i|< σ_0/σ_1+σ_0^2)1_|_t,i|< σ_0/σ_1= √(_0,i/1-+∑_s=1^T | g_s,i|^21_|_s,i|< σ_0/σ_1+σ_0^2)≤√(_0,i/1-+∑_s=1^T | g_s,i|^21_|_s,i|< σ_0/σ_1+σ_0^2) ≤ √(_0,i/1-+2σ_0^2T+σ_0^2) . Conclusively, we obtain √(_0,i/1-+∑_s=1^T | g_s,i|^2+σ_0^2)= σ_0^2+_0,i/1-/√(_0,i/1-+∑_s=1^T | g_s,i|^2+σ_0^2)+∑_t=1^T𝔼|_t,i|^2/√(_0,i/1-+∑_s=1^T | g_s,i|^2+σ_0^2)1_|_t,i|< σ_0/σ_1+∑_t=1^T𝔼|_t,i|^2/√(_0,i/1-+∑_s=1^T | g_s,i|^2+σ_0^2)1_|_t,i|≥σ_0/σ_1 ≤ √(_0,i/1-+2σ_0^2T+σ_0^2)+2√(1-)σ_1^2∑_t=1^T|_t,i|^2/√(_t,i)1_|_t,i|≥σ_0/σ_1 ≤ √(_0,i/1-+2σ_0^2T+σ_0^2)+2√(1-)σ_1^2∑_t=1^T|_t,i|^2/√(_t,i). Secondly, as → 1 as T→∞, ≤√()-8σ_1^2(1-)^-2 holds for large enough T, and thus Theorem <ref> holds. Applying the value of , , and η to Eq. (<ref>), we obtain that ∑_t=1^T[ ‖1/√(_t)⊙_t‖^2] ≤D_2√(T)+D_1√(T)∑_i=1^d ln_T,i+64/(1-c)^2∑_i=1^d _1,i^2/√(_1,i) . Summing Eq. 
(<ref>) with respect to i then gives∑_i=1^d√(_0,i/1-+∑_s=1^T | g_s,i|^2+σ_0^2) ≤ ∑_i=1^d√(_0,i/1-+2σ_0^2T+σ_0^2)+2√(1-)σ_1^2∑_i=1^d∑_t=1^T|_t,i|^2/√(_t,i) ≤ ∑_i=1^d√(_0,i/1-+2σ_0^2T+σ_0^2)+2D_2σ_1^2√(b)+2D_1σ_1^2√(b)∑_i=1^d ln_T,i+128σ_1^2√(b)/(1-c)^2√(T)∑_i=1^d _1,i^2/√(_1,i)= ∑_i=1^d√(_0,i/1-+2σ_0^2T+σ_0^2)+2D_2σ_1^2√(b)+4D_1σ_1^2√(b)∑_i=1^d ln√(_T,i)+128σ_1^2√(b)/(1-c)^2√(T)∑_i=1^d _1,i^2/√(_1,i) ≤ ∑_i=1^d√(_0,i/1-+2σ_0^2T+σ_0^2)+2D_2σ_1^2√(b)+4D_1σ_1^2√(b)∑_i=1^d ln(∑_i=1^d√(1-)√(_0,i/1-+∑_s=1^T | g_s,i|^2+σ_0^2))+128σ_1^2√(b)/(1-c)^2√(T)∑_i=1^d _1,i^2/√(_1,i) ≤ ∑_i=1^d√(_0,i/1-+2σ_0^2T+σ_0^2)+2D_2σ_1^2√(b)+4D_1σ_1^2√(b)∑_i=1^d ln(∑_i=1^d√(1-)√(_0,i/1-+∑_s=1^T | g_s,i|^2+σ_0^2))+128σ_1^2√(b)/(1-c)^2√(T)∑_i=1^d _1,i^2/√(_1,i),where the second inequality is due to Eq. (<ref>), the second-to-last inequality is due to Eq. (<ref>), and the last inequality is due to Jensen's inequality. Solving the above ineqaulity with respect to √(1-)∑_i=1^d√(_0,i/1-+∑_s=1^T | g_s,i|^2+σ_0^2) then gives√(1-)∑_i=1^d√(_0,i/1-+∑_s=1^T | g_s,i|^2+σ_0^2) ≤2∑_i=1^d√(_0,i+3bσ_0^2)+4D_2σ_1^2b/√(T)+256σ_1^2b/(1-c)^2T∑_i=1^d _1,i^2/√(_1,i) +16 D_1σ_1^2b/√(T)ln(e+4 D̃σ_1^2b/√(T)). Therefore, by Cauchy's inequality, we have[ ∑_t=1^T‖_t‖_1]^2 ≤(∑_t=1^T[ ‖1/√(_t^1)⊙_t‖^2])(∑_t=1^T∑_i=1^d √(_t,i)). Since ∑_t=1^T∑_i=1^d √(_t,i)≤∑_t=1^T∑_i=1^d √(_t-1,i+(1-)σ_0^2)≤ T∑_i=1^d√(1-)√(_0,i/1-+∑_s=1^T | g_s,i|^2+σ_0^2),we have [ ∑_t=1^T‖_t‖_1]^2≤ (2T∑_i=1^d√(_0,i+3bσ_0^2)+4D_2σ_1^2b√(T)+256σ_1^2b/(1-c)^2∑_i=1^d _1,i^2/√(_1,i) +16 D_1σ_1^2b√(T)ln(e+4 D̃σ_1^2b/√(T))) ×(D_2√(T)+D_1√(T)∑_i=1^d ln_T,i+64/(1-c)^2∑_i=1^d _1,i^2/√(_1,i))≤ (2T∑_i=1^d√(_0,i+3bσ_0^2)+4D_2σ_1^2b√(T)+256σ_1^2b/(1-c)^2∑_i=1^d _1,i^2/√(_1,i) +16 D_1σ_1^2b√(T)ln(e+4 D̃σ_1^2b/√(T))) ×(2D_1√(T)∑_i=1^dln( 2∑_i=1^d√(_0,i+3bσ_0^2)+4D_2σ_1^2b/√(T)+256σ_1^2b/(1-c)^2T∑_i=1^d _1,i^2/√(_1,i) +16 D_1σ_1^2b/√(T)ln(e+4 D̃σ_1^2b/√(T))). + .64/(1-c)^2∑_i=1^d _1,i^2/√(_1,i)+D_2√(T)).The proof is completed. § EXPERIMENTS In this section, we give the effect of momentum in Adam as a complementary for our theory, since our theory cannot give better results using momentum (discussed in Section <ref>). Experiment setting We use Adam training on Cifar 10 with ResNet18<cit.> and VGG13<cit.> and wikitext2 with two layer Transformer<cit.> for 50 epoch and record its training loss at 50 epoch as measure for the optimization speed. Smaller loss indicates better optimization.The batch size is set 1024 for Cifar10 dataset and 100 for WikiText2 Dataset.The results are given in Table <ref>.Our discoveries are: * Momentum can benefit the optimization when the β is not too large.* For all datasets, larger β_1 will worse the optimization.
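As a rough illustration of the experiment setting described above (not the exact script used for the reported results), the following Python sketch sweeps β_1 in Adam on CIFAR-10 with ResNet18 and records the training loss of the last epoch; the learning rate and β_2 are assumptions, as they are not stated in the text.

```python
# Illustrative sketch: sweep beta_1 in Adam on CIFAR-10 with ResNet18 and
# record the training loss of the last epoch (smaller is better).
# The learning rate and beta_2 are assumptions, not values from the paper.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

def last_epoch_train_loss(beta1, epochs=50, lr=1e-3, batch_size=1024):
    device = "cuda" if torch.cuda.is_available() else "cpu"
    data = torchvision.datasets.CIFAR10(root="./data", train=True,
                                        download=True, transform=T.ToTensor())
    loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=True)
    model = torchvision.models.resnet18(num_classes=10).to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr, betas=(beta1, 0.999))
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        running, seen = 0.0, 0
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
            running += loss.item() * x.size(0)
            seen += x.size(0)
    return running / seen   # average training loss over the last epoch

for beta1 in (0.0, 0.5, 0.9, 0.99):
    print(beta1, last_epoch_train_loss(beta1))
```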
http://arxiv.org/abs/2310.17998v1
{ "authors": [ "Bohan Wang", "Jingwen Fu", "Huishuai Zhang", "Nanning Zheng", "Wei Chen" ], "categories": [ "cs.LG", "math.OC" ], "primary_category": "cs.LG", "published": "20231027091658", "title": "Closing the Gap Between the Upper Bound and the Lower Bound of Adam's Iteration Complexity" }
On the Fidelity Distribution of Link-level Entanglements under Purification Karim Elsayed^*, Wasiur R. KhudaBukhsh, Amr Rizk^*^*Faculty of Computer Science, University of Duisburg-Essen, Germany School of Mathematical Sciences, University of Nottingham, UK================================================================================================================================================================================================Quantum entanglement is the key to quantum communications over considerable distances. The first step for entanglement distribution among quantum communication nodes is to generate link-level Einstein–Podolsky–Rosen (EPR) pairs between adjacent communication nodes. EPR pairs may be continuously generated and stored in a few quantum memories to be ready for utilization by quantum applications.A major challenge is that qubits suffer from unavoidable noise due to their interaction with the environment, which is called decoherence.This decoherence results in the known exponential decay model of the fidelity of the qubits with time, thus, limiting the lifetime of a qubit in a quantum memory and the performance of quantum applications.In this paper, we evaluate the fidelity of the stored EPR pairs under two opposite dynamical and probabilistic phenomena, first, the aforementioned decoherence and second purification, i.e. an operation to improve the fidelity of an EPR pair at the expense of sacrificing another EPR pair. Instead of applying the purification as soon as two EPR pairs are generated, we introduce a Purification scheme Beyond the Generation time (PBG)of two EPR pairs. We analytically show the probability distribution of the fidelity of stored link-level EPR pairs in a system with two quantum memories at each node allowing a maximum of two stored EPR pairs. In addition, we apply a PBG scheme that purifies the two stored EPR pairs upon the generation of an additional one. We finally provide numerical evaluations of the analytical approach and show the fidelity-rate trade-off of the considered purification scheme.§ INTRODUCTION Quantum entanglement lies at the core of the quantum Internet which enables quantum applications including quantum communications <cit.>, quantum key distribution <cit.> and distributed quantum computation <cit.>.A major challenge is that qubits suffer from unavoidable decoherence,which results in a rapid decay in the quality of the entangled Einstein–Podolsky–Rosen (EPR) qubit pair with time <cit.>. A corresponding quality metric, also denoted fidelity, measures the closeness between the noisy EPR pairs and the original (desired) one.In the phase damping decoherence model, the fidelity decays exponentially with time <cit.>.A canonical model for quantum networks with quantum memories or queues assumes that EPR pairs are continuously generated and stored to be ready to respond to transmission requests of qubits resulting in a high-capacity network <cit.>. The quantum network must guarantee a sufficient fidelity for the desired application and due to the probabilistic nature of quantum operations the higher the fidelity, the better the quality attained by the application. To this end, the goal of network nodes is to generate high-fidelity entanglements, ensure the validity of the stored ones and apply purification to them. The generation of high-fidelity entanglements through purification involves consuming a smaller or equal fidelity EPR pair to improve the fidelity of another pair. 
In <cit.>, different recurrence purification schemes are proposed that use multiple purification rounds to generate one very high-fidelity EPR pair. Specifically, we start from the purification scheme in <cit.> as a baseline to compute the fidelity distribution. Also note that some works consider entanglement cut-off times, i.e., a deadline after which the fidelity is assumed to be below a required threshold, to ensure a minimum validity of the stored EPR pairs. For example, the work in <cit.> assumes the cut-off times are probabilistic and modeled by an exponential distribution, based on which the EPR pairs stored in the quantum queue are dropped. In this paper, we address the gap in the literature on the derivation of the fidelity steady-state distribution of stored EPR pairs under purification. Purification is usually treated as a mechanism to initially generate high-fidelity EPR pairs <cit.>. Its application beyond the initial generation, on the stored EPR pairs, is rarely considered. The purification of the stored EPR pairs has the potential to improve the fidelity at the expense of reducing the average number of EPR pairs in the system, which inherently leads to a rate-fidelity trade-off. We denote the purification scheme applied beyond the generation of an EPR pair as PBG. In this paper, we derive the steady-state probability distribution of the fidelity of the link-level entanglements in a system with a few quantum memories, in isolation from any request process. In addition, we apply a PBG scheme that distills the stored EPR pairs before storing a newly generated pair when the quantum memory is full. To the best of our knowledge, this is the first work that evaluates the fidelity distribution of the stored EPR pairs in the quantum memories and the effect of PBG schemes. The remainder of the paper is structured as follows: We first describe the model and problem statement in Sect. <ref>. In Sect. <ref>, we derive the steady-state fidelity distribution of the stored EPR pairs. We numerically evaluate the proposed approach in Sect. <ref> and summarize the related work in Sect. <ref> before concluding the paper and discussing open problems in Sect. <ref>. § MODEL AND PROBLEM STATEMENT We model the entanglement generation as Bernoulli trials with success probability p_g within a time slot t, similar to <cit.>. One rationale for treating the entanglement generation as probabilistic is that the optical fiber is assumed to absorb the transmitted qubit from one node to the other with probability 1-p_g = 1-e^-η l, where l is the fiber length between the communication nodes and η is the attenuation coefficient <cit.>. This is associated with the link-level entanglement generation schemes that require qubit transmission through a fiber of length l, as discussed in <cit.>. The scheme that we consider in this paper involves, first, the preparation of an EPR pair at one node before sending half of it, i.e., one of the two entangled qubits, to the other node; hence, l denotes the link length. In addition, each entanglement generation attempt ideally takes a duration t = l/c, where c is the speed of light. Following the formulation from <cit.>, the fidelity of an EPR pair at time t_0 decays with time due to decoherence as F(t) = 1/2 (1+(2F(t_0)-1) e^-(t-t_0)/t_c), where 1/t_c is the decoherence rate and F(t_0) is the fidelity of the EPR pair at time t_0. In this work, we assume perfect EPR generation. Note that this assumption does not affect our analytical approach to obtain the fidelity distribution.
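For concreteness, a minimal Python sketch of these two model ingredients, the generation success probability over a fibre of length l and the exponential fidelity decay, is given below; η = 0.15 per km and t_c = 1 are the values used later in the numerical evaluation, and the remaining numbers are illustrative.

```python
# Minimal sketch of the generation and decoherence model.
import numpy as np

def generation_success_prob(l, eta=0.15):
    """p_g = exp(-eta * l): the transmitted qubit survives a fibre of length l (km)."""
    return np.exp(-eta * l)

def fidelity(t, t0, F_t0, t_c=1.0):
    """F(t) = 1/2 * (1 + (2 F(t0) - 1) * exp(-(t - t0) / t_c))."""
    return 0.5 * (1.0 + (2.0 * F_t0 - 1.0) * np.exp(-(t - t0) / t_c))

# Example: a perfect pair generated at t0 = 0 decays towards 1/2 over time.
print(generation_success_prob(l=15))
for t in (0.0, 0.5, 1.0, 2.0):
    print(t, fidelity(t, t0=0.0, F_t0=1.0))
```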
We assume a PBG scheme to maintain high fidelity. This entails attempting to purify the two stored EPR pairs at the moment of a successful generation of an additional one.Instead of dropping the lowest fidelity EPR pair to be replaced by the freshly generated one, we use it to purify the other stored EPR pair. Specifically, we consider the purification scheme in <cit.>, where the fidelity of the purified EPR pair becomesF_p(F_1,F_2)=F_1 F_2/F_1 F_2+(1-F_1)(1-F_2),with a purification success probability p_s given byp_s(F_1,F_2)=F_1 F_2+(1-F_1)(1-F_2).Here, F_1 and F_2 denote the fidelity of the first and the second pair, respectively. In Fig. <ref>, we illustrate the model of the purification protocol using a sample path realization of the fidelity of the EPR pairs over time. We assume the system contains one EPR pair at time t=0 and its fidelity decays with time due to decoherence as in (<ref>). As per the Bernoulli assumption on the generation from above the inter-generation times {τ_i}_i come from a geometric distribution denoting the time between two successful EPR pair generations.When an EPR pair is generated and the quantum memories are full, purification takes place between the two stored EPR pairs. The figure shows the improved fidelity obtained from purification as well as the random event of purification failure leading to losing the two stored EPR pairs.Next, we calculate the steady-state distribution of the fidelity of the EPR pairs in the system with a few quantum memories. The hardness of the problem originates from the hardness of tracking the fidelity due to its dependence on the purification outcome which in turn recursively depends on the fidelity at the previous purification attempts. § APPROACHMotivated by the Bernoulli modeling of the EPR generation in Sect. <ref>, our key idea for calculating the fidelity distribution is to track the fidelity decay at each time slot by discretizing the fidelity proportional to the time slots.This allows modeling the fidelity using a discrete time Markov chain (DTMC).We divide the fidelity range into N+1 discrete levelsproportional to its decay ranging from the lowest fidelity value F_ϵ to the initial fidelity of the generated EPR pair F_0 asF( n)=1/2(1+(2F_0-1) ^-α n ),where n ∈{0,1,...,N} is the time duration elapsed since theentanglement generation, which we denote the age (given in discrete time) and α:= t / t_c denotes the decoherence coefficient in one time slot. We do not consider the fidelity beyond the lowest value F_ϵ. Since the age n uniquely defines the fidelity level, we model the fidelity level as a result of a successful purification of two EPR pairs by an EPR pair with equal or smaller age n_p according ton_p( n_1, n_2) = max(⌈-1/αln(2F_p( n_1, n_2)-1/2F_0-1)⌉ ,0 ) ,where F_p( n_1, n_2) is the fidelity after purification of the two EPR pairs from (<ref>) and n_1 and n_2 are the ages corresponding to the fidelities of the stored EPR pairs F_1(n) and F_2(n), respectively. Here F_i(n) is the fidelity at slot n on the discrete time lattice.Since F_p( n_1, n_2) may not correspond to one of the discrete fidelity levels, we use ⌈ . ⌉ to map the purification age to the next larger integer to lower bound the purified fidelity. In case F_p>F_0, which may occur for small initial EPR fidelity, the maximum operation in (<ref>) maintains n_p≥ 0 corresponding to the highest fidelity F_0. Note that the reduced age due to purification does not reflect the actual time the EPR pair spent in the memory. 
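A small numerical sketch of the quantities just introduced, the purification map F_p, its success probability p_s, the discretized fidelity F(n) and the purified age n_p, may help make the discretization concrete; F_0, α and the ages used in the example are illustrative assumptions.

```python
# Sketch of the purification map, its success probability, and the age
# discretization used below.  Example parameters are illustrative.
import numpy as np

def F_of_age(n, F0, alpha):
    """F(n) = 1/2 * (1 + (2 F_0 - 1) * exp(-alpha * n))."""
    return 0.5 * (1.0 + (2.0 * F0 - 1.0) * np.exp(-alpha * n))

def F_purified(F1, F2):
    """F_p = F1 F2 / (F1 F2 + (1 - F1)(1 - F2))."""
    return F1 * F2 / (F1 * F2 + (1.0 - F1) * (1.0 - F2))

def p_success(F1, F2):
    """p_s = F1 F2 + (1 - F1)(1 - F2)."""
    return F1 * F2 + (1.0 - F1) * (1.0 - F2)

def purified_age(n1, n2, F0, alpha):
    """n_p = max(ceil(-(1/alpha) * ln((2 F_p - 1) / (2 F_0 - 1))), 0)."""
    Fp = F_purified(F_of_age(n1, F0, alpha), F_of_age(n2, F0, alpha))
    return max(int(np.ceil(-np.log((2.0 * Fp - 1.0) / (2.0 * F0 - 1.0)) / alpha)), 0)

# Example: purifying pairs of ages 3 and 7 slots yields an effectively younger pair.
F0, alpha = 1.0, 0.05
F1, F2 = F_of_age(3, F0, alpha), F_of_age(7, F0, alpha)
print(F_purified(F1, F2), p_success(F1, F2), purified_age(3, 7, F0, alpha))
```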
In our model, the purified pair obtains a fidelity value from (<ref>) with success probability (<ref>) that is equivalent to the fidelity of an EPR with a later generation time.Hence, as shown in <ref> the age n is shortened accordingly through the purification operation. Similarly, we calculate the maximum age N that achieves the lowest fidelity threshold according to F(N)=F_ϵ asN= ⌈-1/αln(2F_ϵ-1/2F_0-1)⌉. Note that the fidelity F_1(n) always represents the larger fidelity EPR pair out of the two stored ones when the memories are full and is exactly calculated using (<ref>). Hence, right after a fidelity jump in Fig. <ref>, F_2(n) represents the older EPR pair and the fidelity of the only EPR pair in the system when it is not full (cf. the figure). The value of F_2(n) is quantized according to (<ref>) during purification.§.§ DTMC model of the age of the stored EPR pairs We model the fidelity of the EPR pairs stored in the system by a DTMC with states ( n_1, n_2) ∼ (F_1(n),F_2(n)) representing their age such that n_2 always represents the oldest (smallest fidelity) EPR pair in the system. We assume that the system has initially one EPR pair with perfect fidelity, thus the initial system state is (-∞,0) at time n=0, where -∞ stands for the non-existing second EPR pair. We illustrate the system DTMC in Fig. <ref>, where we denote the state transitions to be either forward or backward. The forward transitions represent the time evolution before attempting purification, i.e., the age progression of EPR pairs.We summarize the forward transitions as(i,j)→(min{i+1,N},min{j+1,N }) w.p.1-p_g , (-∞,j ) →(0,min{j+1,N }) w.p.p_g .The backward transitions are a result of a purification attempt as in Success: (i,j ) → (0, n_p(i,j) ) w.p. p_gp_s(i,j),Fail:(i,j) → (-∞,0 ) w.p. p_g (1-p_s(i,j)), ∀ i ∈{0,1, ..., N},j≥ i ,where n_p(i,j) and p_s(i,j) are the age of the successfully purified EPR pair (<ref>) and the probability of purification success (<ref>) at state (i,j), respectively. The purification attempt occurs upon entanglement generation subject to a full quantum memory, thus it only appears when i≠ -∞. In case of a purification failure, the two stored EPR pairs are lost and only the newly generated EPR pair remains, thus the system state resets to (-∞,0).The backward transition probabilities are state-dependent since the success probability depends on the fidelity levels (<ref>), i.e., the age and α. We describe this dependence in Fig. <ref> by the dotted arrow representing the existence of a state-dependent transition from each state within a block to a corresponding state in the destination block.We define a block in Fig. <ref> to comprise the states within a horizontal row which represents the states S_m={(m,j)} ∀ j≥ m, j ∈{0,1,..,N}. Note that not only do the transition probabilities vary in the case of successful purification but also the destination state (0, n_p(i,j)). As illustrated the destination state is a function of the current state as well as α. Note that a careful choice of α, i.e., the time discretization with respect to the decoherence rate is crucial for the design of the DTMC.We represent the transition matrix of this Markov chain in terms of sub-matrices describing the transitions between blocks of the DTMC as depicted in Fig. 
<ref> with the states ordered as [(-∞,0) … (-∞,N) (0,0) … (0,N) …… (N,N) ] by𝐐= [ 0_N+1,1𝐗_0 𝐗_-∞0_N+1,N……0; f_00_N+1,N𝐃_0𝐗_0⋱⋱⋮; f_10_N,N𝐃_10_N,N𝐗_1⋱⋮;⋮⋮⋮⋱⋱0;⋮⋮⋮⋱0_2,2𝐗_N-1; f_N0_1,N𝐃_N0_1,N…0_1,21-p_g ] .The forward transition implies the transition from one block to the next one, thus resulting in the sparse matrix structure, where𝐗_m is an N-m+1 × N-m matrix, with 0 ≤ m ≤ N-1, representing the forward transitions in (<ref>). We express this matrix as 𝐗_m= (1-p_g) [ 𝐈_N-m; 0_1,N-m-1 1 ] , 0 ≤ m ≤ N-1 ,while the N+1 × N+1 matrix 𝐗_-∞ represents the forward transitions due to the successful generation of an EPR pair when only one EPR pair is stored, which we express as𝐗_-∞=[0_N+1,1| 1-𝐗_0] ,where [.|.] represents the column-wise concatenation operation. The probabilities of entanglement generation resulting in a failed purification attempt, thus the backward transitions in (<ref>), are represented by the N-m+1 × 1 vectors f_m:=p_g(1-[p_s(m,m),p_s(m,m+1),...., p_s(m,N) ]^T ),with p_s(i,j) being the probability of purification success at state (i,j) known from (<ref>). Additionally, 𝐃_m includes the backward transitions due to a successful purification expressed in (<ref>).We express the elements of the matrix representing the transition from state (m,j) to state (0,k) by𝐃_m[(m,j),(0,k)]=p_gp_s(m,j) 1_k= n_p(m,j),0 ≤ m ≤ N ,where 1 is the indicator function.§.§ Obtaining the fidelity distribution from the DTMC The classical steady-state solution to the DTMC to obtain the steady-state probability vector p involves solving the linear system of equations p^T 𝐐= p^T with the normalization condition p^T e_n_s = 1, where 𝐐 is the transition matrix, e_n_s is an all-one column vector of length n_s while n_s being the number of states. Since the number of equations in the linear system grows quadratically as O(N^2), we make use of the problem structure and derive next a reduced problem that requires solving only N+1 equations. We denote the probability of a state (i,j) as p_i,j and the column probability vector of the block states S_i as p_i. Moreover, we denote the part of the transition matrix representing the transitions from all the states to the states S_i, i.e., a block column in 𝐐, by 𝐐_i. For example 𝐐_-∞ and 𝐐_0 represent the first and the second block column in 𝐐 as given in (<ref>). Using the steady-state description from above and (<ref>), we express p_i in terms of 𝐐_i asp^T 𝐐_i=p^T_i . The key idea to reducing the system of equations to N+1 is by relating the steady-state probabilities of all the states in terms of p_0 using the structure of the DTMC and the transition matrix (<ref>). The structure of the DTMC implies that the states S_0 link all the states together. First, the states S_i,i >0 recursively originate from the forward transitions of S_0 as given by the corresponding block columns 𝐐_𝐢. Equipped with this idea, we can recursively derive p_i: i>0 in terms of p_0 using (<ref>) and (<ref>), i.e., the recursive structure starts from the third block column in (<ref>). This recursive structure leads top_i^T=p_i-1^T 𝐗_i-1= p_0^T ∏_m=0^i-1𝐗_m,0<i<N .Similarly, we derive p_N as p_N=p^T_N-1𝐗_𝐍-1 +(1-p_g)p_N =1/p_gp^T_N-1𝐗_𝐍-1 . 
Note that p_N represents only one state, i.e., p_N,N.We further derive p_N using the expression of p_N-1 in terms of p_0 from (<ref>) as p_N=1/p_gp^T_0∏_m=0^N-1X_m .Now, the state (-∞,0) is the destination of the states S_i,i≥ 0 as a result of the backward transitions capturing the failed purification attempt which is represented by the first column in 𝐐.Therefore, using (<ref>), we derive p_-∞,0 in terms of p_i,i≥ 0 asp_-∞,0= ∑_m=0^N p_m^T f_m = p_0^T f_0+p_N^T f_N +∑_m=1^N-1p_m^T f_m.Consequently, using the expressions in (<ref>) and (<ref>) we obtainp_-∞,0 =p_0^T [ f_0+ 1/p_g∏_m=0^N-1𝐗_m f_N + ∑_m=1^N-1∏_n=0^m-1𝐗_n f_m ] , := p_0^T Φ.Next, the states S_-∞ are recursively related by the forward transitions according to 𝐐_-∞ as p_-∞,j = (1-p_g) p_-∞,j-1=(1-p_g)^j p_-∞,0 ,0<j<N,p_-∞,N =1-p_g/p_g p_-∞,N-1 =(1-p_g)^N/p_g p_-∞,0 . Let ρ=[ 1,(1-p_g), …,(1-p_g)^N-1,(1-p_g)^N/p_g]^T, we rewrite p_-∞ in vector form in terms of p_0 as p_-∞^T= p_-∞,0 ρ^T= p_0^T Φρ^T . Finally, S_0 is the destination of all the states according to 𝐐_0, i.e., from S_-∞ according to the forward transitions in (<ref>) and from all the other states according to the backward transitions due to successful purification in (<ref>). Therefore, we describe this relation using (<ref>) asp_0^T=p_-∞^T 𝐗_-∞+ ∑_m=0^Np_m^T 𝐃_m .As a result, the linear system of equation to be solved is reduced to p_0^T Ψ = 0_N+1,1 ,p_0^T β = 1 ,where we derive Ψ using (<ref>), (<ref>) and (<ref>) in(<ref>) as Ψ:=𝐈_N-Φρ^T 𝐗_-∞-𝐃_0-∑_m=1^N-1∏_n=0^m-1𝐗_n 𝐃_m-1/p_g∏_m=0^N-1𝐗_m D_N , in addition to β using (<ref>), (<ref>) and (<ref>) in the normalization equation as β:= Φρ^T e_N+1+e_N+1+∑_m=1^N-1∏_n=0^m-1𝐗_n e_N-m+1+1/p_g∏_m=0^N-1𝐗_m . We rewrite (<ref>) in a short form as p_0^T [Ψ | β]= [0_1,N+1|1] .usingthe column-wise concatenation operation[.|.].The linear system of equations in (<ref>) is of rank N+1, where its solution yields the value of p_0. In addition, we obtain the other steady-state probabilities by substituting p_0 in (<ref>), (<ref>) and (<ref>). § NUMERICAL VALIDATIONIn this section, we validate our DTMC analytical approach with simulations and show the trade-off between the steady-state average fidelity of the stored EPR pairs defined as F̅_̅i̅:=n →∞lim[F_i(n)] and their average number for an increasing link length ranging between 5 and 30.We set the attenuation η=0.15/ and the decoherence time t_c=1 similar to <cit.>.We assume a perfect generation of EPR pairs and use a fidelity threshold F_ϵ=0.55. In Fig. <ref>, we validate the steady-state analytical cumulative mass function (CMF) of the older EPR pair with the result from the simulation for l=15. We illustrate in Fig. <ref> the rate-fidelity trade-off achieved by applying purification beyond generation to the stored EPR pairs in our system with two quantum memories. Intuitively, while purification improves the average steady-state fidelity of the two stored EPR pairs as shown in Fig. <ref>, it results in a reduction in the average number of the EPR pairs as shown in Fig. <ref> since we sacrifice one EPR pair for successful purification and both in case of failure. Note that F̅_1 represents the average fidelity of the higher fidelity EPR pair when it exists, i.e., when the quantum memories are full.§ RELATED WORKLink-level entanglement is the first step towards long distant quantum communication.The authors of <cit.> propose a physical and link layer protocol to provide a robust link-level entanglement generation between quantum communication nodes. 
Specifically, the proposed protocol organizes the link-level entanglement generation requests to ensure the fidelity desired by the applications at the expense of the increased generation time. Nitrogen vacancy (NV) centers in diamond platform <cit.> is one way to generate desired fidelity EPR pairs, where higher fidelity EPR pairs require longer generation times.A different method relies on recurrence purification algorithms, which use two EPR pairs per round to obtain a higher fidelity one. The work in <cit.> proposes an approach that purifies two EPR pairs using polarization mode dispersion and derives an expression of the improved fidelity as well as the probability of purification success. Several other works such as <cit.> provide quantum operation-based procedures for the purification of two EPR pairs. Starting from the Lindblad formalization of the qubit interaction with the environment, i.e., decoherence, as time first order differential equation <cit.>, the time dynamics of the fidelity can be analytically expressed for different phase damping models <cit.>. Using this concept, the works in <cit.> express the exponentially decaying fidelity over time of the EPR pairs. Hence, quantum communication nodes need to address the effect of the decoherence on the stored link-level EPR pairs by estimating their fidelity to ensure meeting the desired application requirements. For that reason, the works in <cit.> drop qubits from the memory after specific cut-off times to ensure a minimum fidelity requirement. Specifically, the authors in <cit.> probabilistically model the cut-off times by an exponential distribution. On the other hand, the work in <cit.> models a quantum queue without dropping qubits and derives an expression on the average queuing delay, thus it can estimate the average decoherence a qubit suffers in the queue. Overall, these works differ from this paper in the sense that we target the derivation of the steady-state distribution of the fidelity of EPR pairs on one link given a continuous purification after generation protocol.§ DISCUSSION & OPEN PROBLEMS In this paper, we used a DTMC to model the fidelity of the EPR pairs for a quantum communication link in a few (two) quantum memory system. We used this model to calculate the steady-state distribution of the fidelity of the EPR pairs.The model shows the improvement of the fidelity in terms of its distribution of the existing EPR pairs by applying a purification beyond generation protocol at the expense of a decrease in the average number of ready EPR pairs in the system. Extending the model to more than two quantum memories or a quantum memory queue is open for future work as well as incorporating a request process that consumes the EPR pairs as required by the desired application. Moreover, having more than a few EPR pairs stored in the queue raises a question about the appropriate purification beyond generation protocol and when it should be applied. Further, the problem of calculating the distribution of the continuous fidelity is open and is considered much more complex due to the stochastic behavior of the entanglement generation and purification as well as the dependence between the fidelity at the purification points resulting in random recursive equations. IEEEtran.bst
http://arxiv.org/abs/2310.18198v1
{ "authors": [ "Karim Elsayed", "Wasiur R. KhudaBukhsh", "Amr Rizk" ], "categories": [ "quant-ph", "cs.NI", "C.2; C.4" ], "primary_category": "quant-ph", "published": "20231027151619", "title": "On the Fidelity Distribution of Link-level Entanglements under Purification" }
Corresponding author [email protected] Department of Modern Physics, University of Science and Technology of China, China. This paper introduces the “comparison and replacement" (CNR) operation and proposes a general-purpose pure quantum approximate algorithm for combinatorial optimization problems. The CNR operation is implemented with the aid of t ancillary qubits. And our algorithm is constructed as a p-level divide-and-conquer structure based on the CNR operation. The quality of approximate optimization improves with the increase of p. And the practical performance improves and converges to the theoretical case as t increases. For sufficiently general problems, the algorithm works and quantitatively produces a solution which optimizes the problem well with considerably high probability. Furthermore, we illustrate the simulation results of our algorithm when applied to MAX-2-XOR instances and Gaussian weighted 2-edge graphs. The advantage of our algorithm is that, quantitatively, we can choose p to produce a solution near the optimum with a given probability of acceptance and evaluate the performance explicitly. A Quantum Approximate Optimization Algorithm Based on CNR Operation An Min Wang 2023-10-25 ===================================================================§ INTRODUCTION Quantum computation can accelerate approximate optimization and thus promotes the development of related algorithms, which hopefully overcome the difficulty of exponential inefficiency. The well-known beginning is that Edward Farhi et al. developed the quantum adiabatic algorithm (QAA)<cit.> and introduced the quantum approximate optimization algorithm (QAOA)<cit.>, which relies on parameters produced by a classical method. His team investigates applications of p-level QAOA in different combinatorial optimization problems such as typical instances<cit.>, the Sherrington-Kirkpatrick model<cit.> and the ensemble of k-edge graphs<cit.>. The results present outstanding properties such as concentration<cit.>. And the algorithm performance of QAOA shows quantum supremacy<cit.> compared with classical algorithms. Moreover, QAOA shows great application prospects in transportation science<cit.>, economics<cit.>, product synthesis in biochemistry<cit.>, specific physics systems<cit.>, etc. All these achievements motivate us to construct a general-purpose pure quantum approximate optimization algorithm for combinatorial optimization problems in this paper. Combinatorial optimization problems can be quantified with a cost function C(z) defined on n-bit strings z=(z_1z_2⋯ z_n) ∈{0,1}^n<cit.>. In this framework, approximate optimization asks for a string z^* for which C(z^*) is close to the absolute minimum C_min. In a quantum computer, the CNR operation works on the tensor product of two quantum registers, the target register T_qreg and the support register S_qreg. Each register works in a 2^n dimensional Hilbert space spanned by n-qubit computational basis vectors {|z⟩}, which correspond bit-wise to the n-bit strings {z}. In principle, the cost function can be assigned to a Hermitian operator 𝒞 that is diagonal in the computational basis vectors, defined as 𝒞|z⟩ = C(z)|z⟩. The two registers are not equivalent. The CNR operation is designed as a procedure that first indicates, in each tensor product component, which of the strings stored by the two registers has a cost function value closer to C_min, and then overwrites T_qreg with the corresponding computational basis vector. Quantum parallelism provides a feasible implementation.
In the end, it produces a final state in T_qreg which is more optimum than the initial state in T_qreg or S_qreg. Based on the CNR operation, we construct our quantum approximate optimization algorithm. The sketch of our algorithm when p=3 is shown in Figure <ref>. Our algorithm adopts the typical multi-level divide-and-conquer structure. By repetitions of CNR operation, the quality of approximate optimization improves. There are 2^p registers in |0⟩^⊗ n as the input of p-level algorithm. Then they will be transformed into n-qubit uniform superposition with Hadamard gates and enter the first level of CNR operations by pairs as T_qreg and S_qreg. In the intermediate procedure, the input of the k-th level are 2^p-k+1 target registers from the (k-1)-th level. These states are divided into new target-support register pairs as the standard input of CNR. Then 2^p-k CNR operations in the k-th level transform these state and delivers 2^p-k target registers to the (k+1)-th level. In this way, our p-level algorithm outputs the final state in the only one T_qreg experiencing all p times CNR. And we measure the final T_qreg in computational basis vectors and obtain a string z^* and evaluate C(z^*). For reasonable p and t, the algorithm produces a string z^* with C(z^*) sufficiently close to C_min, in considerably high probability. In this paper, we focus on the approximation ratio r, which is introduced as one of the most important quantities which reflects the quality of approximate optimization<cit.>. It is defined as r = C_max - ⟨𝒞⟩/C_max-C_min, where ⟨𝒞⟩ is the expectation of 𝒞 in the final state. And the closer r is to 1, the better a quantum approximate optimization algorithm performs. The paper is organized as follows. In section <ref>, we introduce the implementation of the CNR operation by realizing the two sub-operations, comparison and replacement in detail. Then we calculate the final state for a general input directly following the implementation of CNR and derive the problem-independent recursion relations and a series of corollaries which demonstrate the theoretical performance in sufficiently general problems quantitatively in section <ref>. Subsequently, we apply the algorithm with different p and t to MAX-2-XOR and Gaussian weighted 2-edge graphs in section <ref>, and illustrate dependence of the algorithm performance in application on corresponding parameters by simulation results. As a conclusion, we review the algorithm and emphasize the properties of our algorithm, and then finish some further discussions in section <ref>. § COMPARISON AND REPLACEMENT §.§ Comparison operation with ancillary register As the first sub-operation of CNR, comparison compares the two strings stored by T_qreg and S_qreg in sense of optimization and gives a result using the ancillary register. It can be implemented by the similar technique in Quantum Phase Estimation<cit.>. The initial state of comparison is the 2n-qubit state stored jointly by T_qreg and S_qreg, which is also the input of CNR as mentioned earlier. We introduce t qubits in the uniform superposition state as the ancillary register. The operator for which we perform eigenvalue estimation is 𝒜 = I⊗𝒞 - 𝒞⊗ I constructed with the Hermitian operator defined by (<ref>). The corresponding unitary operator is exp(i𝒜/M) = exp(i𝒞_s/M)⊗exp(-i𝒞_t/M), where 𝒞_s or 𝒞_t only acts on S_qreg or T_qreg respectively. 
And the scale factor M is introduced to scale the spectrum of 𝒜 in the range [-π,π) to avoid the multi-value correspondence from the periodicity of exponent on the imaginary axis. Thus the strict lower bound that M needs to satisfy is M ≥C_max - C_min/2π The exact bound of M for a general problem is unknown, unless the problem is solved. But a suitable M can be estimated, since the number of edges in a graph has an upper bound and coefficients can be estimated according to feature quantities, such as the edge density of MAX-k-XOR and (μ,σ) for Gaussian weighted 2-edge graphs. We first consider the comparison transformation on a tensor product component |z_t⟩|z_s⟩, where |z_t⟩ and |z_s⟩ are stored in T_qreg and S_qreg respectively, which can be promoted to a general case by quantum parallelism directly. The comparison operation can be described as follows: first it assigns C(z_s)-C(z_t)/M as phase on the corresponding tensor product, by controlled-exp(i2^j𝒜/M) according to ancillary qubits in uniform superposition. Then the inverse Quantum Fourier Transformation on total 2n+t qubits extracts the information in phases into the ancillary register. The procedure of comparison in explicit expression is |z_t⟩|z_s⟩∑_x∈{0,1}^t1/√(2^t)|x⟩→1/√(2^t)∑_x∈{0,1}^t e^i2π D(x) Δ|z_t⟩|z_s⟩|x⟩→|z_t⟩|z_s⟩|Δ̃⟩, where {|x⟩=|x_1x_2⋯ x_t⟩} are computational basis vectors working for the ancillary register. We use D(x) to denote the decimal value of string x. Moreover, we use Δ to refer to C(z_s)-C(z_t)/2π M in the range [-1/2,1/2) By the expression (<ref>), the final state of comparison consists the unchanged tensor product |z_t⟩|z_s⟩ and |Δ̃⟩ which stores the information of Δ. Expanding |Δ̃⟩ with t-qubit computational basis vectors, the amplitude of |x⟩ is ϕ(x;Δ) = 1/2^t1-exp(i2π(2^tΔ-D(x)))/1-exp(i2π(Δ-2^-tD(x))). And the corresponding probability distribution is Pr(x;Δ) = 1/2^2t1-cos(2π(2^tΔ-D(x)))/1-cos(2π(Δ-2^-tD(x))), which peaks at 2^tΔ when Δ≥0 or 2^t(Δ+1) when Δ <0. Figure <ref>(a) and (b) illustrate the probability distribution the two cases separately. In principle, we especially care about whether Δ is positive or negative, which determines the behavior of the replacement. It can be seen that in the ancillary register, components corresponding to D(x)<2^t-1-1 has the first ancillary qubit |0⟩, which stand for C(z_s)-C(z_t)≥ 0, that is, z_t is closer to the minimum. And conversely those components corresponding to D(x)≥2^t-1 has the first ancillary qubit |1⟩, which gives a opposite result C(z_s)-C(z_t)<0, that is z_s is closer to the minimum. In application, presetting an accuracy can effectively reduce the cost of resource. In our algorithm, the accuracy is a tolerable bound of estimation error. In probability analysis, when the comparison operation gives an incorrect answer about the sign of Δ, we count the case as a failure only when |Δ| is larger than accuracy. There exists a trade-off between performance and efficiency. For MAX-k-XOR or MAX-k-SAT, the absolute value of gap between different cost function values is at least 2, thus we can choose 2/M as accuracy for the best performance. However when the coefficients are in continuous distribution, pursuing performance is costly. Based on expression (<ref>), we give the lower bound of t according to the accuracy and tolerable probability of failure, which indicates that the comparison with reasonable t can give a correct answer with satisfactory probability. The details are shown in Appendix <ref>. 
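The behaviour of the comparison can be checked numerically from Pr(x;Δ); the sketch below evaluates the distribution for a single tensor-product component and the probability that the first (most significant) ancillary qubit reports the correct sign of Δ. The values of t and Δ in the example are illustrative.

```python
# Numerical check of the comparison statistics for one tensor-product component.
import numpy as np

def outcome_distribution(t, delta):
    """Pr(x; Delta) for D(x) = 0, ..., 2^t - 1 and Delta in [-1/2, 1/2)."""
    d = np.arange(2 ** t)
    num = 1.0 - np.cos(2.0 * np.pi * (2 ** t * delta - d))
    den = 1.0 - np.cos(2.0 * np.pi * (delta - d / 2 ** t))
    pr = np.empty_like(den)
    exact = np.abs(den) < 1e-12           # 2^t * Delta coincides with a value D(x)
    pr[exact] = 1.0
    pr[~exact] = num[~exact] / (2 ** (2 * t) * den[~exact])
    return pr / pr.sum()                  # remove numerical rounding error

def prob_correct_sign(t, delta):
    """Probability that the first ancillary qubit matches the sign of Delta."""
    pr = outcome_distribution(t, delta)
    mass_first_half = pr[: 2 ** (t - 1)].sum()     # outcomes with x_1 = 0
    return mass_first_half if delta >= 0 else 1.0 - mass_first_half

# Example: Delta = 0.07, i.e. C(z_s) > C(z_t); the first qubit should read 0.
for t in (3, 5, 8):
    print(t, prob_correct_sign(t, 0.07))
```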
§.§ Replacement controlled by the first ancillary qubit In our implementation and analysis about comparison, one point worth emphasizing is that the answer of comparing the two string can be delivered by the first qubit in our ancillary register. Thus, more explicitly, our requirement on the replacement is that for the first ancillary qubit at |1⟩, it replaces the computational basis vector in T_qreg with the one in S_qreg, and for the first ancillary qubit at |0⟩, it does nothing. Our replacement can be implemented directly by controlled-overwriting operation introduced by us. The quantum circuit of single-qubit overwriting is shown in Figure <ref>. We use the first ancillary qubit from comparison as the control bit, then construct the bit-wise controlled-overwriting between two n-qubit states, by n simultaneous single-qubit overwriting operations between qubits at the same position in T_qreg and S_qreg correspondingly. Such a operation meets our requirement above, and works as the second sub-operation after the comparison. So far, we realize the CNR operation by performing the comparison and the replacement in turn. Since both comparison and replacement are unitary, by quantum parallelism, CNR still works when the input is a 2n-qubit state stored jointly by T_qreg and S_qreg. For a general input, each tensor product component will experience the procedure we introduced before. The CNR operation can transform the initial state and output a final state as expected. We will demonstrate our algorithm performance by theoretical analysis in section <ref> and study the dependence on the two parameters p and t in application in section <ref>. These results confirm the feasibility and the derived properties motivate us to construct the algorithm. § THEORETICAL ALGORITHM PERFORMANCE At the beginning of this section, we need to introduce and explain the labels and notes used in the following context. In this section, we will understand 𝒞 as a Hamiltonian. To demonstrate the optimization explicitly, we adopt the complete-knowing sight over the problem to denote the computational basis vectors according to equation (<ref>) as follows: we use |E_i;j⟩ to denote the j-th computational basis vector in the subspace of energy E_i which is corresponding to the i-th energy level. It must be declared here that the complete knowledge is not necessary for CNR to optimize general combinatorial optimization problems. The only requirement is a black box that can perform the controlled-exp( ij𝒞/M) operation for integer j. Therefore, this relabeling on computational basis vectors will not change the objective result. In this section, we assume that there are G energy levels and the i-th level has g_i strings. We start by calculating the final state of the CNR operation from a general input explicitly. Without loss of generality, we consider the general input state as |Ψ⟩ = ∑_i,k=1^G∑_j=1^g_i∑_l=1^g_kc_i;j|k;l|E_i;j⟩|E_k;l⟩, where {c_i;j|k;l} are coefficients that have been normalized, and the |E_i;j⟩ is corresponding to T_qreg and |E_k;l⟩ is to S_qreg. The CNR operation transforms the input state and ancillary qubits into ∑_i,k=1^G∑_j=1^g_i∑_l=1^g_k c_i;j|k;l|E_i;j⟩|E_k;l⟩∑_x∈{0,1}^n1/√(2^t)|x⟩→∑_i,k=1^G∑_j=1^g_i∑_l=1^g_k c_i;j|k;l|E_i;j⟩|E_k;l⟩∑_x∈{0,1}^n;x_1=0ϕ(x;Δ_i,k) |x⟩ +∑_i,k=1^G∑_j=1^g_i∑_l=1^g_k c_i;j|k;l|E_k;l⟩|E_k;l⊕̅ E_i;j⟩∑_x∈{0,1}^n;x_1=1ϕ(x;Δ_i,k) |x⟩ where |E_k;l⊕̅ E_i;j⟩ is defined as the state produced by bit-wise XOR between |E_i;j⟩ and |E_k;l⟩. 
And we use Δ_i,k to refer to 2^t(E_k-E_i)/2π M correspondingly. By the analysis and discussion in section <ref> and Appendix <ref>, the comparison can give a correct answer through the first ancillar qubit with considerably high probability. Now we hide those non-peaking terms in the summation, and normalize the expression again, then (<ref>) becomes ∑_i,k=1^G∑_j=1^g_i∑_l=1^g_k c_i;j|k;l|E_i;j⟩|E_k;l⟩∑_x∈{0,1}^n1/√(2^t)|x⟩ →∑_i≤ k^G∑_j=1^g_i∑_l=1^g_kc_i;j|k;l|E_i;j⟩|E_k;l⟩|0⋯⟩+∑_i>k^G∑_j=1^g_i∑_l=1^g_kc_i;j|k;l|E_k;l⟩|E_k;l⊕̅ E_i;j⟩|1⋯⟩, where we highlight the first ancillary qubit x_1 and omit the rest t-1 qubits in ancillary register.Since the terms after the arrow are orthogonal to each other, the probability of obtaining a string in the a-th energy level from T_qreg after CNR operation is P(a,1) = ∑_k≥ a∑_j=1^g_a∑_l=1^g_k|c_a;j|k;l|^2 + ∑_i> a∑_j=1^g_i∑_l=1^g_a|c_i;j|a;l|^2 In our algorithm, the input of each CNR operation in the first level is the uniform superposition state, which means every coefficient c_i;j|k;l is equal to 1/2^n. In this condition, (<ref>) becomes P(a,1)= ∑_k≥ a∑_j=1^g_a∑_l=1^g_k1/2^2n + ∑_i> a∑_j=1^g_i∑_l=1^g_a1/2^2n= g_a/2^2n∑_k≥ a^Gg_k + g_a/2^2n∑_i>a^Gg_k = 2g_a/2^n - 2g_a/2^n( 1/2^n∑_k≤ ag_k) + ( g_a/2^n)^2 We denote the probability of obtaining a string in the a-th energy level from the final state after CNR within the m-th level in our algorithm as P(a,m), and the summation of probability over the first energy level to a-th level as S(a,m). And we allow m=0 corresponding to the initial state, which is uniform superposition. Thus we have the first step for the mathematical induction P(a,1) = 2P(a,0) - 2P(a,0)S(a,0) + P(a,0)^2 Then we assume for a positive integer m, the recursion relation of probability P(a,m) = 2P(a,m-1) - 2P(a,m-1)S(a,m-1) + P(a,m-1)^2 is correct. We will prove that for (m+1), the relation above is still established. As Figure <ref> shows, every CNR operation in our algorithm has the input where states stored by T_qreg and S_qreg are identical, because of the divide-and-conquer structure. We plug the final state of the m-th level of CNR operations into the (m+1)-th level. Using (<ref>), we can obtain P(a,m+1) = 2P(a,m) - 2P(a,m)S(a,m) + P(a,m)^2. It is the basic recursion relation for our algorithm. And by summing both sides over the first energy level to the a-th level in the equation above, we have a recursion relation in summation form S(a,m+1) = 1 - (1-S(a,m))^2 = 1-(1-S(a,0))^2^m+1 And combined with P(a,0) = 1/2^n, the recursion relations (<ref>) and (<ref>) are complete for our algorithm with any positive integer p and constitute the core mechanism of our algorithm. They are reliable for sufficiently general combinatorial optimization problems, since our derivation above is in the general form. The reason why we construct our algorithm to a typical divide-and-conquer algorithm structure is to accomplish this optimization mechanism under the restrict of quantum no-cloning theory. Using the recursion relations, we can calculate the probability distribution of obtaining a string corresponding to each energy level after p-level algorithm. Since the last step of our algorithm is measuring the final state in computational basis vectors to obtain a string as result, we focus on the probability analysis about strings after the p-level algorithm. We propose the following corollaries: The approximate ratio r increases as p grows, that is r_p ≥ r_p-1. The equal sign is established at constant problem. 
And as p increases, the approximate ratio converges to 1, that is lim_p→∞ r_p = 1 We use the labels and notes in this section to rewrite the definition (<ref>) in a explicit form r_p = C_max - ⟨𝒞⟩/C_max-C_min = C_max/C_max-C_min - C_max/C_max-C_min∑_a=1^GP(a,p)E_a. And using S(a,p) to express P(a,p) above, and we concentrate on the expectation ⟨𝒞⟩ = ∑_a=1^GP(a,p)E_a =P(1,p)C_min+∑_a=2^G[S(a,p)-S(a-1,p)]E_a =P(1,p)C_min+∑_a=2^G[-(1-S(a,0))^2^p+(1-S(a-1,0))^2^p]E_a Plug the normalization condition ∑_a=1^GP(a,p) = 1 into expression (<ref>) and replace P(1,p). We have ⟨𝒞⟩ = C_min + ∑_a=2^G[-(1-S(a,0))^2^p+(1-S(a-1,0))^2^p](E_a-C_min) Since p is independent of the problem, 1-S(a,0) and 1-S(a-1,0) are constant in sense of p. Therefore, -(1-S(a,0))^2^p+(1-S(a-1,0))^2^p is monotonically decreasing. And for nontrivial problem, E_a-C_min≥0, which means ⟨𝒞⟩ decreases monotonically. According to (<ref>), r_p is monotonically increasing as p grows. For a trivial problem, E_a-C_min=0, and ⟨𝒞⟩=C_min=C_max. In this case, the approximation ratio is also a constant 1. To sum up, we have r_p ≥ r_p-1, where the equal sign is obtained at trivial problems. Then we come back to (<ref>), when p→∞, the second term on the right side converges to 0. And ⟨𝒞⟩ trends to C_min. Thus we derive the limit lim_p→∞ r_p = 1 Sorting strings in ascending order of energy and postponing the order when degeneracy happens. If one requires that the first 1/ξ strings are corresponding to the probability of acceptance η with any ξ≥1 and η∈[0,1], for a nondegenrate problem or a degenerate problem where the 1/ξ demarcation does not cut off any energy level, p needs to be at least log_2(ξln( 1/1-η)). In other case, the bound of p will increase according tothe degree of degeneracy cut off by the demarcation. We denote the set of the first 1/ξ strings as Γ_1. And correspondingly, the cumulative probability over Γ_1 is denoted as Pr(Γ_1). We consider a general case. We assume that the β-th energy level is cut off by the 1/ξ demarcation, according to g̃ strings in the beta-th energy level belonging to Γ_1, where g̃ is an integer in [0,g_β]. When g̃=0 or g_β, the 1/ξ demarcation does not cut off any energy level. And according to the implementation of CNR, the replacement operation is triggered by the negativity difference between states stored by S_qreg and T_qreg, thus strings in the same energy level have identical probability. In this situation, we write down the Pr(Γ_1) as Pr(Γ_1)=g̃/g_βP(β,p)+S(β-1,p). And the second term is that S(β-1,p) = 1-( 1-1/ξ+g̃/2^n) ^2^p Similarly, we can rewrite S(β,p) and use S(β,p)-S(β-1,p) to replace P(β,p). The probability in explicit form is Pr(Γ_1)=1 - [g_β-g̃/g_β( 1-1/ξ+g̃/2^n) ^2^p + g̃/g_β( 1-1/ξ-g_β-g̃/2^n) ^2^p] Since p is a positive integer, x^2^p is a convex function on the real axis. Therefore, we have ( 1-1/ξ) ^2^p≤[g_β-g̃/g_β( 1-1/ξ+g̃/2^n) ^2^p + g̃/g_β( 1-1/ξ-g_β-g̃/2^n) ^2^p], First, the equal sign is established when g̃=0 or g̃=g_β. The problem is nondegenerate or the 1/ξ demarcation does not cut off any energy level can fulfill the two cases. In this case, we have Pr(Γ_1) = 1-( 1-1/ξ) ^2^p, Then we evaluate the summation of probability that obtaining one of the first 1/ξ strings. We have the function with real number c F(N) = ( 1-1/N) )^cN, which is monotonically decreasing as the positive integer N grows. And it converges to 1/mathrme^c when N trend to infinity. Thus we have a inequality established 1-( 1-1/ξ) ^cξ≥ 1-1/e^c. 
And we set a probability of acceptance η for the first 1/ξ strings, that is Pr(Γ_1)=1-( 1-1/ξ) ^2^p≥η. Using (<ref>), we can solve a lower bound for p p ≥log_2(ξln( 1/1-η)) On the other side, if (<ref>) is expected to reach the same probability of acceptance as (<ref>), the lower bound of p increases according to g̃/g_β and ξg̃/2^n. The former determines the degree of degeneracy cut off by the demarcation and the later determines the convexity of (<ref>). We will study the algorithm performance when applied to degenerate and nondegenerate problems, and verify the recursion relations and the derived properties in section <ref>. § APPLICATION AND PERFORMANCE Since we have derived the theoretical performance directly following the definition of our algorithm, the feasibility needs to be verified by studying algorithm performance in application. In this section, we apply the algorithm to MAX-2-XOR and Gaussian weighted 2-edge graph, which respectively represent the degenerate and nondegenerate problems. We run the algorithm with different p and t, and obtain simulation results systematically. To illustrate our results more intuitively in this section, we assign the indices ζ to cost function values similarly to labeling energy. What is slightly different is that for degenerate case, ζ will be postponed (with any rules, since the difference does not influence the illustration and conclusion). Thus ζ∈{1,2,⋯,2^n} are corresponding to strings one-to-one. And the lower ζ is corresponding to the string z with C(z) closer to C_min. We reorganize the simulation results, such as the probability distribution of measurement result, according to ζ. The related results will be presented with the scatter graphs using ζ as X-axis. §.§ Applications in degenerate and nondegenerate cases We start by clarifying the Hermitian operator assigned to MAX-2-XOR and Gaussian weighted 2-edge graph. MAX-k-XOR is composed with XOR terms depending on exact k bits. The corresponding Hermitian operator is<cit.> 𝒞 = ∑_{q_1⋯ q_k}⊂{1,2,⋯,n} d_q_1⋯ q_kẐ_q_1⋯Ẑ_q_k , where each d_q_1⋯ q_k∈{0,1} and we omit the constant term since such a shift is a overall phase which does not influence the objective result. The operator is completely specified by the edge type k and coefficients. We define the edge density as the frequency of nonzero coefficients occurring. The discrete and bounded coefficients result in an degenerate spectrum. As to our nondegenerate problem, Gaussian weighted 2-edge graph, coefficients obey the Gaussian distribution, which is related to Sherrington-Kirkpatrick model<cit.>. The Hermitian operator corresponding to Gaussian weighted 2-edge graph in this paper is 𝒞 = ∑_{i,j}∈{1,2,⋯,n} d_ijẐ_iẐ_j, where d_ij∼N(0,1) is without loss of generality, since the results can be promoted to a general Gaussian distribution by the affine property. And the continuous distribution ensures the low degeneracy in the spectrum. Presetting that n=10 and t is large enough, we apply our algorithm with p=1,2,3 to a random MAX-2-XOR instance where edge density is 0.6 and a random Gaussian weighted 2-edge graph respectively. The scale factor M is chosen as 45/2π here. Figure <ref> illustrates the distribution of cost function value and the probability distribution of obtaining a string z with index ζ(z). The probability distribution has the same degeneracy condition as the cost function. Comparing Figure <ref>(a) and (b), it can be seen that the step at the same position have nearly the same width. 
Similarly, comparing (c) and (d), both of them show a nondegenerate structure. According to section <ref>, the replacement operation is triggered by the negativity of C(z_s)-C(z_t), such that the probability distribution in principle is influenced by the degeneracy, which results in the performance difference between degenerate and nondegenerate problems. When t is large enough, the final state will reflect the degeneracy faithfully. And the scatter graphs of probability distribution and corresponding fit curves show that the string z with more optimum C(z), occupies higher probability in the measurement result. In Figure <ref>(b) and (d), the scatter graphs show a obvious phenomenon that probability concentrates on the side with lower cost function value. And the fit curves show a higher intercept on the Pr(z)-axis and slope drops more rapidly with the increasing of p. For a clearer angel of view to the algorithm performance, we divide strings into 8 groups equally in ascending order of ζ. By this 1/8 grouping, the first group, which we denote as note Γ_1 in section <ref>, contains strings corresponding to ζ from 1 to 128, and so on. And we illustrate the cumulative probability of each group when the algorithm with p=3 is applied on the previous instances. The results are shown in Figure <ref>. The first group occupies advantage in the cumulative probability compared with other groups. It can be seen in Figure <ref>(a) that the first group has the cumulative probability 0.6301 for the MAX-2-XOR instance. And (b) shows the first group has the cumulative probability 0.6564 for the Gaussian weighted 2-edge graph, which agrees with Corollary <ref> marvelously. And for the MAX-2-XOR instance, this bound slightly decreases as mentioned. The reason is that there exist a degeneracy spanning the demarcation of two groups. Moreover, the first group has an average approximate ratio 0.8266 within the group, and the worst approximate ratio in the first group is 0.7333 for the MAX-2-XOR instance. As to the Gaussian weighted 2-edge graph, the average approximate ratio in the first group is 0.8110, and because of nondegenerate structure, the worst case hardly appears. Since the simulation results confirm the corollary <ref> that we give in section <ref>, the advantage of our algorithm is that we can quantitatively choose p to solve the problem with quality of approximation and probability of acceptance reasonably and evaluate the performance explicitly. In detail, the evolution of probability distribution agrees with the recursion relations (<ref>) and (<ref>) strictly when t is large enough. The corresponding improvement of approximate optimization will be investigated in next subsection. §.§ Performance depending on the number of level p In section <ref>, the recursion relations (<ref>) and (<ref>) are applicable to sufficiently general problems. And as we have shown in Figure <ref>(b) and (c), the scatter graphs and corresponding fit curves are in a nearly identical trend. All these imply that the performance quantified with approximation ratio is directly influenced by different instances and problems. We use r_initial to denote the approximation ratio of the initial state, which is the n-qubit uniform superposition state in this paper. For different random instances, it floats in a range, which may cause a numerical fraud. Therefore, we introduce the approximation rate X_r defined as X_r = r-r_initial/1-r_initial. to reduce the influence of initial value. 
Such constructor is especially useful in the average analysis of floating value. To demonstrate the optimization ability and reliability, we need to discuss the average performance on random instances of MAX-2-XOR and Gaussian weighted 2-edge graph. We focus on the statistics including average approximation ratio r̅, variance of approximation ratio Var[r] and average approximation rate X_r. Presetting that n=8 and t is large enough, we generate 1000 random instances for MAX-2-XOR whose edge density is 0.5 and Gaussian weighted 2-edge graphs. We choose M=28/2π. The large t ensures that the variance of r only from different instances, which is convenient for us to research stability of performance. The simulation results for MAX-2-XOR and Gaussian weighted 2-edge graph are shown in Figure <ref>. The average approximation ratio r̅ increases steadily as p grows. It can be seen in Figure <ref>(a) and (c), r̅_initial is 0.6439 for MAX-2-XOR instances and 0.5016 for Gaussian weighted 2-edge graphs. Subsequently, for the final state of the 1-level algorithm, or the mediate state after the first level, r̅ increases to 0.7363 for MAX-2-XOR and 0.6165 for Gaussian weighted 2-edge graphs. In this sense, if we only consider the performance throughout the analysis about r is unfair. By X_r, we find that the first level provides the effect reducing the distance from initial state to the true answer in the rate of 0.2653 and 0.2344 averagely for MAX-2-XOR instances and Gaussian weighted 2-edge graphs. And even when p increases to 3, which is the logarithm of n in this case, r̅ reaches to 0.8551 and 0.7869 and X_r is about 0.6034 and 0.5793 for these two type of problems. The increasing of r̅ confirms our Corollary <ref>. Moreover, the difference from initial state are slightly faded and the results show a considerable algorithm performance. And when p approaches Θ(n), we can see that the r̅ and X_r are close to 1 enough and the performance difference nearly disappears. The ability of optimization improves significantly as we input more resource to our algorithm and construct a huger structure. And even only the 1-level algorithm can give an improvement on optimization with X_r nearly 0.25. The variance of approximation ratio Var[r] shows unstable or slow decreasing stage when p is small for these two problems and decreases rapidly when p continues to grow. We can see that the initial variance is not zero, which is the direct reason why we introduce X_r. And when p≤3, Var[r] does not decrease obviously and even shows a abnormal behavior in MAX-2-XOR instances. The reason is that when p is small, different instances present distinctive increase of the approximation ratio as p grows. The decreasing stage of Var[r] for p≥4 ends up with a value near zero. Since the contribution to Var[r] only comes from instances, results about variance show that the influence on performance from different instances reduces as p increases on the whole. And for p is small, the slow decreasing and abnormal behavior are in a acceptable range. The number of level p is the most important parameter of our algorithm. One point is that p directly influences the complexity of our algorithm, and another is that p determines the theoretical algorithm performance. In section <ref>, we derive the recursion relations as the core mechanism of algorithm with the number of level working as the recursive variable, and in this part we illustrate how p improves the approximation ratio. 
The performance agrees with the trend given in Corollary <ref>. Even when p is small, the performance is considerable. Moreover, if we provide the algorithm with enough resources to construct a structure with larger p, it returns a result that matches the invested resources, because X_r keeps increasing. §.§ Performance depending on the number of ancillary qubits t In section <ref> and Appendix <ref>, we reveal how the number of ancillary qubits t influences the performance of the comparison, and we give a relation for t in terms of the accuracy and the probability of failure. However, that discussion covers the case in which T_qreg and S_qreg store the computational basis vectors |z_t⟩ and |z_s⟩ separately. In general, the results are far from complete, although the probability that the comparison fails decays as the component deviates from the worst case. The results in sections <ref> and <ref> hold under the condition that t is large enough. Here we therefore study the behavior and performance for finite t as an independent investigation and illustrate how t influences the performance in practice. In this part, we consider the 3-level algorithm applied to an 8-qubit Gaussian weighted 2-edge graph. We specifically choose M=36/2π and show results for different t. Figure <ref> consists of the value structure in (a), the scatter graphs of the probability distribution for t=1,3,4,5 and for t=6,7,8 together with the theoretical case in (b) and (c), and a plot of r in (d). The scatter graph of the probability distribution converges to the theoretical distribution as t increases. It can be seen in Figure <ref>(b) that the probability distribution for t=1 is nearly the uniform superposition; in this case the algorithm can hardly optimize the problem. As t grows, the probability distribution shows an increasingly explicit trend that the probability of obtaining a string z decreases as R(z), or equivalently C(z), increases. According to (c), when t increases to 6 the scatter graph of the probability distribution is already very close to the theoretical performance. The approximation ratio r likewise increases and tends to the theoretical value as t grows. In Figure <ref>(d), we have r=0.5149 for t=1. This value is very close to the average initial approximation ratio 0.5016 for Gaussian weighted 2-edge graphs discussed in <ref>, which shows that when t is too small the algorithm has little optimizing effect. There is a sudden increase from t=3 to t=5, which agrees with the evident change of the scatter graphs. When t≥6, the approximation ratio enters a slowly increasing stage, and a further increase of t no longer provides an obvious improvement in performance; the limit of r in this setting is 0.8122. To sum up, t is also an important parameter of our algorithm: when t is too small the performance is far from effective, and once t has increased to a certain degree, further growth does not noticeably improve the performance, neither in the approximation ratio nor in the probability distribution of the measurement result. § DISCUSSION AND CONCLUSION In this paper, we propose a pure quantum algorithm that finds approximate solutions to combinatorial optimization problems. The algorithm is built as a p-level divide-and-conquer structure of CNR operations, where each CNR operation consists of two unitary sub-operations, comparison and replacement. With the aid of t ancillary qubits, we realize the CNR operation as the basic unit of optimization.
Directly following the definition, we derive the problem-independent recursion relations for the p-level algorithm: P(a,m+1) = 2P(a,m) - 2P(a,m)S(a,m) + P(a,m)^2 and S(a,m+1) = 1 - (1-S(a,m))^2 = 1 - (1-S(a,0))^{2^{m+1}}. Based on the recursion relations, we give two corollaries: one demonstrates the improvement as p increases (Corollary <ref>), and the other sets a lower bound on p for obtaining a string among the best 1/ξ fraction of solutions with acceptance probability η (Corollary <ref>), namely p ≥ log_2(ξ ln(1/(1-η))). Subsequently, we investigate the algorithm performance when applied to MAX-2-XOR instances and Gaussian weighted 2-edge graphs with different p and t. Our algorithm shows a considerable average approximation ratio r̅ and approximation rate X_r, defined in (<ref>), even when p is small, as Figure <ref> shows. Although the performance is influenced by the problem itself, r̅ and X_r tend to 1 and the variance decays rapidly as p grows. The simulation results confirm (<ref>), i.e., that the approximation ratio increases as p grows, which can be written as r_p ≥ r_{p-1} and lim_{p→∞} r_p = 1; the equality sign in (<ref>) is attained in trivial cases. On the other hand, according to Figure <ref>, for the algorithm with n=8 and p=3 the performance improves as the number of ancillary qubits grows, and an abrupt change appears when t increases from 3 to 5. On the whole, as t increases, the performance in application converges to the prediction of the recursion relations. We give a tentative suggestion for choosing t in Appendix <ref>. Two open questions are raised by this work. First, the CNR operation can receive an arbitrary superposition of the 2n-qubit computational basis vectors, so the selection of inputs has, in principle, a great degree of freedom; moreover, different CNR operations within the same level can have different inputs. Future work may discuss the influence of the initial states and attempt to give heuristic choices of inputs for specific problems. Second, the results in section <ref> reveal in detail the strong performance on MAX-2-XOR instances and Gaussian weighted 2-edge graphs. It is worth emphasizing that choosing suitable p and t also involves a great degree of freedom but faces a trade-off between efficiency and performance. It would be interesting to study suitable choosing strategies for specific types of problems, or even for general combinatorial optimization problems. This work was supported by the National Key R&D Program of China under Grant No. 2018YFB1601402-2. § ANCILLARY QUBITS, ACCURACY AND PROBABILITY The comparison fails when y has a different sign from Δ while |Δ| is larger than the accuracy. With expression (<ref>), we can in principle calculate the probability that the comparison succeeds or fails. Here we use elementary inequalities to derive a loose but usable bound for a single component of the tensor product, in other words for the specific case in which T_qreg and S_qreg store computational basis vectors separately. According to our definition of accuracy in section <ref>, we take Δ equal to the accuracy 2^{-s}, which is one of the worst cases within the tolerable range. For any real θ, |1-exp(iθ)| ≤ 2, and whenever -π≤θ≤π, |1-exp(iθ)| ≥ 2|θ|/π.
The probability distribution satisfies the inequality P(y;Δ) ≤ 1/(4(2^{t-s}-y)^2). Considering one of the equivalent worst cases, Δ = -2^{-s}, we sum over the outcomes corresponding to failure and obtain, for t ≥ s, the chain Pr(y≥0) ≤ ∑_{y=0}^{2^{t-1}-1} 1/(4(2^{t-s}+y)^2) = 1/(4(2^{t-s})^2) + ∑_{y=1}^{2^{t-1}-1} 1/(4(2^{t-s}+y)^2) ≤ 1/(4(2^{t-s})^2) + ∫_0^{+∞} 1/(4(2^{t-s}+y)^2) dy = 1/(4(2^{t-s})^2) + 1/(4·2^{t-s}) ≤ 1/(2·2^{t-s}). Introducing a tolerable bound ϵ on the probability of failure, we require 1/(2·2^{t-s}) ≤ ϵ, and we obtain an approximate relation among t, the accuracy 2^{-s} and the tolerable failure probability ϵ, namely t = s + ⌈log_2(1/(2ϵ))⌉. It can be seen from the derivation of (<ref>) and (<ref>) that the upper bound on the probability that the comparison fails decays as an inverse proportional function of the deviation from the worst case. We can therefore give a tentative suggestion for choosing t. Since the probability of failure decays in the other cases, we focus only on the worst case. Combining (<ref>) and (<ref>), if we expect every CNR in the p-level algorithm to give a correct answer for the worst case within a few repetitions, the probability of success of a single comparison needs to be higher than 1-1/2^{p-1}. In this condition we have t = s + p - 1, and the probability that every CNR gives a correct answer in the worst case is at least 1/e ≈ 0.3679. A numerical calculation shows that, if we prepare one more ancillary qubit, this probability jumps to at least √(1/e) ≈ 0.6059. This gives another reference lower bound on t for the p-level algorithm, namely t = s + p. For the instance studied in section <ref>, (<ref>) gives t = s + 3, which means t ≥ 3; this is the starting point of the sudden change shown in Figure <ref>(d). Every additional ancillary qubit contributes to better accuracy, which reasonably enhances the optimization. All of this agrees with the t ≥ 3 part of our result.
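The quantitative relations collected in the conclusion and in this appendix are easy to check numerically. The following short Python sketch (our own illustration; the initial values P(a,0) and S(a,0) are placeholders standing for the probability of the target string and of the strings at least as good as the best-1/ξ threshold in the uniform initial state) iterates the recursion relations, evaluates the level bound p ≥ log_2(ξ ln(1/(1-η))), and evaluates the ancillary-qubit relation t = s + ⌈log_2(1/(2ϵ))⌉ together with the two reference choices discussed above:

    import math

    def iterate_recursion(p_levels, P0, S0):
        # Problem-independent recursion relations quoted above:
        #   P(a, m+1) = 2 P(a,m) - 2 P(a,m) S(a,m) + P(a,m)^2
        #   S(a, m+1) = 1 - (1 - S(a,m))^2
        P, S = P0, S0
        trajectory = [(P, S)]
        for _ in range(p_levels):
            P, S = 2 * P - 2 * P * S + P ** 2, 1 - (1 - S) ** 2
            trajectory.append((P, S))
        return trajectory

    def min_levels(xi, eta):
        # Lower bound p >= log2(xi * ln(1 / (1 - eta))) on the number of levels.
        return math.ceil(math.log2(xi * math.log(1.0 / (1.0 - eta))))

    def ancilla_count(s, eps):
        # t = s + ceil(log2(1 / (2 * eps))): ancillary qubits needed so that the
        # worst-case failure probability of a single comparison is at most eps.
        return s + math.ceil(math.log2(1.0 / (2.0 * eps)))

    # With S(a,0) = 1/8, three levels give S = 1 - (1 - 1/8)^(2^3) = 0.6564,
    # consistent with the first-group cumulative probability quoted for the
    # nondegenerate instance in the simulations.
    print(iterate_recursion(p_levels=3, P0=1 / 256, S0=1 / 8)[-1])
    print(min_levels(xi=8, eta=0.63))        # -> 3 levels in this example
    print(ancilla_count(s=0, eps=0.01))      # -> 6 for a 1% failure tolerance
    print((0 + 3 - 1, 0 + 3))                # reference choices t = s+p-1 and t = s+p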
http://arxiv.org/abs/2310.17927v5
{ "authors": [ "Da You Lv", "An Min Wang" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20231027065439", "title": "A Quantum Approximate Optimization Algorithm Based on CNR Operation" }
Property (T) for Banach algebras Emilie Mai Elkiær Sanaz Pooya======================================================== We define and study the notion of property (T) for Banach algebras, generalizing the one from C^*-algebras. For a second countable locally compact group G and a given family of Banach spaces ℰ, we prove that our Banach algebraic property (T_ℰ) of the symmetrized pseudofunction algebras F^*_ℰ(G) characterizes the Banach property (T_ℰ) of Bader, Furman, Gelander and Monod for groups. In case G is a discrete group and ℰ is the class of L^p-spaces for 1≤ p < ∞, we also obtain the analogous characterization using the symmetrized p-pseudofunction algebras F^*_λ_p(G). § INTRODUCTION In 1967 David Kazhdan introduced in <cit.> the notion of property (T) for groups in order to prove finite generation of lattices in higher rank Lie groups. Today Kazhdan's property (T) is a central notion of analytic group theory, used in numerous proofs, first and foremost the Margulis superrigidity theorem. A second countable locally compact group G has property (T) if, whenever a unitary representation of G contains a net of almost invariant vectors, it has a non-zero invariant vector. Lattices in higher rank semisimple Lie groups, as well as lattices in Sp(1, n), enjoy this property. We refer the reader to <cit.> for a comprehensive treatment of Kazhdan's property (T). Classically, property (T) concerns unitary representations and the class of Hilbert spaces. In <cit.>, Bader, Furman, Gelander and Monod extended this notion to isometric representations on Banach spaces. They showed that Kazhdan's property (T) is equivalent to their Banach space property (T_L^p) for 1 ≤ p < ∞. On the operator algebraic side, the notion of property (T) was brought to von Neumann algebras by Connes in <cit.> for type II_1 factors and later by Connes and Jones in <cit.> for general von Neumann algebras, where unitary representations were replaced by bimodules (or Connes' correspondences). Turning to C^*-algebras, Bekka adopted Connes's definition and formulated the notion of property (T) for unital C^*-algebras in <cit.>, and Ng defined two versions of property (T) for general C^*-algebras in <cit.>. Although property (T) for C^*-algebras is a younger topic than its von Neumann algebraic counterpart, it has already proved useful in applications. Connections with nuclearity (i.e. amenability) of C^*-algebras, the Haagerup property for C^*-algebras, and property (T) of quantum groups were studied respectively in <cit.>, in <cit.> and <cit.>, and in <cit.>, to name a few. In this article we generalize the notion of property (T) to Banach algebras that possess a bounded approximate unit. Our aim is to characterize our Banach algebraic property (T) in terms of the group theoretic property (T) of Bader, Furman, Gelander and Monod <cit.>. Our definition, which we state below, adopts the stronger version of property (T) of Ng <cit.> as it is stated in section 2 of <cit.>. Note that our terminology is different from that of Bekka and Ng; see Warning <ref>. (Definition <ref>) Let 𝒜 be a Banach algebra with a bounded approximate unit and let ℰ be a class of Banach spaces. We say that 𝒜 has property (T_ℰ) if, whenever E∈ℰ is an essential 𝒜-bimodule admitting a strictly almost 𝒜-central net of unit vectors (ξ_i)_{i∈I}, then there exists a net of central vectors (η_i)_{i∈I} in E such that ‖ξ_i-η_i‖_E→ 0.
We further define a weaker version of this property (see Definition <ref>), where we only require the existence of a non-zero central vector. The weak version is referred to as weak property (T_ℰ) and it is an extension of the weaker version of property ( T) of Bekka and Ng for C^*-algebras (see <cit.> for the unital case and <cit.> for the general case). When 𝒜 is unital, our weak property (T_ℰ) coincides with the definition suggested by Bekka in Remark 18 of <cit.> of a Banach space version of property ( T) for arbitrary normed algebras. When 𝒜 is a C^*-algebra and we consider the class of Hilbert spaces, our definitions coincide with Ng's <cit.> in the general case, and with Bekka's<cit.> in the unital case.Given a locally compact group G, we consider Banach algebras constructed from actions on a family of Banach spaces ℰ referred to aspseudofunction algebras F_ℰ(G). We shall consider both the pseudofunction algebra F_ℰ(G) and its symmetrized version F^*_ℰ(G). See paragraph <ref> in the preliminaries for a definition. We are particularly interested in the symmetrized pseudofunction algebra associated with the class of representations on L^p-spaces, F^*_L^p(G), as well as the symmetrized version of the p-pseudofunction algebra, F^*_λ_p(G). Pseudofunction algebras were first studied by Herz already in the 1970's <cit.>, and they have subsequently been studied intensely in the context of harmonic analysis. Later they appeared in the work of Phillips, who provided an operator algebraic ground for studying these Banach algebras (see e.g. <cit.>, <cit.>, and <cit.>.) The symmetrized version of the p-pseudofunction algebra was first considered by Kasparov and Yu in <cit.>, and were studied in <cit.> in the context of the Baum-Connes conjecture, in <cit.> in the context of quasi-Hermitian groups as well as in <cit.> in the context of exotic group C^*-algebras.The main results of this article concern characterization of our Banach algebraic property (T_ℰ) for symmetrized pseudofunction algebras in terms of Bader et al's property (T_ℰ), for a general class of Banach spaces ℰ and, in particular, for the class L^p consisting of L^p-spaces on σ-finite measure spaces. We state the results here for the symmetrized pseudofunction algebras, but they hold for the non-symmetrized versions, as well.(Theorem <ref> and <ref>) Let G be a locally compact group and let ℰ be a class of Banach spaces. The following are equivalent: * G has (weak) property (T_ℰ),* F^*_ℰ(G) has (weak) property (T_ℰ).Under the assumption that ℰ is the class L^p-spaces, Theorem <ref> implies:(Corollary <ref>) Let G be a second countable locally compact group with property (T) and let 1≤ p≤2. Then F^*_L^p(G) has property (T_L^q), for all 1≤ q≤ p and all p'≤ q<∞, where p' is the Hölder conjugate of p. Assuming further that the group is discrete, Theorem <ref> implies: (Corollary <ref>) Let Γ be a discrete group with property ( T) and let 1≤ p≤2. Then F^*_L^p(Γ) has property (T_L^q), for all 1≤ q<∞. Continuing with the assumptions that Γ is a discrete group and ℰ is the classL^p-spaces, we obtain, in addition to the characterization from Theorem <ref>, a characterization of property (T_L^p) in terms of the symmetrized p-pseudofunction algebra F^*_λ_p(Γ). (Theorem <ref>) Let Γ be a discrete group. For each 1≤ p<∞, the following are equivalent: * Γ has property (T_L^p),* F^*_L^p(Γ) has property (T_L^p),* F^*_λ_p(Γ) has property (T_L^p). Theorem <ref> holds for the non-symmetrized versions of the algebras, as well. 
The proof relies on an L^p-analogue of Fell's absorption principle; see Lemma <ref>.This paper is organized as follows. Chapter 2 contains preliminaries on actions of Banach algebras, multiplier algebras, symmetrized pseudofunction algebras as well as property (T_ℰ) for actions of groups on Banach spaces.In chapter 3 we define (weak) property (T_ℰ) for Banach algebras associated with a family of Banach spaces ℰ. Moreover we prove Theorem <ref> and its implications Corollary <ref> and Corollary <ref>. In chapter 4 we prove Theorem <ref>. Further, we show that weak property (T_SL^p) is stronger than property (T_L^p) fordiscrete groups. Acknowledgements The second named author was supported by Knut and Alice Wallenberg foundation through grant number 31001288.We thank the organizers of the workshop on C^*-algebras and geometry of semigroups and groups held at the University of Oslo, where the initial ideas were discussed.We are grateful to Nadia Larsen, Tim de Laat, Christopher Phillips, Mikael de la Salle and Alain Valette for comments on an earlier draft of this paper. The first named author thanks Nadia Larsen for many discussions and Mikael de la Salle for his insightful questions and suggestions and for discussions on subtleties in the different versions of property (T_ℰ) for groups.§ PRELIMINARIES Actions of Banach algebras on Banach spaces. For a Banach algebra 𝒜, we denote by 𝒜^op its opposite algebra, i.e., the Banach algebra with the same underlying Banach space as 𝒜 but with multiplication in reversed order. Let 𝒜 and ℬ be Banach algebras and E a Banach space. A left action of 𝒜 on E is a contractive representation of 𝒜 on E. A right action of 𝒜 on E is a contractive representation of 𝒜^op on E. We say that E is a left (right) 𝒜-module if it carries a left (right) action of 𝒜. Further, E is an 𝒜-ℬ-bimodule if it carries a left action of 𝒜 and a right action of ℬ with commuting ranges. We writea·ξ· b=ψ(b)φ(a)ξ , a∈𝒜,b∈ℬξ∈ E.An 𝒜-𝒜-bimodule is simply referred to as an 𝒜-bimodule.An 𝒜-ℬ-bimodule E is said to be essential if the span of 𝒜· E·ℬ is dense in E. Further, E is said to be faithful if, whenever ξ∈ E satisfies a·ξ· b=0, for all a∈𝒜 and b∈ℬ, then ξ=0.If 𝒜 admits a bounded approximate unit, then any essential 𝒜-bimodule will necessarily be faithful. The Banach ^*-algebra L^1(G), for a locally compact group G, plays an important role among the Banach algebras we consider. Since L^1(G) always carries a bounded approximate unit, it is essential and faithful as a bimodule over itself. The following fact connecting its contractive representation theory with the isometric representation theory of G is folklore:Let G be a locally compact group and E a Banach space. There is a 1-1 correspondence between non-degenerate, contractive representations of L^1(G) on E and isometric representations of G on E.The Banach ^*-algebra L^1(G) is in the following precise sense its own opposite, as may be easily verified:The map □:L^1(G)→ L^1(G)^op given byf(s)Δ(s^-1)f(s^-1),s∈ G,is an isometric ^*-isomorphism.Multipliers of Banach algebras. We refer the reader to <cit.> for details and more results on multipliers of modules. Let 𝒜 be a Banach algebra. A multiplier of 𝒜 is a pair of maps (L,R) from 𝒜 to itself satisfyingaL(b)=R(a)b,a,b∈𝒜.We denote by M(𝒜) the set of all multipliers on 𝒜. This is a linear space with addition and scalar multiplication defined in the obvious way. 
There is a canonical linear map 𝒜→M(𝒜) defined by assigning to each a_0∈𝒜 the pair of maps L_a_0, R_a_0:𝒜→𝒜 given by L_a_0: a↦ a_0a and R_a_0: a↦ aa_0, for a∈𝒜. If 𝒜 is faithful as a bimodule over itself, this map is injective. If 𝒜 is unital, it is surjective. In general, it need be neither injective nor surjective. When 𝒜 is faithful as a bimodule over itself, one can show that M(𝒜) embeds linearly into ℬ(𝒜)⊕_∞ℬ(𝒜). Hence, M(𝒜) inherits the strict topology, i.e., the locally convex topology generated by the family of seminorms (L,R)↦‖L(a)‖+‖R(a)‖, where a∈𝒜. The next proposition may be verified with routine arguments: Let 𝒜 be a Banach algebra and assume that 𝒜 is faithful as a bimodule over itself. Then M(𝒜) is a strictly closed subspace of ℬ(𝒜)⊕_∞ℬ(𝒜). The following lemma concerning strictly compact subsets of M(𝒜) is Lemma 8 in <cit.> put in our more general setting. The proof is the same, and so we omit it. Let 𝒜 be a Banach algebra which is faithful as a bimodule over itself and let S be a non-empty strictly compact subset of M(𝒜). Then S satisfies the following two properties: * S is norm-bounded, * for any element a_0∈𝒜 and any ε>0, there exist finitely many elements x_1,…,x_n∈ S such that, for every x∈ S, there is a k∈{1,…,n} for which ‖x· a_0-x_k· a_0‖_𝒜<ε. Let 𝒜 be a Banach algebra with a bounded approximate unit, so that any essential 𝒜-bimodule is automatically faithful. Given another Banach algebra ℬ and a bounded homomorphism φ:𝒜→ℬ, the Banach algebra ℬ becomes in a natural way a bimodule over 𝒜. Moreover, if φ has dense range, the image of any bounded approximate unit of 𝒜 is a bounded approximate unit of ℬ. The following is Theorem 2.8 in <cit.>, where it is stated in greater generality: Let 𝒜 be a Banach algebra with a bounded approximate unit, let ℬ be another Banach algebra and let φ:𝒜→ℬ be a bounded homomorphism with dense range. There is a unique extension Φ:M(𝒜)→M(ℬ), and this extension is strictly continuous. We close this part with some remarks on the multiplier algebra of L^1(G), for a locally compact group G. For s∈ G, we denote by L_s and R_s the left, respectively right, translation operators on L^1(G). Precisely, for s,t∈ G and f∈ L^1(G), L_sf(t)=f(s^-1t) and R_sf(t)=f(ts). Thus, (L_s,Δ(s^-1)R_s^-1) is a multiplier of L^1(G). This gives rise to a multiplicative embedding G↪M(L^1(G)), and this embedding is continuous when M(L^1(G)) is equipped with the strict topology. Thus, given an essential L^1(G)-bimodule E, the group G, respectively its opposite G^op, acts on E via the extension of the bimodule structure to the multiplier algebra. The next proposition, which may be verified with a straightforward computation, shows that these actions agree with the actions we already have from the L^1(G)-bimodule structure via Proposition <ref>. Let G be a locally compact group and let E be an essential L^1(G)-bimodule with left action φ and right action ψ. For each ξ∈ E and s∈ G, (L_s,Δ(s^-1)R_s^-1)·ξ=φ(s)ξ and ξ·(L_s,Δ(s^-1)R_s^-1)=ψ(s)ξ. Symmetrized pseudofunction algebras. Let G be a locally compact group. Given a class of Banach spaces ℰ, denote by _ℰ(G) the class of isometric representations of G on a Banach space in ℰ. For a subclass ℛ of _ℰ(G), we define a seminorm on L^1(G) by setting ‖f‖_ℛ=sup{‖π(f)‖ : π∈ℛ}. Set I_ℛ=⋂_π∈ℛ ker(π), which is a closed two-sided ideal in L^1(G), so that the quotient L^1(G)/I_ℛ inherits the algebra structure from L^1(G). We denote by F_ℛ(G) the completion of L^1(G)/I_ℛ with respect to the norm induced by ℛ.
This is a Banach algebra with multiplication extending the convolution product on L^1(G); we refer to it as the Banach algebra of ℛ-pseudofunctions. When ℛ is all of _ℰ(G), we denote the resulting Banach algebra by F_ℰ(G). When ℛ consists of only one representation, say π, we simply write F_π(G). Accordingly, we refer to these Banach algebras as algebras of ℰ-pseudofunctions, respectively π-pseudofunctions. Well-known examples of pseudofunction algebras include the universal and the reduced group C^*-algebras, C^*(G) and C^*_r(G), respectively. In the notation introduced above, the former is the pseudofunction algebra F_ℋ(G), where ℋ is the class of complex Hilbert spaces, and the latter is F_λ(G), where λ is the left regular representation of G. Further, for 1≤ p<∞ and λ_p the left regular representation of G on L^p(G), F_λ_p(G) is the Banach algebra of p-pseudofunctions, which goes back to the work of Herz and is often denoted by PF_p(G). This Banach algebra also appeared in the work of Phillips, e.g. <cit.>, where it is denoted by F^p_r(G) to emphasize its connection to the reduced group C^*-algebra. It is easy to see that if π is an isometric representation of G and π lies in the class ℛ, then π extends to a non-degenerate contractive representation of F_ℛ(G). That is, F_ℛ(G) is universal for ℛ in the same way that C^*(G) is universal for all unitary representations of G. Conversely, by Proposition <ref>, if π is a non-degenerate contractive representation of F_ℛ(G), then π is the extension of the integrated form of an isometric representation of G. However, we are not guaranteed that this representation lies in the class ℛ. The involution on L^1(G) need not extend to F_ℛ(G), and so F_ℛ(G) is in general only a Banach algebra and not necessarily a Banach ^*-algebra. However, if the class ℛ is closed under duality, the involution on L^1(G) does extend. Recall that if π is an isometric representation of G on a Banach space E, its dual representation π^* is the isometric representation on the dual Banach space E^* given, for t∈ G, η∈ E^* and x∈ E, by (π^*(t)η)(x)=η(π(t^-1)x). We say that the class ℛ is closed under duality if π^*∈ℛ whenever π∈ℛ. Proposition <ref> below is proven in a special case in <cit.>. The proof in the generality stated here is essentially the same, and so we omit it. Let ℛ be a class of continuous isometric representations of G closed under duality. Then the involution on L^1(G) is an isometry with respect to the norm induced by ℛ. For a class ℛ of continuous isometric representations of G, denote by ℛ^* the smallest class of continuous isometric representations of G which is closed under duality and which contains ℛ. We denote by F^*_ℛ(G) the completion of L^1(G) with respect to the norm ‖f‖_F^*_ℛ(G)=sup{‖π(f)‖ : π∈ℛ^*}. By Proposition <ref>, F^*_ℛ(G) is a Banach ^*-algebra, and we shall refer to it as the symmetrized Banach ^*-algebra of ℛ-pseudofunctions. As in the non-symmetrized setting, when ℛ is all of _ℰ(G) or when ℛ consists of a single representation π, we write F^*_ℰ(G), respectively F^*_π(G), and we refer to these accordingly. Let ℰ_ref be the class of all reflexive Banach spaces. For a subclass ℰ⊂ℰ_ref, denote by ℰ' the class consisting of the Banach spaces which are dual to the Banach spaces in ℰ. Let ℛ be a subclass of _ℰ(G) and denote by ℛ' the subclass of _ℰ'(G) consisting of the representations which are dual to the representations in ℛ. Then ℛ^*=ℛ∪ℛ'.
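As a concrete illustration of this symmetrization (a worked example of ours, not taken from the references, and assuming 1<p<∞ so that L^p(G)^* identifies with L^{p'}(G) for the conjugate exponent p'), take ℛ={λ_p}. Computing the dual representation against the duality pairing gives

    \[ (\lambda_p^*(t)\eta)(x) \;=\; \eta\bigl(\lambda_p(t^{-1})x\bigr)
       \;=\; \int_G \eta(t^{-1}u)\,x(u)\,d\mu_G(u)
       \;=\; (\lambda_{p'}(t)\eta)(x),
       \qquad \eta\in L^{p'}(G),\ x\in L^p(G), \]

so λ_p^*=λ_{p'}, hence ℛ^*={λ_p,λ_{p'}} and ‖f‖_F^*_λ_p(G)=max{‖λ_p(f)‖, ‖λ_{p'}(f)‖} for f∈ L^1(G). This is the symmetrized p-pseudofunction algebra mentioned in the introduction, and it is an instance of the general formula recorded next.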
In this reflexive setting, a straightforward computation gives ‖f‖_F^*_ℛ(G)=max{‖f‖_F_ℛ(G), ‖f‖_F_ℛ'(G)}. Similar to L^1(G), F^*_ℛ(G) is self-opposite via the map defined in equation (<ref>). This need not be true for general pseudofunction algebras. Property (T) for groups acting on Banach spaces. A continuous linear isometric representation of a locally compact group on a Banach space E is a continuous homomorphism π:G→Iso(E), where Iso(E) is the group of linear isometries of E. Given such a representation (π,E), we denote by E^π the subspace of G-invariant vectors. In <cit.>, Bader, Furman, Gelander and Monod define property (T_ℰ) as follows: <cit.> Let ℰ be a class of Banach spaces. A locally compact group G has property (T_ℰ) if, for any continuous isometric representation (π,E) with E in the class ℰ, the quotient representation π': G →Iso(E/E^π) does not have almost G-invariant vectors. If ℰ consists of a single Banach space E, we write (T_E) instead of (T_ℰ). We recall from Theorem A in <cit.> that, for a second countable locally compact group G, Kazhdan's property (T) coincides with property (T_L^p(μ)), for any σ-finite measure μ and any 1≤ p < ∞. We shall use the following alternative definition of property (T_ℰ), which is equivalent to the definition above by Lemma 18 in <cit.>. There, the lemma is stated for second countable locally compact groups, but the additional assumption that the group is second countable can be dropped. Let ℰ be a class of Banach spaces. A second countable locally compact group G has property (T_ℰ) if, whenever (π,E) is a continuous isometric representation of G with E in the class ℰ admitting a net (ξ_i)_{i∈I} of almost invariant unit vectors, there exists a net (η_i)_{i∈I} of G-invariant vectors such that ‖ξ_i-η_i‖_E→0. Parallel to what is the case for Kazhdan's property (T), there is a quantitative version of Definition <ref> (see <cit.>). Because we shall not be interested in the size of Kazhdan-type constants in this paper, we stick to the qualitative formulation. It may be tempting to define the Banach space version of property (T) for groups parallel to the often used definition of Kazhdan's property (T) which only requires the existence of a non-zero invariant vector. A priori, this is a different and weaker property. We shall refer to it as weak property (T_ℰ). Let ℰ be a class of Banach spaces. A locally compact group G has weak property (T_ℰ) if any continuous isometric representation (π,E) with E∈ℰ admitting almost invariant vectors has a non-zero G-invariant vector. It is well known that property (T_ℋ) is equivalent to weak property (T_ℋ) when ℋ is the class of Hilbert spaces, in which case we recover Kazhdan's property (T). A bit more generally, Proposition <ref> gives two sufficient conditions on the class ℰ for the equivalence of property (T_ℰ) and weak property (T_ℰ). This may be of independent interest. The conditions are well known to experts, but to our knowledge they do not appear explicitly in the literature. For any second countable locally compact group G and any class of Banach spaces ℰ, property (T_ℰ) implies weak property (T_ℰ). The converse is true if ℰ satisfies either one of the following properties: * ℰ is stable under quotients, * ℰ is a class of superreflexive Banach spaces stable under taking complemented subspaces. Assume G does not have property (T_ℰ). We can then find a continuous isometric representation (π,E) of G with E in ℰ such that the quotient E/E^π admits a net of almost invariant vectors. However, E/E^π has no non-zero G-invariant vectors, by construction.
If there is a bounded isomorphism from a space in ℰ to E/E^π, it follows that G does not have property (T_ℰ). This is trivially the case if ℰ is stable under quotients. If ℰ consists of superreflexive Banach spaces then E^π is a complemented subspace and its complement is isomorphic to the quotient E/E^π (see, e.g., <cit.>). If ℰ furthermore is stable under taking complemented subspaces, we have a contraction from a space in ℰ to the quotient E/E^π. Hence, if the class ℰ satisfies either of the conditions (i) or (ii), we see that if G does not have property (T_ℰ) it does not have weak property (T_ℰ) either.§ PROPERTY (T_ℰ) FOR BANACH ALGEBRASIn this section we define property (T_ℰ), as well as a weaker version of it, for a Banach algebra acting on a family of Banach spaces ℰ. Our definitions extend that of Bekka-Ng<cit.> for (not necessarily unital) C ^*-algebras to a Banach algebraic setting. The main result in this section, Theorem <ref>, relates the group theoretic property (T_ℰ) of Bader et al for locally compact groups to that of its symmetrized Banach ^*-algebra of ℰ-pseudofunctions.Denoting by L^p be the class of all L^p-spaces on σ-finite measure spaces, we show that property (T_L^p) for F^*_L^ p(G) is independent on the parameter p.Finally, under the assumption that G has Kazhdan's property ( T), we obtain property (T_L^q) for F^*_L^ p(G).Let 𝒜 be a Banach algebra and E an 𝒜-bimodule. We say that ξ∈ E is 𝒜-central if, for all a∈𝒜, a·ξ=ξ· a. The set of all such elements constitute a closed subspace of E, which we denote by E^𝒜. A net ξiI in E is said to be almost 𝒜-central if, for every finite subset F⊂𝒜 and every ε>0, there is an index i_0∈ I such that, for all i≽ i_0,sup_a∈ Fa·ξ_i-ξ_i· aE<ε.A net ξiI in E is said to be strictly almost 𝒜-central if, for every strictly compact subset S⊂M(𝒜) and every ε>0, there is an index i_0∈ I such that, for all i≽ i_0,sup_x∈ Sx·ξ_i-ξ_i· xE<ε. We can now state our two main definitions: Let 𝒜 be a Banach algebra with a bounded approximate unit and let ℰ be a class of Banach spaces. We say that 𝒜 has property (T_ℰ) if, whenever E∈ℰ is an essential 𝒜-bimodule admitting a net of strictly almost 𝒜-central unit vectors ξiI, then there exists a net ηiI in E^𝒜 such that ξ_i-η_iE→ 0. Let 𝒜 be a Banach algebra with a bounded approximate unit and let ℰ be a class of Banach spaces. We say that 𝒜 has weak property (T_ℰ) if, whenever E∈ℰ is an essential 𝒜-bimodule admitting a net of strictly almost 𝒜-central unit vectors, then E contains a non-zero central vector.It is immediately clear that property (T_ℰ) implies weak property (T_ℰ).We have chosen to restrict the definitions of the two versions of property (T_ℰ) to Banach algebras possessing a bounded approximate unit. This includes, in particular, all the pseudofunction algebras. The definitions, however, are sensible for any Banach algebra which is faithful as a bimodule over itself.When 𝒜 is a C^*-algebra and ℋ is the class of Hilbert spaces, our property (T_ℋ) for 𝒜 recovers the stronger version of property ( T) of Bekka and Ng while our weak property (T_ℋ) recovers the weaker version of their property ( T). Indeed, the assumption that the bimodules are essential forces the extension of the bimodule structure to the multiplier algebra to be unital, and unital contractive algebra homomorphisms between C^*-algebras are necessarily ^*-preserving. 
Hence, we shall refer to (weak) property (T_ℋ) simply as (weak) property ( T).Our terminology differs from that of Bekka and Ng for C^*-algebras: While Bekka and Ng use the terms strong property ( T) and property ( T) for the stronger, respectively, weaker version, we prefer the terms property ( T) and weak property ( T). In particular, property ( T) is the weaker version for them and the stronger version for us. As we shall see in a moment, our terminology is better aligned with the terminology on the group level. Moreover, we avoid confusion with the established notion of strong property ( T) for groups. If E^𝒜 is a complemented subspace of E and P is the projection onto E^𝒜 along its complement, we may take η_i=Pξ_i in the definition of property (T_ℰ). In his definition of property ( T) for a unital C^*-algebra 𝒜 <cit.>, Bekka considered 𝒜-bimodules admitting a net of almost 𝒜-central unit vectors rather than strictly almost 𝒜-central unit vectors. When 𝒜 is unital, his definition and that of Ng coincides (see <cit.>). In the more general setting of property (T_ℰ) for Banach algebras, the same phenomenon happens. Let 𝒜 be a Banach algebra with a bounded approximate unit and let ℰ be a class of Banach spaces. Assume 𝒜 satisfies the following property: whenever E∈ℰ is an essential 𝒜-bimodule admitting a net ξiI of almost 𝒜-central unit vectors then E contains a net ηiI of central vectors such that ξ_i-η_iE→0. Then 𝒜 has property (T_ℰ). The converse is true if 𝒜 is unital. Consider the canonical embedding ς:𝒜↪M(𝒜). If F⊂𝒜 is a finite subset then ς(F) is a finite, hence strictly compact, subset of M(𝒜). Now, for any ξ∈ E, we havesup_a∈ Fa·ξ-ξ· aE=sup_x∈ς(F)x·ξ-ξ· xE.Hence, any net of strictly almost 𝒜-central vectors is automatically also a net of almost 𝒜-central vectors. It is thus clear that property (T_ℰ) for 𝒜 is implied by the mentioned property.For the converse implication, assume that 𝒜 is unital. Let S be any strictly compact subset of M(𝒜). Because 𝒜 is unital, any element in M(𝒜) is of the form (L_a,R_a), for some a∈𝒜. Thus, S=ς(F) for some (not necessarily finite) subset F of 𝒜. For a given ε>0, we apply Lemma <ref>(ii) with the identity of 𝒜 in place of a_0 to obtain a finite number of elements a_1,…,a_n∈ F such that, for any a∈ F, there is a k∈{1,…,n} for whicha-a_k𝒜=L_a(1_𝒜)-L_a_k(1_𝒜)𝒜<ε.Let E be an 𝒜-bimodule. For a∈ F, take k such that equation (<ref>) holds. Then, for any ξ∈ E,a·ξ-ξ· aE≤a_k·ξ-ξ· a_kE+2ε.Thus, for any ξ∈ E,sup_x∈ς(F)x·ξ-ξ· xE=sup_a∈ Fa·ξ-ξ· aE≤sup_k∈{1,…,n}a_k·ξ-ξ· a_kE+2ε.So any net of vectors in E which is almost 𝒜-central is also strictly almost 𝒜-central. Hence, if E admits a net of almost 𝒜-central unit vectors, property (T_ℰ) of 𝒜 will imply the existence of a net ηiI of central vectors such that ξ_i-η_iE converges to zero. The similar statement to that of Proposition <ref> but for weak property (T_ℰ) also holds with essentially the same proof. Before proceeding to specific cases, we record the following permanence property:Let 𝒜 be a Banach algebra with a bounded approximate unit, let ℬ be another Banach algebra and let φ:𝒜→ℬ be a bounded homomorphism with dense range. If 𝒜 has (weak) property (T_ℰ), for a class of Banach spaces ℰ, then so does ℬ. Let E∈ℰ be an essential ℬ-bimodule admitting a net ξiI of strictly almost M(ℬ)-central unit vectors. Through precomposition with φ, E becomes an 𝒜-bimodule, and as such, it isessential because φ has dense range. We check that the net ξiI remains almost central for the induced M(𝒜)-bimodule structure. 
By Theorem <ref>, φ extends to a strictly continuous homomorphism Φ:M(𝒜)→M(ℬ) and the M(𝒜)-bimodule structure on E induced through Φ from the M(ℬ)-bimodule structure agrees with the extension of the 𝒜-bimodule structure induced through φ from the ℬ-bimodule structure. Let S⊂M(𝒜) be any strictly compact subset. The image of S under Φ is then a strictly compact subset of M(ℬ). Hence,sup_x∈ Sx·ξ_i-ξ_i· xE=sup_x∈ SΦ(x)·ξ_i-ξ_i·Φ(x)E=sup_y∈Φ(S)y·ξ_i-ξ_i· yE→0,and so, ξiI is a net of strictly almost 𝒜-central unit vectors. Now, if ξ is any 𝒜-central vector, then density of the image of 𝒜 under φ implies that ξ must also be ℬ-central. Hence, if 𝒜 has (weak) property (T_ℰ) then so does ℬ.§.§ Locally compact groups and their pseudofunction algebrasIn this section we provide a characterization of (weak) property (T_ℰ) of a locally compact group G in terms of (weak) property (T_ℰ) of F^*_ℰ(G) for a class of Banach spaces ℰ; see Theorem <ref> and <ref>.It generalizes the similar result of Bekka and Ng in <cit.> from the C^*-algebra setting to the Banach ^*-algebra setting. The generalization comes in two versions reflecting the fact that, unlike in the Hilbert spaces setting, property (T_ ℰ) for a group need not be equivalent to its weak relative. The proof relies on a natural way of constructing an isometric representation from an F^*_ℰ(G)-bimodule, and vice versa. Let ℰ be a class of Banach spaces and let E∈ℰ be an essential F^*_ℰ(G)-bimodule with left and right actionsφ:F^*_ℰ(G)→ℬ(E)ψ:F^*_ℰ(G)^op→ℬ(E).By Proposition <ref> and Remark <ref>, φ and ψ are induced from isometric representations of G, respectively G^op, which we shall also denote by φ and ψ. Since φ and ψ have commuting ranges as left and right actions of F^*_ℰ(G), their ranges as group representations commute, as well. We construct a new isometric representation π of G on E by settingπ(t)ξ=φ(t)ψ(t^-1)ξ,t∈ Gξ∈ E.It is easy to see that a vector ξ∈ E is G-invariant if and only if it is F^*_ℰ(G)-central. Further, for every vector ξ∈ E and every t∈ G, Proposition <ref> yields thatπ(t)ξ-ξE=φ(t)ξ-ψ(t)ξE=(L_t,Δ(t^-1)R_t^-1)·ξ-ξ·(L_t,Δ(t^-1)R_t^-1)E,Because the embedding G↪M(F^*_ℰ(G)) is continuous when M(F^*_ℰ(G)) is equipped with the strict topology, we see that any net of almost M(F^*_ℰ(G))- strictly central vectors is also a net of almost G-invariant vectors. Now let (π,E) be an isometric representation of G with E ∈ℰ. Then π extends to a non-degenerate, contractive representation of F^*_ℰ(G) on E. Further, since the trivial representation 1_G is contained in _ℰ(G), it extends to F^*_ℰ(G). This induces an essential F^*_ℰ(G)-bimodule structure on E with left action π and right action 1_G. It is easy to see that the F^*_ℰ(G)-central vectors for this bimodule structure are exactly the G-invariant vectors. Further, for each ξ∈ E and each f∈ C_c(G), π(f)ξ-1_G(f)ξE =f(s)(π(s)ξ-ξ)Gμ_G(s)E≤f(s)π(s)ξ-ξEGμ_G(s)≤sup_s∈ fπ(s)ξ-ξEf1.Let x∈ F^*_ℰ(G). For any ε>0, we can find f∈ C_c(G) such that x-fF^*_ℰ(G)<ε. Thenπ(x)ξ-1_G(x)ξE < π(f)ξ-1_G(f)ξE+ 2ε.Hence, if ξiI is a net in E of almost invariant unit vectors then it is almost F^*_ℰ(G)-central for the bimodule structure on E with left action π and right action 1_G. In fact, as we shall see next, it will be almost central for the extension of the bimodule structure to the multiplier algebra. 
We show this in the following technical lemma, which is based on the proof of Proposition 10 in <cit.>.Let (π,E) be an isometric representation of the locally compact group G with E in the class ℰ, and view E as an F^*_ℰ(G)-bimodule with left action π and right action 1_G. Then any net of almost F^*_ℰ(G)-central unit vectors is automatically strictly almost F^*_ℰ(G)-central. Suppose ξiI is a net of almost F^*_ℰ(G)-central unit vectors in E. Fix a_0∈ F^*_ℰ(G) such that 1_G(a_0)=1. Thenπ(a_0)ξ_iE-1≤π(a_0)ξ_i-1_G(a_0)ξ_iE,for all i∈ I, and so,lim_iπ(a_0)ξ_iE=1.We may assume that π(a_0)ξ_i is non-zero for all i∈ I as we can otherwise pass to a subnet. Define, for each i∈ I,η_i=π(a_0)ξ_i/π(a_0)ξ_iE.We claim that ηiI constitutes a net of strictly almost F^*_ℰ(G)-central unit vectors. To see this, let S be any strictly compact subset of M(F^*_ℰ(G)). Given ε>0, we can find a finite collection of elements x_1,…,x_n of S such that, for every x∈ S, there is a k∈{1,…,n} for whichxa_0-x_ka_0F^*_ℰ(G)<ε/12.Take i_0∈ I such that the following hold, for all i≽ i_0 and all k=1,…,n,π(a_0)ξ_iE≥1/2, π(a_0)ξ_i-1_G(a_0)ξ_iE<ε/4sup_y∈ Sy, π(x_ka_0)ξ_i-1_G(x_ka_0)ξ_iE<ε/12.Now, given x∈ S, take k∈{1,…,n} such that (<ref>) holds. Thenx·η_i-η_i· xE =1/π(a_0)ξ_iEπ(x)π(a_0)ξ_i-1_G(x)π(a_0)ξ_iE≤2π(xa_0)ξ_i-π(x_ka_0)ξ_iE+2π(x_ka_0)ξ_i-1_G(x_ka_0)ξ_iE=+21_G(x_ka_0)ξ_i-1_G(xa_0)ξ_iE+21_G(xa_0)ξ_i-1_G(x)π(a_0)ξ_iE≤ε/6+ε/6+ε/6+1_G(x)ε/2sup_y∈ Sy <ε.Thus, ηiI is indeed a net of strictly almost F^*_ℰ(G)-central unit vectors in E. Now, by construction of the net ηiI, the norm difference ξ_i-η_iE converges to zero. Hence, ξiI is a net of strictly almost F^*_ℰ(G)-central unit vectors in E, as well. Theorem <ref> below, which is one of our main results, relates property (T_ℰ) for a locally compact group G with property (T_ℰ) of its associated symmetrised ℰ-pseudofunction algebra.thmLet G be a second countable locally compact group and let ℰ be a class of Banach spaces. The following are equivalent: * G has property (T_ℰ),* F^*_ℰ(G) has property (T_ℰ).(i)⇒(ii): Assume that G has property (T_ℰ) and let E∈ℰ be an essential F^*_ℰ(G)-bimodule admitting a net ξiI of strictly almost F^*_ℰ(G)-central unit vectors. Then ξiI is almost G-invariant for the isometric representation π of G induced by the F^*_ℰ(G)-bimodule structure. By the assumption that G has property (T_ℰ), we obtain a net of G-invariant vectors ηiI such that ξ_i-η_iE→0. Since the C_c(G) is dense in F^*_ℰ(G), we see that each η_i is F^*_ℰ(G)-central. Thus, F^*_ℰ(G) has property (T_ℰ).(ii)⇒(i): Assume F^*_ℰ(G) has property (T_ℰ) and let (π,E) be an isometric representation of G on a Banach space E in ℰ admitting a net ξiI of almost invariant unit vectors. Then ξiI is almost F^*_ℰ(G)-central for the bimodule structure on E with left action π and right action 1_G. Hence, ξiI is automatically strictly almost F^*_ℰ(G)-central, by Lemma <ref>. By the assumption that F^*_ℰ(G) has property (T_ℰ), we obtain a net ηiI of F^*_ℰ(G)-central vectors such that ξ_i-η_iE→0. Hence, G has property (T_ℰ).Theorem <ref> also holds with F_ℰ(G) in place of F^*_ℰ(G). In fact, the proof shows the following stronger statement: If G has property (T_ℰ) then F_ℛ(G) has property (T_ℰ), for any class ℛ of isometric Banach space representations of G. Further, a sufficient condition for the converse implication is that ℛ contains the class of all isometric representations of G on a space in ℰ.The similar statement holds when exchanging property (T_ℰ) with weak property (T_ℰ). 
The proof is the same mutatis mutantis, and so, we omit it. thm-1Let G be a locally compact group and ℰ a class of Banach spaces. The following are equivalent: * G has weak property (T_ℰ),* F^*_ℰ(G) has weak property (T_ℰ).thm1When ℰ is a class satisfying either of the two conditions of Proposition <ref> so that property (T_ℰ) and weak property (T_ℰ) for the group are equivalent, the two theorems <ref> and <ref> can be merged into one. This holds, in particular, when ℰ is the class of Hilbert spaces, in which we recover the similar result of Bekka and Ng in <cit.>. As an immediate corollary to Theorem <ref> and to Theorem A in <cit.>, we obtain the following equivalence:Let G be a second countable locally compact group and let 1≤ p,q<∞. Then F^*_L^p(G) has property (T_L^p) if and only if F^*_L^q(G) has property (T_L^q).§.§ Property (T_L^q) for F^*_L^p(G)Let ℰ be the class of L^p-spaces. One may view the associated Banach ^*-algebras F^*_L^p(G) as interpolating between L^1(G) and the universal group C^*-algebra C^*(G) as p varies from 1 to 2. Precisely, for 1≤ q≤ p≤ 2, the identity on L^1(G) extends to a contractive homomorphism F^*_L^q(G)→ F^*_L^p(G).[This follows from the similar result in the classical (non-symmetrized) setting by Gardella and Thiel (see <cit.>), but it can also be proven more directly in the symmetrized setting via interpolation theory.The latter proof is part of work in progress of the first named author.] With this in mind, we obtain the following results as consequences to Theorem <ref>:Let G be a second countable locally compact group with property (T) and let 1≤ p≤2. Then F^*_L^p(G) has property (T_L^q), for all 1≤ q≤ p and all p'≤ q<∞, where p' is the Hölder conjugate of p. Since G has property ( T), it has property (T_L^q), for every 1≤ q<∞, by <cit.>, and so, F^*_L^q(G) has property (T_L^q), by Theorem <ref>. For 1≤ q≤ p or p'≤ q<∞, the identity on L^1(G) extends to a contractive homomorphism F^*_L^q(G)→ F^*_L^p(G) with dense range. It follows by Proposition <ref> that F^*_L^p(G) has property (T_L^q). For discrete groups we obtain a similar result also for parameters q in the interval between p and p'. Let Γ be a discrete group with property ( T) and let 1≤ p≤2. Then F^*_L^p(Γ) has property (T_L^q), for all 1≤ q<∞. It suffices to show the statement for p< q< p'; the cases where q is in between 1 and p or greater than p' are covered in Corollary <ref>. As in the proof of Corollary <ref>, it follows from <cit.> and Theorem <ref> that F^*_L^q(Γ) has property (T_L^q). Let L^q(Ω,ν) be an F^*_L^p(Γ)-bimodule admitting a net ξiI of almost F^*_L^p(Γ)-central unit vectors. By construction of F^*_L^q(Γ), we see that the F^*_L^p(Γ)-bimodule actions extend continuously to F^*_L^q(Γ). Let {x_1,…,x_n} be any finite subset of F^*_L^q(Γ) and let ε>0. Take f_1,…, f_n∈ℓ^1(Γ) such that x_j-f_jF^*_L^q(Γ)<ε/2, for j=1,…, n. Then,sup_j∈{1,…,n}x_j·ξ_i-ξ_i· x_jL^q(Ω,ν)<sup_j∈{1,…,n}f_j·ξ_i-ξ_i· f_jL^q(Ω,ν)+ε.Because we can view {f_1,…,f_n} as a finite subset of F^*_L^p(Γ), the supremum on the right-hand side can be made arbitrarily small when i is chosen large enough. It follows that ξiI is almost central for the F^*_L^q(Γ)-bimodule structure. By Proposition <ref>, property (T_L^q) for F^*_L^q(Γ) then implies the existence of a net ηiI in L^q(Ω,ν) consisting of F^*_L^q(Γ)-central vectors such that ξ_i-η_iL^q(Ω,ν) converges to zero. 
As the F^*_L^p(Γ)-bimodule actions are the precomposition of the F^*_L^q(G)-bimodule actions with the canonical contractive homomorphism F^*_L^p(Γ)→ F^*_L^q(Γ), the net ηiI is also central for the F^*_L^p(Γ)-bimodule structure. We conclude from Proposition <ref> that F^*_L^p(Γ) has property (T_L^q). § PROPERTY (T_L^P) FOR SYMMETRIZED P-PSEUDOFUNCTION ALGEBRASIn this section, we continue to focus our attention to the class of L^p-spaces on σ-finite measure spaces, where 1≤ p<∞.Due to Theorem <ref>, G has property (T_L^p) if and only if F^*_L^p(G) has property (T_L^p). For p=2, we recover the result of Bekka and Ng in <cit.> that G has property ( T) if and only if the universal group C^*-algebra C^*(G) of G has property ( T).After establishing their result, Bekka andNg ask if C^*(G) can be replaced by the reduced group C^*-algebra C^*_r(G) of G.In this section, we ask the same question but in the more general setting of actions on L^p-spaces.Here, the role of the reduced group C^*-algebra is played by F^*_λ_p(G). Thus, we ask if (or when) property (T_L^p) for the group is captured by F^*_λ_p(G). We shall see that this is the case when G is a discrete group (see Theorem <ref>). This result generalizes that of Bekka and Ng. Our proof relies on a generalisation of Fell's absorption principle to isometric representations on L^p-spaces, which may be of independent interest. Further, when G is discrete, we show that weak property (T_SL^p) for F^*_λ_p(G) implies property (T_L^p) for the group (see Theorem <ref>), where SL^p is the class of closed subspaces of L^p-spaces on σ-finite measure spaces. Let (π,L^p(Ω,ν)) be an isometric L^p-representation of the locally compact group G. We denote by 𝕀 the trivial representation of G on L^p(Ω,ν) and by λ_p the left-regular representation of G on L^p(G). Consider the L^p-spaceE L^p(G,L^p(Ω,ν)).This space contains the algebraic tensor product L^p(G)⊙ L^p(Ω,ν) as a dense subspace. Hence, λ_p⊗π and λ_p⊗𝕀 defines isometric representations of G on E. For p=2, we know from Fell's absorption principle that these two representations are unitarily equivalent. In Proposition <ref> below, we show the analogous statement for general 1≤ p<∞. Let G be a locally compact group, let 1≤ p<∞, and let (π,L^p(Ω,ν)) an isometric representation of G. Then λ_p⊗π and λ_p⊗𝕀 are equivalent in the sense that they are intertwined by a surjective isometry of L^p(G;L^p(Ω,ν)). Consider the linear map V:L^p(G;L^p(Ω,ν))→ L^p(G;L^p(Ω,ν)) given by (Vη)(t)=π(t)η(t),η∈ L^p(G;L^p(Ω,ν))t∈ G.It is a straight forward computation to verify that V defines an isometry. Further, the map W:L^p(G;L^p(Ω,ν))→ L^p(G;L^p(Ω,ν)) given by (Wη)(t)=π(t^-1)η(t),η∈ L^p(G;L^p(Ω,ν))t∈ G.defines a linear isometry, as well, andis an inverse of V. In particular, V is surjective. It remains to show that V intertwines λ_p⊗π and λ_p⊗𝕀.For each η∈ L^p(G;L^p(Ω,ν)) and each t,s∈ G,[(λ_p⊗π)(s)(Vη)](t) =π(s)(Vη)(s^-1t) =π(t)η(s^-1t),and[V(λ_p⊗𝕀)(s)(η)](t) =π(t)[(λ_p⊗𝕀)(s)(η)](t) =π(t)η(s^-1t).Hence,(λ_p⊗π)(s)∘ V=V∘(λ_p⊗𝕀)(s),for all s∈ G. Thus, λ_p⊗π and λ_p⊗𝕀 are equivalent.In Proposition <ref>, one can exchange the left regular representation with the right regular representation, ρ_p. That is, ρ_p⊗π is equivalent to ρ_p⊗𝕀. The proof is the same mutatis mutantis. With the L^p-version of Fell's absorption principle at hand, we can construct an F^*_λ_p(G)-bimodule on E from the isometric representation (π,L^p(Ω,ν)) of G as follows: Set φ=λ_p⊗𝕀 and ψ=ρ_p⊗π of G on E. 
Clearly, φ integrates to a representation of F^*_λ_p(G), and ψ does as well by Lemma <ref> and the remark following it. Thus, E is an F^*_λ_p(G)-bimodule with left action φ and right action ψ^op=ψ∘□. Let G be a locally compact group, let (π,L^p(Ω,ν)) be an isometric representation and let E be the F^*_λ_p(G)-bimodule from equation (<ref>). If η∈ E is central thenπ(s)η(t)=η(sts^-1),for all s∈ G and μ_G-almost all t∈ G. Let η∈ E be central. Then η is also central for the extension of the bimodule structure to M(F^*_λ_p(G)). Hence, for every s∈ G,ψ(s)η=ψ^op(s^-1)η=φ(s^-1)η.In particular, we have equality almost everywhere on G. It follows thatπ(s)η(t)=ψ(s)η(ts^-1)=φ(s^-1)η(ts^-1)=η(sts^-1),for each s∈ G and for μ_G-almost every t∈ G. §.§ Property (T_L^p) for F^*_λ_p for discrete groupsWe show next that property (T_L^p) for a discrete group Γ is detected by its (symmetrized) p-pseudofunction algebra. Recall that pseudofunction algebras of discrete groups are unital, and we can therefore use the equivalent definition of property (T_L^p) from Proposition <ref>.Let Γ be a discrete group. For each 1≤ p<∞, the following are equivalent: * Γ has property (T_L^p),* F^*_L^p(Γ) has property (T_L^p),* F^*_λ_p(Γ) has property (T_L^p). Our proof uses the ideas of the proof of Theorem 9 in <cit.>. (i)⇒(ii) is covered by Theorem <ref> and (ii)⇒(iii) follows from Proposition <ref>. It remains to show (iii)⇒(i). Suppose (π,L^p(Ω,ν)) is an isometric representation of Γ and let E be the F^*_λ_p(Γ)-bimodule from equation (<ref>). Given ξ∈ L^p(Ω,ν) set ζ=δ_e⊗ξ∈ E. For each f∈ C_c(Γ), we havef·ζ =φ(f)(ζ) =∑_r∈Γf(r)δ_r⊗ξ, ζ· f =ψ^op(f)(ζ) =∑_r∈Γf(r)δ_r⊗π(r^-1)ξ.We computef·ζ-ζ· fE =∑_r∈Γf(s)δ_r⊗(ξ-π(r^-1)ξ)E=(∑_s∈Γf(s)(ξ-π(s^-1)ξ)L^p(Ω,ν)^p)^1/p≤f1sup_t∈(f)π(t)ξ-ξL^p(Ω,ν).Suppose π admits a net ξiI of almost Γ-invariant unit vectors in L^p(Ω,ν). For each i∈ I, set ζ_i=1_V⊗ξ_i. Given f∈ C_c(Γ) and ε>0, pick i_f,ε∈ I such that, for all i≽ i_f,ε,sup_t∈(f)π(t)ξ_i-ξ_iL^p(Ω,ν)<ε/f1.By the above calculations, we see that f·ζ_i-ζ_i· fE<ε, for all i≽ i_f,ε. Since C_c(Γ) is dense in F^*_λ_p(Γ), we deduce that ζiI is an almost F^*_λ_p(Γ)-central net. By the assumption that F^*_λ_p(Γ) has property (T_L^p), we obtain a net ηiI of F^*_λ_p(Γ)-central vectors satisfyingδ_e⊗ξ_i-η_iL^p(G;L^p(Ω,ν))→0,By Lemma <ref>, we have equality π(s)(η_i(t))=η_i(s^-1ts), for all s,t∈ G and every i∈ I. Hence, η_i(e) is a Γ-invariant vector in L^p(Ω,ν). Further,ξ_i-η_i(e)L^p(Ω,ν)≤δ_e⊗ξ_i-η_iE→ 0.Hence, we conclude that Γ has property (T_L^p).Theorem <ref> also holds with F_L^p(Γ) and F_λ_p(Γ) in place of F^*_L^p(Γ) and F^*_λ_p(Γ). The proof is the same.As a corollary to Theorem <ref>, we establish a relation between property (T_L^p) and amenability on the level of the p-pseudofunction algebra. Let Γ be a discrete group. Suppose F^*_λ_p(Γ) has property (T_L^p) and it is amenable as a Banach algebra. Then it is finite dimensional. By Theorem 2.3.1 in <cit.>, amenability of F^*_λ_p(Γ) implies that of the unsymmetrized Banach algebra F_λ_p(Γ), which in turn yields amenability of Γ by<cit.>. Moreover,Theorem <ref> combined with Remark <ref> imply thatΓ has Kazhdan property ( T). It is well-known that amenability and property ( T) of discrete groups imply finiteness. Therefore F^*_λ_p(Γ)=ℂΓ is finite dimensional. §.§ Weak property (T_SL^p) for discrete groups Denote by SL^p the class of closed subspaces of L^p-spaces on σ-finite measure spaces, for some fixed 1≤ p<∞. We show that, for a discrete group Γ, weak property (T_SL^p) implies property (T_L^p). 
Moreover, we show that weak property (T_SL^p) for F^*_λ_p(Γ) is intermediate to the two by adapting the proof of Bekka in <cit.> that property ( T) for a discrete group Γ is implied by property ( T) for C^*_r(Γ). Along the way, we show that property (T_L^p) for Γ is implied by the property that isometric representations on spaces in SL^p with almost Γ-invariant vectors necessarily must have a finite dimensional subrepresentation. This should be compared with the similar well-known characterization of property ( T) in the setting of unitary representations (see <cit.>). The original proof of in the setting of unitary representations via the characterization of Kazhdan's property ( T) of Delorme and Guichardet utilizes Schönberg's theorem and a GNS-construction to construct a unitary representation with almost invariant vectors. Since we are concerned with isometric representations on L^p-spaces, this route does not seem feasible to us. In order to circumvent this, we will provide an alternative proof based on notions from ergodic theory, which employs an idea used in <cit.>. Let Γ be a discrete group and let 1≤ p<∞. A representation πΓ→Iso(L^p(X, μ)) is called weakly mixing if, for each pair of finite sets E⊂ L^p(X, μ) and F⊂ L^q(X, μ) and each ϵ >0, there exists t∈Γ such that |⟨π (t) ξ, η⟩|< ϵ for all ξ∈ E, and η∈ F. Let Γ be an infinite discrete group and let Γ↷ (Ω, ℬ, μ) be a measure preserving action of Γ on a probability space (Ω, ℬ, μ). This action is called weakly mixing if, for all ℱ⊂ℬ finite, we have lim inf_g →∞∑_A, B∈ℱ|μ(A∩ gB) - μ(A)μ(B)|=0 Connes and Weiss in <cit.>provide a dynamical characterization of property (T) in terms of ergodic measure preserving actions. We recall it here. A discrete group Γ has Kazhdan's property ( T) if and only if every measure preserving ergodic (even weakly mixing) action of Γ is strongly ergodic. (See also <cit.>). For the sake of completeness we include the proof of the following statements. LetΓ↷ (Ω, ℬ, μ) be a p.m.p action and fix 1≤ p<∞. If the action is weakly mixing, then the Koopman representation π_0 Γ→Iso(L^p_0(Ω, ℬ, μ )) is weakly mixing. Let E and F be finite sets of simple functions ∑ c_A_iχ_A_i given by a finite set {A_i ∈ℬ| 1≤ i≤ n} and a finite set of coefficients {c_A_i| 1≤ i≤ n}, such that ∑ c_A_iμ(A_i)=0. Let ℬ_0 ⊂ℬ to be the set of all these finitely many measurable sets. Since the action is weakly mixing, there is a sequence (t_n)_n such that for any pair A, B ∈ℬ_0 we have μ(t_n A ∩ B)→μ(A)μ(B). Now by bilinearity of the scalar product, for any pair of simple functions in E and F we have ⟨π_0(t_n) ∑_i c_A_iχ_A_i, ∑_j d_B_iχ_B_i⟩ =∑_i,jc_A_id_B_j⟨χ_t_n A_i, χ_B_j⟩→∑_i,jc_A_id_B_jμ(A_i)μ(B_j)=0 Since simple functions are dense in L^p_0(Ω, ℬ, μ ) andL^p'_0(Ω, ℬ, μ ) this shows that the Koopman representation is weakly mixing. Let π be an isometric representation of Γ on a closed subspace X of a Banach space. If π is weakly mixing, then it does not have any non-zero finite dimensional subrepresentations.Let V⊂ X be a finite dimensional invariant subspace with basis {ξ_1,…,ξ_d}.Because π is weakly mixing there exists a sequence (t_n)_n in Γ such that ⟨π(t_n) ξ_i, η⟩→ 0, for all i∈{1,…,d} and all η∈ V^*. Since V is finite dimensional, we may pass to a subsequence of (t_n)_n and assume that there is a T∈Iso(V) such that π(t_n)ξ→ Tξ, for all ξ∈ V. Fix ξ∈ V. For each η∈ V^*, we find ⟨Tξ, η⟩ = ⟨limπ(t_n)ξ, η⟩ =0Therefore Tξ=0, and so, ξ=0 since T is an isometry. Hence, V must be zero. 
We are now ready to prove the main result of this subsection.Let Γ be a discrete group, and let 1≤ p<∞. Each of the following implies the next: * Γ has weak property (T_SL^p),* F^*_λ_p(Γ) has weak property (T_SL^p),* If an isometric representation π of Γ on a subspace of some L^p(Ω, μ) contains almost Γ-invariant vectors, then it has a finite dimensional subrepresentation,* Γ has property (T_L^p).(i)⇒(ii) follows from Theorem <ref> and Proposition <ref>.(ii)⇒(iii): Suppose (π,X) is an isometric representation of Γ with X⊂ L^p(Ω,ν) a closed subspace. SetEℓ^p(Γ)⊗_p X≅ℓ^p(Γ,X)⊂ℓ^p(Γ,L^p(Ω,ν)).Then E is an F^*_λ_p(Γ)-bimodule with left action φ=λ_p⊗𝕀 and right action ψ^op=ρ_p^op⊗π^op. Given ξ∈ L^p(Ω,ν) consider the vector ζ=δ_e⊗ξ in E.We computef·ζ-ζ· fE =φ(f)(δ_e⊗ξ)-ψ^op(f^op)(δ_e⊗ξ)E=∑_s∈Γf(s)δ_s⊗(ξ-π(s^-1)ξ)E=(∑_s∈Γf(r)(ξ-π(r^-1)ξ)L^p(Ω,ν)^p)^1/p≤fpsup_r∈(f)π(r)ξ-ξL^p(Ω,ν) Assume F^*_λ_p(Γ) has weak property (T_SL^p) and suppose π admits a net ξiI of almost Γ-invariant unit vectors in X. For each i∈ I, set ζ_i=δ_e⊗ξ_i. Then ζiI is a net of unit vectors in E. By our above calculations f·ζ_i-ζ_i· fE→ 0, for every f∈ C_c(Γ). Because C_c(Γ) is dense in F^*_λ_p(Γ), it follows that ζiI is an almost F^*_λ_p(Γ)-central net. Thus, E admits a non-zero central vector ζ.By Lemma <ref>, we have, for each pair s,t∈Γ, the equalityπ(s)ζ(r)=ζ(srs^-1).Take t_0∈Γ such that ζ(t_0)≠0, and denote by Cl(t_0)tt_0t^-1t∈ G the conjugacy class of t_0. Since π is an isometric representation of Γ on L^p(Ω,ν), we see from equation (<ref>) that ζ(tt_0t)L^p(Ω,ν)=ζ(t_0)L^p(Ω,ν), for all t∈Γ. From this, we deduce thatζ(t_0)L^p(Ω,ν)^pCl(t_0)=∑_r∈Cl(t_0)ζ(r)L^p(Ω,ν)^p≤∑_r∈ Gζ(r)L^p(Ω,ν)^p=ζp^p<∞.Hence, as ζ(t_0)≠0, the set Cl(t_0) must be finite.Thus, the set π(Γ)ζ(t_0) is finite, and so, its span is a finite dimensional invariant subspace of X.(iii)⇒(iv):Assume that Γ does not have property (T_L^p) and hence not property ( T) (see Remark <ref>). Then, by work of Connes and Weiss, there exists a p.m.pweakly mixing action on a probability space (Ω, μ) admitting an asymptotically invariant sequence (B_n)_n of measurable subsets of Ω with μ(B_n)=1/2, for all n. Since the action is weakly mixing, the Koopman representation π_0Γ→Iso(L_0^p(Ω, μ)) is weakly mixing due to Proposition <ref>. Hence, by Lemma <ref>, π_0 does not have any finite dimensional subrepresentation. However, ξ_n=2χ _B_n-1 provides an almost Γ-invariant sequence in L_0^p(Ω, μ).One can prove the implication (i)⇒(iii) in Theorem <ref> without passing by weak property (T_SL^p) for F^*_λ_p(G). Indeed, if Γ has property (T_SL^p) then any isometric representation on a closed subspace of an L^p-space with almost Γ-invariant vectors contains an invariant vector and hence a 1-dimensional subrepresentation.In Theorem <ref>, the core of the proof of the implication (ii)⇒(iii) is to show that the set Cl(t_0) is finite. Observe that, to reach this conclusion, it is shown that its Haar-measure is finite. The conclusion that Cl(t_0) is finite is therefore contingent on the discreteness of Γ. It does not seem feasible to us to extend this proof to a larger class of groups.Theorem <ref> above remains true when exchanging F^*_λ_p(Γ) with F_λ_p(Γ). plain [t]0.45 Emilie Mai ElkiærDepartment of MathematicsUniversity of Oslo0851 Oslo, Norway [email protected] Sanaz Pooya Institute of MathematicsUniversity of Potsdam 14476 Potsdam, [email protected]
http://arxiv.org/abs/2310.18136v1
{ "authors": [ "Emilie Mai Elkiær", "Sanaz Pooya" ], "categories": [ "math.FA", "math.GR", "math.OA" ], "primary_category": "math.FA", "published": "20231027132655", "title": "Property (T) for Banach algebras" }
We argue that Transformers are essentially graph-to-graph models, with sequences just being a special case. Attention weights are functionally equivalent to graph edges. Our Graph-to-Graph Transformer architecture makes this ability explicit, by inputting graph edges into the attention weight computations and predicting graph edges with attention-like functions, thereby integrating explicit graphs into the latent graphs learned by pretrained Transformers. Adding iterative graph refinement provides a joint embedding of input, output, and latent graphs, allowing non-autoregressive graph prediction to optimise the complete graph without any bespoke pipeline or decoding strategy. Empirical results show that this architecture achieves state-of-the-art accuracies for modelling a variety of linguistic structures, integrating very effectively with the latent linguistic representations learned by pretraining.

§ INTRODUCTION

Computational linguists have traditionally made extensive use of structured representations to capture the regularities found in natural language. The huge success of Transformers <cit.> and their pre-trained large language models <cit.> has brought these representations into question, since these models are able to capture even subtle generalisations about language and meaning in an end-to-end sequence-to-sequence model <cit.>. This raises issues for research that still needs to model structured representations, such as work on knowledge graphs, hyperlink graphs, citation graphs, or social networks.

In this paper we show that the sequence-to-sequence nature of most Transformer models is only a superficial characteristic; underlyingly they are in fact modelling complex structured representations. We survey versions of the Transformer architecture which integrate explicit structured representations with the latent structured representations of Transformers. These models can jointly embed both the explicit structures and the latent structures in a Transformer's sequence-of-vectors hidden representation, and can predict explicit structures from this embedding. In the process, we highlight evidence that the latent structures of pretrained Transformers already include much information about traditional linguistic structures.
These Transformer architectures support explicit structures which are general graphs, making them applicable to a wide range of structured representations and their integration with text.The key insight of this line of work is that attention weights and graph structure edges are effectively the same thing.Linguistic structures are fundamentally an expression of locality in the interaction between different components of a representation.As <cit.> argued, incorporating this information about locality in the inductive bias of a neural network means putting connections between hidden vectors if their associated components are local in the structure.In Transformers <cit.>, these connections are learned in the form of attention weights.Thus, these attention weights are effectively the induced structure of the Transformer's latent representation.However, attention weights are not explicitly part of a Transformer's hidden representation.The output of a Transformer encoder is a sequence of vectors, and the same is true of each lower layer of self-attention.The latent attention weights are extracted from these sequence-of-vector embeddings with learned functions of pairs of vectors.Edges in explicit graphs can be predicted in the same way (from pairs of vectors), assuming that these graphs have also been embedded in the sequence of vectors.In recent years, the main innovation has been in how to embed explicit graphs in the hidden representations of Transformers.In our work on this topic, we follow the above insight and input the edges of the graph into the computation of attention weights.Attention weights are computed from an n× n matrix of attention scores (where n is the sequence length), so we input the label of the edge between nodes i and j into the score computation for the i,j cell of this matrix.Each edge label has a learned embedding vector, which is input to the attention score function in various ways depending on the architecture.This allows the Transformer to integrate the explicit graph into its own latent attention graph in flexible and powerful ways.This integrated attention graph can then determine the Transformer's sequence-of-vectors embedding in the same way as standard Transformers. 
Researchers from the Natural Language Understanding group at Idiap Research Institute have developed this architecture for inputting and predicting graphs under the name of Graph-to-Graph Transformer (G2GT).G2GT allows conditioning on an observed graph and predicting a target graph.For the case where a graph is only observed at training time, we not only want to predict its edges, we also want to integrate the predicted graph into the Transformer embedding.This has a number of advantages, most notably the ability to jointly model all the edges of the graph.By iteratively refining the previous predicted graph, G2GT can jointly model the entire predicted graph even though the actual prediction is done independently for each edge.And this joint modelling can be done in conjunction with other explicit graphs, as well as with the Transformer's induced latent graph.Our work on G2GT has included a number of different explicit graph structures.The original methods were developed on syntactic parsing <cit.>.The range of architectures was further explored for semantic role labelling <cit.> and collocation recognition <cit.>.G2GT's application to coreference resolution extended the complexity of graphs to two levels of representation (mention spans and coreference chains) over an entire document, which was all modelled with iterative refinement of a single graph<cit.>.Current work on knowledge extraction poses further challenges, most notably the issue of tractably modelling large graphs.The code for G2GT is open-source and available for other groups to use for other graph structures (at <https://github.com/idiap/g2g-transformer>). In the rest of this paper, we start with a review of related work on deep learning for graph modelling (Section <ref>).We then present the general G2GT architecture with iterative refinement (Section <ref>), before discussing the specific versions we have evaluated on specific tasks (Section <ref>).We then discuss the broader implications of these results (Section <ref>), and conclude with a discussion of future work (Section <ref>).§ DEEP LEARNING FOR GRAPHSGraph Neural Networks. Early attempts at broadening the application of neural networks to graph structures were pursued by <cit.> and <cit.>, who introduced the Graph Neural Networks (GNNs) architecture as a natural expansion of Recurrent Neural Networks (RNNs) <cit.>. This architecture regained interest in the context of deep learning, expanded through the inclusion of spectral convolution layers <cit.>, gated recurrent units <cit.>, spatial convolution layers <cit.>, and attention layers <cit.>. GNNs generally employ the iterative local message passing mechanism to aggregate information from neighbouring nodes <cit.>. Recent research, analysing GNNs through the lens of <cit.>, has highlighted two key issues: over-smoothing <cit.> and over-squashing <cit.>. Over-smoothing arises from repeated aggregation across layers, leading to convergence of node features and loss of discriminative information. Over-squashing, on the other hand, results from activation functions during message aggregation, causing significant information and gradient loss. These issues limit the capacity of GNNs to effectively capture long-range dependencies and nuanced graph relationships <cit.>. The Transformer architecture <cit.> can be seen as addressing these issues, in that its stacked layers of self-attention can be seen as a fixed sequence of learned aggregation steps. Graph Transformers. 
Transformers <cit.>, initially designed for sequence tasks, represent a viable and versatile alternative to GNNs due to their intrinsic graph processing capabilities. Through their self-attention mechanism, they can seamlessly capture global wide-ranging relationships, akin to handling a fully-connected graph. <cit.> explicitly input relative position relations as embeddings into the attention function, thereby effectively inputting the relative position graph, instead of absolute position embeddings, to represent the sequence. Generalising this explicit input strategy to arbitrary graphs <cit.> has led to a general class of models which we will refer to as Graph Transformers (GT).GT Evolution and Applications. The history of graph input methods used in GTs started with Transformer variations that experimented with relative positions to more effectively capture distance between input elements.Rather than adopting the sinusoidal position embedding introduced by <cit.> or the absolute position embedding proposed by <cit.>, <cit.> added relative position embeddings to attention keys and values, capturing token distance within a defined range.<cit.> proposed Transformer-XL, which used content-dependent positional scores and a global positional score in attention weights. <cit.> demonstrated one of the earliest successful integration of an explicit graph into Transformer's latent attention graph. They introduced the Graph-To-Graph Transformer (G2GT) architecture and applied it to syntactic parsing tasks by effectively leveraging pre-trained models such as BERT <cit.>. <cit.> introduced new methods to enhance interaction between query, key and relative position embeddings within the self-attention mechanism. <cit.> proposed RoFormer, which utilises a rotation matrix to encode absolute positions while also integrating explicit relative position dependencies into the self-attention formulation. <cit.> and <cit.> extended Performer <cit.> to support relative position encoding while scaling Transformers to longer sequences with a linear attention mechanism. Graphormer <cit.> introduced node centrality encoding as an additional input level embedding vector, node distances and edges as soft biases added at attention level, and obtained excellent results on a broad range of graph representation learning tasks. <cit.> built upon the G2GT architecture and proposed an iterative refinement procedure over previously predicted graphs, using a non-autoregressive approach. SSAN <cit.> leveraged the GT approach to effectively model mention dependencies for document-level relation extraction tasks. JointGT <cit.> exploited the GT approach for knowledge to text generation tasks via a joint graph-text encoding. Similarly, TableFormer <cit.> demonstrated the successful utilisation of the GT approach for combined text-table encoding in table-based question answering tasks. <cit.> proposed a GT architecture for simultaneous collocation extraction and lexical function typification, incorporating syntactic dependencies into the attention mechanism. <cit.> showed that the G2GT iterative refinement procedure can be effectively applied to graphs at multiple levels of representation. <cit.> further extended a GT architecture with new edge and node update methods and applied them to graph-structured problems.QAT <cit.> substantially expanded upon GT models to jointly handle language and graph reasoning in question answering tasks. 
In the study conducted by <cit.>, the G2GT model showed substantial improvements in the semantic role labelling tasks.The multitude of successful applications and extensions firmly establish Graph Transformers as a robust and adaptable framework for addressing complex challenges in language and graphs.§ GRAPH-TO-GRAPH TRANSFORMER ARCHITECTUREOur Graph-to-Graph Transformer (G2GT) architecture combines the idea of inputting graph edges into the self-attention function with the idea of predicting graph edges with an attention-like function.By encoding the graph relations into the self-attention mechanism of Transformers, the model has an appropriate linguistic bias, without imposing hard restrictions. Specifically, G2GT modifies the attention mechanism of Transformers <cit.> to input any graph. Given the input sequence W=(x_1,x_2,...,x_n), and graph relations G={(x_i,x_j,l),1 ≤ i,j ≤ n, l ∈ L} (where L is the set of labels), the modified self-attention mechanism is calculated as[Various alternative functions are possible for inputting relation embeddings into attention weight computations.<cit.> provide a survey of previous proposals for relative position encoding. In ongoing work, we have found that using a relation embedding vector to reweight the dimensions in standard dot-product attention works well for some applications.]:e_ij = 1/√(d)[ x_iW^Q(x_jW^K)^T +x_iW^Q(r_ijW^R_1)^T +r_ijW^R_2(x_jW^K)^T ]where r_ij∈{0,1}^|L| is a one-hot vector which specifies the type of the relation between x_i and x_j,[This formulation can be easily extended to multi-label graphs by removing the one-hot constraint.We are investigating the most effective method for doing this.] W^R_1,W^R_2∈ R^|L| × d are matrices of graph relation embeddings which are learned during training, |L| is the label size, and d is the size of hidden representations.The value equation of Transformer <cit.> is also modified to pass information about graph relations to the output of the attention function:z_i = ∑_jα_ij(x_jW^V+r_ijW^R_3)where W^R_3∈ R^|L| × d is another learned relation embedding matrix.To extract the explicit graph from the sequence of vectors output by the Transformer, a classification module is applied to pairs of vectors and maps them into the label space L. Initially, the module transforms each vector into distinct head and tail representations using dedicated projection matrices. Subsequently, a classifier (linear, bilinear or MLP) is applied, to map the vector pair onto predictions over the label space. Notably, each edge prediction can be computed in parallel (i.e. in a non-autoregressive manner), as predictions for each pair are independent of one another. Given the discrete nature of the output, various decoding methods can be employed to impose desired constraints on the complete output graph. These can range from straightforward head-tail order constraints, to more complex decoding algorithms such as the Minimum Spanning Tree (MST) algorithm. 
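To make these two components concrete (relation labels entering the attention scores and values as in the equations above, and edges being read back out of pairs of output vectors), a minimal single-head NumPy sketch is given below. It is illustrative only: the function and variable names are ours, multi-head splitting and masking are omitted, and the classifier is shown in a simple bilinear form, so it should not be read as a description of the released implementation.

import numpy as np

def softmax(e, axis=-1):
    e = e - e.max(axis=axis, keepdims=True)
    e = np.exp(e)
    return e / e.sum(axis=axis, keepdims=True)

def g2g_attention(X, R, Wq, Wk, Wv, Wr1, Wr2, Wr3):
    """Single-head graph-conditioned self-attention (sketch).

    X  : (n, d)     token vectors
    R  : (n, n, L)  one-hot label of the input-graph edge between tokens i and j
    Wq, Wk, Wv      : (d, d) content projections
    Wr1, Wr2, Wr3   : (L, d) learned relation-embedding matrices
    """
    n, d = X.shape
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    Rq, Rk, Rv = R @ Wr1, R @ Wr2, R @ Wr3      # each (n, n, d)

    # attention scores: content-content + content-relation + relation-content terms
    e = (Q @ K.T
         + np.einsum('id,ijd->ij', Q, Rq)
         + np.einsum('ijd,jd->ij', Rk, K)) / np.sqrt(d)
    A = softmax(e, axis=1)

    # values are augmented with relation embeddings before aggregation
    Z = A @ V + np.einsum('ij,ijd->id', A, Rv)
    return Z

def predict_edges(Z, Wh, Wt, U):
    """Score every ordered token pair (i, j) for every output label (sketch).

    Z      : (n, d)      encoder output vectors
    Wh, Wt : (d, k)      head / tail projections
    U      : (k, L, k)   bilinear label scorer
    Returns (n, n, L) unnormalised label scores; an argmax over the last axis
    gives one predicted edge label per pair, computed independently
    (non-autoregressively).
    """
    H, T = Z @ Wh, Z @ Wt
    return np.einsum('ik,klm,jm->ijl', H, U, T)

A decoding step such as the MST algorithm mentioned above can then be run on the resulting score tensor to impose structural constraints on the complete output graph.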
Having an architecture which can both condition on graphs and predict graphs gives us the powerful ability to do iterative refinement of arbitrary graphs.Even when graph prediction is non-autoregressive, conditioning on the previously predicted graph allows the model to capture between-edge correlations like an autoregressive model.As illustrated in Figure <ref>, we propose Recursive Non-autoregressive G2GT (RNGT), which predicts all edges of the graph in parallel, and is therefore non-autoregressive, butcan still condition every edge prediction on all other edge predictions by conditioning on the previous version of the graph (using Equations <ref> and <ref>).The input to the model is the input graph W (e.g. a sequence of tokens), and the output is the final graph G^T over the same set of nodes. First, we compute an initial graph G^0 over the nodes of W, which can be done with any model.Then each recursive iteration encodes the previous graph G^t-1 and predicts a new graph G^t. It can be formalised in terms of an encoder E^RNG and a decoder D^RNG: Z^t =E^RNG(W,G^t-1) G^t =D^RNG(Z^t) t = 1,…,T where Z represents the set of vectors output by the model, and T indicates the number of refinement iterations.Note that in each step of this iterative refinement process, the G2G Transformer first computes a set of vectors which embeds the predicted graph (i.e. E^RNG(W,G^t-1)), before extracting the edges of the predicted graph from this set-of-vectors embedding (i.e. D^RNG(Z^t)). § G2GT MODELS AND RESULTSThis section provides a more comprehensive explanation of each alternative G2GT model we have explored, along with an outline of how we've applied these models to address various graph modelling problems. The empirical success of these models demonstrate the computational adequacy of Transformers for extracting and modelling graph structures which are central to the nature of language.The large further improvements gained by initialising with pretrained models demonstrates that Transformer pretraining encodes information about linguistic structures in its attention mechanisms. §.§ Syntactic Parsing Syntactic parsing is the process of analysing the grammatical structure of a sentence, including identifying the subject, verb, and object. Syntactic dependency parsing is a critical component in a variety of natural language understanding tasks, such as semantic role labelling <cit.>, machine translation <cit.>, relation extraction <cit.>, and natural language inference <cit.>. It is also a benchmark structured prediction task, because architectures which are not powerful enough to learn syntactic parsing cannot be computationally adequate for language understanding. Syntactic structure is generally specified in one of two popular grammar styles, constituency parsing (i.e. phrase-structure parsing) <cit.> and dependency parsing <cit.>. There are two main approaches to compute the dependency tree: transition-based and graph-based parsers. Transition-based parsers predict the dependency graph one edge at a time through a sequence of parsing actions <cit.>, and graph-based parsers compute scores for every possible dependency edge and then apply a decoding algorithm to find the highest scoring total tree <cit.>. In the following, we outline our proposals for using G2GT for syntactic parsing tasks. §.§.§ Transition-based Dependency Parsing In <cit.>, we integrate the G2GT model with two baselines, named StateTransformer (StateTr) and SentenceTransformer (SentTr). 
In the former model, we directly input the parser state into the G2GT model, while the latter takes the initial sentence as the input. For better efficiency of our transition-based model, we used an alternative version of G2GT, introduced in Section <ref>, where the interaction of graph relations with key matrices in Equation <ref> is removed.Each parser decision is conditioned on the history of previous decisions by inputting an unlabelled partially constructed dependency graph to the G2GT model.<cit.> evaluate the integrated models on the English Penn Treebank <cit.>, and 13 languages of Universal Dependencies Treebanks <cit.>.Results of our models on the Penn Treebank are shown in Table <ref> (see <cit.> for further results on UD Treebanks). Integrating the G2GT model with the StateTr baseline achieves 9.97% LAS Relative Error Reduction (RER) improvement, which confirms the effectiveness of modelling the graph information in the attention mechanism. Furthermore, initialising our model weights with the BERT model <cit.>, provides significant improvement (27.65% LAS RER), which shows the compatibility of our modified attention mechanism with the latent representations learned by BERT pretraining. Integrating the G2GT model with the SentTr baseline results in a similar significant improvement (4.62% LAS RER). §.§.§ Graph-based Dependency ParsingThe StateTr and SentTr models generate the dependency graph in an autoregressive manner, predicting each parser action conditioned on the history of parser actions.Many previous models have achieved better results with graph-based parsing methods, which use non-autoregressive computation of scores for all individual candidate dependency relations and then use a decoding method to reach the maximum scoring structure <cit.>. However, these models usually ignore correlations between edges while predicting the complete graph. In <cit.>, we propose the Recursive Non-autoregressive Graph-to-Graph Transformer (RNGT) architecture, as discussed in Section <ref>. The RNGT architecture can be applied to any task with a sequence or graph as input and a graph over the same set of nodes as output. Here, we apply it for the syntactic dependency parsing task, and preliminary experiments showed that removing the interaction of graph relations with key vectors, in Equation <ref>, results in better performance and a more efficient attention mechanism.<cit.> evaluate this RNGT model on Universal Dependency (UD) Treebanks <cit.>, Penn Treebanks <cit.>, and the German CoNLL 2009 Treebank <cit.> for the syntactic dependency parsing task.Table <ref>shows the results on 13 languages of UD Treebanks. First, we use UDify <cit.>, the previous state-of-the-art multilingual dependency parser,as the initial parser for the RNGT model. The integrated model achieves significantly better LAS performance than the UDify model in all languages, which demonstrates the effectiveness of the RNGT model at refining a dependency graph. Then, we combine RNGT with Syntactic Transformer (SynTr), a stronger monolingual dependency parser, which has the same architecture as the RNGT model except without the graph input mechanism. The SynTr+RNGT model reaches further improvement over the strong SynTr baseline (four languages are significant), which is stronger evidence for the effectiveness of the graph refinement method. 
Interestingly, there is little difference between the performance with different initial parsers, implying that the RNGT model is effective enough to refine any initial graphs.In fact, even when we initialise with an empty parse, the Empty+RNGT model achieves competitive results with the other RNGT models, again confirming our powerful method of graph refinement.§.§.§ Penn Treebank and German corpus Results UAS and LAS results for the Penn Treebanks and German CoNLL 2009 Treebank are reported in Table <ref>. We compare to the results of previous state-of-the-art models and SynTr, and we use the RNGT model to refine both the Biaffine parser <cit.> and SynTr, on all Treebanks.[Results are calculated with the official evaluation script: (<https://depparse.uvt.nl/>). For German, we use <https://ufal.mff.cuni.cz/conll2009-st/eval-data.html>.]Again, the SynTr model significantly outperforms previous state-of-the-art models, with a 5.78%, 9.15%, and 23.7% LAS relative error reduction in English, Chinese, and German, respectively.Despite this level of accuracy, adding RNGT refinement improves accuracy further under both UAS and LAS.For the Chinese Treebank, this improvement is significant, with a 5.46% LAS relative error reduction. When RNGT refinement is applied to the output of the Biaffine parser <cit.>, it achieves a LAS relative error reduction of 10.64% for the English Treebank, 16.05% for the Chinese Treebank, and 27.72% for the German Treebank. These improvements, even over such strong initial parsers, again demonstrate the effectiveness of the RNGT architecture for graph refinement.§.§ Semantic Role Labelling The semantic role labelling (SRL) task provides a shallow semantic representation of a sentence and builds event properties and relations among relevant words, and is defined in both dependency-based <cit.> and span-based <cit.> styles. Previous work <cit.> showed that the syntactic graph helps SRL models to predict better output graphs, but finding the most effective way to incorporate the auxiliary syntactic information into SRL models was still an open question. In <cit.>, we introduce the Syntax-aware Graph-to-Graph Transformer (SynG2G-Tr) architecture. The model conditions on the sentence’s dependency structure and jointly predicts both span-based <cit.> and dependency-based <cit.> SRL structures. Regarding the self-attention mechanism, we remove the interaction of graph embeddings with value vectors in Equation <ref>, as it reaches better performance in this particular task <cit.>.Results for span-based SRL are shown in Table <ref>.Without initialising the models with BERT <cit.>, the SynG2G-Tr model outperforms a previous comparable state-of-the-art model <cit.> in both end-to-end and given-predicate scenarios. The improvement indicates the benefit of encoding the graph information in the self-attention mechanism of Transformer with a soft bias, instead of hard-coding the graph structure into deep learning models <cit.>, as the model can still learn other attention patterns in combination with this graph knowledge.BERT <cit.> initialisation results in further significant improvement in both settings, which again shows the compatibility of the G2GT modified self-attention mechanism with the latent structures learned by BERT pretraining. CoNLL 2009 Results.[Scores are calculated with CoNLL 2009 shared task script (<https://ufal.mff.cuni.cz/conll2009-st/>).]Table <ref> illustrates the results of dependency-based SRL on the test set of CoNLL 2009 dataset. 
Without BERT initialisation, SynG2G-Tr significantly outperforms previous work in in-domain and out-of-domain settings. With BERT initialisation, our model significantly outperforms previous work in end-to-end setting with 3.2%/10.4% F1 RER in both in-domain and out-of-domain evaluation sets, while having competitive performance in given-predicate setting. For a better comparison with Fei_Li_Li_Ji_2021 (last setting of Table <ref>), we also employ the gold dependency tree for training and use the predicted dependency graph at inference time. Our model significantly outperforms Fei_Li_Li_Ji_2021, especially on the out-of-domain dataset. This shows the benefit of encoding the dependency graph by modifying the self-attention mechanism of Transformer <cit.> compared to using graph convolutional network, as in Fei_Li_Li_Ji_2021. §.§ Coreference Resolution Coreference resolution (CR) is an important and complex task which is necessary for higher-level semantic representations.We show that it benefits from a graph-based global optimisation of all the coreference chains in a document.§.§.§ CR Task Definition and Background Coreference resolution is the task of linking all linguistic expressions in a text that refer to the same entity.Solutions for this task involve three parts: mention-detection <cit.>, classification or ranking of mentions, and finally reconciling the decisions to create entity chains.These approaches fall within three principal categories: mention-pair models which perform binary decisions <cit.>, entity-based models which focus on maintaining single underlying entity representation, contrasting the independent pair-wise decisions of mention-pair approaches <cit.>, and ranking models which aim at ranking the possible antecedents of each mention instead of making binary decisions <cit.>. A limitation of these methods lies in their bottom-up construction, resulting in an underutilisation of comprehensive global information regarding coreference links among all mentions in individual decisions. Furthermore, these methods tend to exhibit significant complexity. Modelling of coreference resolution as a graph-based approach offer an alternative to deal with these limitations.§.§.§ Iterative Graph-based CR <cit.> proposed a novel approach to modelling coreference resolution, treating it as a graph problem. In this framework, the tokens within the text serve as nodes, and the connections between them signify coreference links (see Figure <ref>). Given a document D=[x_1,...,x_N] with length N, the coreference graph is formally defined as the matrix G ⊂ℕ^N × N, which represents the relationships between tokens. Specifically, the relationship type between any two tokens, x_i and x_j, is labelled as g_i,j∈{0,1,2} for the three distinct relation types: (0) no link, (1) mention link, and (2) coreference link.The primary objective of this approach is to learn the conditional probability distribution p(G|D). To achieve this, an iterative refinement strategy is employed, which captures interdependencies among relations. The model iterates over the same document D for a total of T iterations. In each iteration t, the predicted coreference graph G_t is conditioned on the previous prediction, denoted as G_t-1. Thus, the conditional probability distribution of the model is defined as follows:p(G^t|D, G^t-1) = ∏_i=1^N ∏_j=1^i p(g_i,j|D, G^t-1) The proposed model operates on two levels of representation. In each iteration, it predicts the entire graph. 
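To illustrate this factorisation, the refinement loop can be sketched as follows, assuming a graph-conditioned encoder and a pairwise label scorer of the kind described above. The names encode and score_pairs are placeholders rather than the interfaces of the actual system, which additionally handles document windowing and span representations.

import numpy as np

NO_LINK, MENTION_LINK, COREF_LINK = 0, 1, 2

def refine_coreference_graph(tokens, encode, score_pairs, T=3):
    """Iteratively refine a token-level coreference graph (sketch).

    tokens      : the document D of length N
    encode      : (tokens, G) -> (N, d) graph-conditioned Transformer encoder
    score_pairs : (N, d) vectors -> (N, N, 3) label scores over {0, 1, 2}
    """
    N = len(tokens)
    G = np.zeros((N, N), dtype=int)          # start from the empty graph
    for t in range(T):
        Z = encode(tokens, G)                # embed document and previous graph
        scores = score_pairs(Z)              # independent per-pair predictions
        G_new = scores.argmax(axis=-1)       # p(g_ij | D, G^{t-1}) factorises
        if t == 0:
            # first pass: keep only mention-span edges, drop coreference links
            G_new = np.where(G_new == COREF_LINK, NO_LINK, G_new)
        if np.array_equal(G_new, G):         # stop when the graph is stable
            break
        G = G_new
    return G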
However, during the first iteration, the model focuses on predicting edges that pinpoint mention spans, given that coreferent links only have relevance when mentions are detected. From the second iteration, both mention links, and coreference links are refined. This iterative strategy permits the model to enhance mention-related decisions based on coreference resolutions, and vice versa. This framework utilises iterative graph refinement as a substitute for conventional pipeline architectures in multi-level deep learning models. The iterative process concludes either when the graph no longer undergoes changes or when a predetermined maximum iteration count is attained (see Figure <ref>).Ideally, encoding the entirety of the document in a single pass would be optimal. However, in practical scenarios, a constraint on maximum length arises due to limitations in hardware memory capacity. To address this challenge, <cit.> introduce two strategies: overlapping windows and reduced document approach. In the latter strategy, mentions are identified during an initial iteration with a focus on optimising recall, as previously suggested in <cit.>. Only the representations of these identified spans are subsequently used as inputs for the following iterations. <cit.> conducted experiments on the CoNLL 2012 corpus <cit.> and showed improvements over relevant baselines and previous state-of-the-art methods, summarised in Table <ref>.We compare our model with three baselines: <cit.> proposed the first end-to-end model for coreference resolution; <cit.> extended the previous model by introducing higher order inference; and <cit.> used the span based pre-trained model SpanBERT <cit.>.The `Baseline' of <cit.> uses ELMo <cit.> to obtain token representations, so versions of this Baseline which use `BERT-large' <cit.> and `SpanBERT-large' <cit.> as their pretrained models, are directly comparable to our `G2GT BERT-large' and `G2GT SpanBERT-large' models, respectively. These results show that coreference resolution benefits from making global coreference decisions using document-level information, as supported by the G2GT architecture. Our model achieves its optimal solution within a maximum of three iterations. Notably, due to the model's ability to predict the entire graph in a single iteration, its computational complexity is lower compared to that of the baseline approaches. § DISCUSSIONThe empirical success of Graph-to-Graph Transformers on modelling these various graph structures helps us understand how Transformers model language.This success demonstrates that Transformers are computationally adequate for modelling linguistic structures, which are central to the nature of language.The reliance of these G2GT models on using self-attention mechanisms to extract and encode these graph relations shows that self-attention is crucial to how Transformers can do this modelling. 
The large improvements gained by initialising with pretrained models indicates that pretrained Transformers are in fact using the same mechanisms to learn about this linguistic structure, but in an unsupervised fashion.These insights into pretrained Transformers give us a better understanding of the current generation of Large Language Models (LLMs).It is not that these models do not need linguistic structure (since their attention mechanisms do learn it); it is that these models do not need supervised learning of linguistic structure.But perhaps in a low-resource scenario LLMs would benefit from the inductive bias provided by supervised learning of linguistic structures, such as for many of the world's languages other than English.And these insights are potentially relevant to the issues of interpretability and controllability of LLMs. These insights are also relevant for any applications which could benefit from integrating text with structured representations.Our current work investigates jointly embedding text and parts of a knowledge base in a single G2GT model, providing a way to integrate interpretable structured knowledge with knowledge in text.Such representations would be useful for information extraction, question answering and information retrieval, amongst many other applications.Other graphs we might want to model with a Transformer and integrate with text include hyperlink graphs, citation graphs, and social networks.An important open problem with such models is the scale of the resulting Transformer embedding.§ CONCLUSION AND FUTURE WORKThe Graph-to-Graph Transformer architecture makes explicit the implicit graph processing abilities of Transformers, but further research is needed to fully leverage the potential of G2GT. §.§ Conclusions The success of the above models of a variety linguistic structures shows that Transformers are underlyingly graph-to-graph models, not limited to sequence-to-sequence tasks.The G2GT architecture with its RNGT method provides an effective way to exploit this underlying ability when modelling explicit graphs, effectively integrating them with the implicit graphs learned by pre-trained Transformers. Inputting graph relations as features to the self-attention mechanism enables the information input to the model to be steered by domain-specific knowledge or desired outcomes but still learned by the Transformer, opening up the possibility for a more tailored and customised encoding process. Predicting graph relations with attention-like functions and then re-inputting them for iterative refinement, encodes the input, predicted and latent graphs in a single joint Transformer embedding which is effective for making global decisions about structure in a text. §.§ Future Work One topic of research where explicit graphs are indispensable is knowledge graphs. Knowledge needs to be interpretable, so that it can be audited, edited, and learned by people. 
And it needs to be integrated with existing knowledge graphs.Our current work uses G2GT to integrate knowledge graphs with knowledge conveyed by text.One of the limitations of the models discussed in this paper is that the set of nodes in the output graph needs to be (a subset of) the nodes in the input graph.General purpose graph-to-graph mappings would require also predicting a set of new nodes in the output graph.One natural solution would be autoregressive prediction of one node at a time, as is done for text generation, but an exciting alternative would be to use methods from non-autoregressive text generation in combination with our iterative refinement method RNGT.The excellent performance of the models presented in this paper suggest that many more problems can be successfully formulated as graph-to-graph problems and modelled with G2GT, in NLP and beyond. The code for G2GT and RNGT is open-source and publicly available at <https://github.com/idiap/g2g-transformer>. § ACKNOWLEDGEMENT We would like to especially thank the Swiss National Science Foundation for funding this work, under grants 200021E_189458, CRSII5_180320, and 200021_178862.We would also like to thank other members of the the Natural Language Understanding group at Idiap Research Institute for useful discussion and feedback, including Florian Mai, Rabeeh Karimi, Andreas Marfurt, Melika Behjati, and Fabio Fehr. acl_natbib
http://arxiv.org/abs/2310.17936v1
{ "authors": [ "James Henderson", "Alireza Mohammadshahi", "Andrei C. Coman", "Lesly Miculicich" ], "categories": [ "cs.CL", "cs.AI", "cs.LG" ], "primary_category": "cs.CL", "published": "20231027072137", "title": "Transformers as Graph-to-Graph Models" }
Influence of EOM sideband modulation noise on space-borne gravitational wave detection

Mingyang Xu^1, Yujie Tan^1 ([email protected]), Hanzhong Wu^1,2, Panpan Liu^1 ([email protected]), Hao Yan^1, Yurong Liang^1, Chenggang Shao^1 ([email protected])

[1] MOE Key Laboratory of Fundamental Physical Quantities Measurements, Hubei Key Laboratory of Gravitation and Quantum Physics, PGMF and School of Physics, Huazhong University of Science and Technology, Wuhan 430074, China
[2] State Key Laboratory of Applied Optics, Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China

Clock noise is one of the dominant noises in space-borne gravitational wave (GW) detection. To suppress this noise, the clock noise-calibrated time-delay interferometry (TDI) technique has been proposed. In this technique, an inter-spacecraft clock tone transfer chain is necessary to obtain the comparison information of the clock noises in two spacecraft, for which an electro-optic modulator (EOM) is critical, being used to modulate the clock noise onto the laser phase. Since the EOM sideband modulation process introduces modulation noise, it is important to put forward the corresponding requirements and to assess whether a commercial EOM meets them. In this work, based on the typical Michelson TDI algorithm and the fundamental noise requirement of GW detectors, the analytic expression of the modulation noise requirement is strictly derived, which relaxes the component requirement compared to the existing commonly used rough assessments. Furthermore, a commercial EOM (iXblue NIR-10 GHz) is tested, and the experimental results show that it can meet the requirement of the typical GW detection mission LISA in the whole scientific bandwidth by taking the optimal combination of the data streams. Even when the displacement measurement accuracy of LISA is improved to 1 pm/ Hz^1/2 in the future, it still meets the demand.

§ INTRODUCTION

In 1915, Einstein proposed the general theory of relativity, giving the most elegant and precise theory of gravity to date. Its predicted GWs were directly detected by the ground-based GW detector LIGO in 2015 <cit.>. Thus, a new era of GW astronomy was opened, and GW detection became a popular research direction for large-scale cosmic observation and tests of fundamental theory. Different GW detection methods have different sensitive frequency bands, which correspond to different wave sources and address different scientific problems. All kinds of detection methods together help to obtain more comprehensive information about the universe. Ground-based GW detection is mainly sensitive to GWs in the frequency band of 10-10^4 Hz, while below 10 Hz it is limited by ground vibration noise and gravity gradient noise <cit.>. However, there are more abundant GW sources below 1 Hz. Space-borne GW detection can avoid the influence of ground vibration, and its longer arm lengths make it sensitive to lower-frequency GW signals. For this reason, several space-borne GW detection programs have been proposed internationally, including LISA <cit.>, DECIGO <cit.>, Tianqin <cit.>, Taiji <cit.>, etc.

Space-borne GW detection uses laser interferometry to accurately measure the phase change of the laser travelling back and forth between the test masses placed in separated spacecraft, and then extracts the GW information from the laser interference scientific data streams.
The sensitivity limit of the detector is determined by the instrumental noise floor, constituted by the test-mass acceleration noise and the laser shot noise. To detect a GW signal, the various other noises should be controlled below this floor. For a typical space-borne GW detector, laser frequency noise is the dominant noise source. In space, due to orbital motion, the arm lengths cannot be kept equal in real time, so the laser frequency noises cannot cancel each other out in the interference data stream, which greatly affects the GW detection. To solve this problem, the TDI technique has been developed to construct a virtual equal-arm interferometer by time-delaying and combining the data streams, eliminating the laser frequency noise in common mode <cit.>. During the digital sampling process of the heterodyne interference signal, clock noise is also introduced. In recent years, several efforts have been made to explore clock synchronization <cit.> and clock-noise suppression. In this work, we focus on the latter. The characteristic Allan standard deviation of a state-of-the-art USO reads σ_A≈10^-13@1 s, and this clock jitter noise is about 2∼3 orders of magnitude higher than the noise floor, not meeting the requirement of GW detection. As developing a more stable clock is difficult, the TDI technique has been extended to suppress clock noise. In general, there are two strategies. The first is ultrastable oscillator (USO) noise calibration <cit.>, in which the laser beams are sideband modulated to construct an inter-spacecraft clock tone transfer chain, generating additional inter-spacecraft measurements of the clock noise comparison. These additional data streams can help to remove the clock noises from the scientific data streams. The second is to connect an optical frequency comb system <cit.>, in which the laser frequency noise is coherently linked to the clock noise, and one can modify the TDI combination to simultaneously suppress the laser and clock noises <cit.>. As the onboard optical comb technology is not yet mature, this paper focuses on the former strategy.

In the USO noise-calibrated TDI technique, an EOM is a crucial experimental component, which is used to modulate the frequency-multiplied clock signal onto the laser phase, forming the sideband data stream, as shown in Fig. <ref>. The sideband data stream and the carrier data stream are transmitted with the laser between the spacecraft. The combination of the carrier and sideband data streams is mainly dominated by clock noise, and this additional data stream can be used to eliminate the clock noise in the TDI combination. Residual clock-noise suppression algorithms in different TDI combinations have been reported <cit.>. Theoretical studies show that the clock sideband TDI algorithm can suppress clock noise well below the detector noise floor <cit.>. However, in practice, the sideband modulation used to eliminate clock noise is not ideal. The imperfect homology between the frequency multiplier signal injected into the EOM and the USO <cit.>, the phase fidelity of the sideband modulation of the EOM <cit.>, and the phase fidelity before and after passing through the laser amplifier will introduce additional modulation noises <cit.>. Therefore, it is important to analyze the requirement that modulation noise places on the detection of GWs.
Currently, the modulation noise requirement has been roughly proposed without using TDI algorithm; a typical commercial EOM has been tested, and the result showed the modulation noise in the additional data stream is large; further combined with the laser amplitude stabilization technology, it can meet the demand of GW detection <cit.>. In this work, based on the principle of GW detection, starting from the laser interferencing data streams and TDI algorithms, we will give a more stringent and analytic expression of modulation noise requirement, and also make a related experimental test on the typical commercial EOM to see if it meets the requirements. The paper is laid out as follows: Sec. II provides the measurement principle of typical space-borne GW detection, and the residual modulation noise after eliminating laser frequency noise and clock noise by TDI combination is derived; Sec. III shows the experiment setup for testing a commercial EOM modulation noise; Sec. IV demonstrates the experimental results. Finally, Section V is conclusion.§ REQUIREMENT OF THE SIDEBAND MODULATION NOISE In this section, we derive the requirements for sideband modulation noise, both notations and conventions following those defined for LISA array <cit.>. The structure of LISA is shown in the Fig. <ref>. Each satellite contains two identical optical benches. It is assumed that platforms on one side are denoted as 1, 2, 3, and platforms on the other side are denoted as 1', 2', 3'. The distance between the two satellites is denoted L_i and L_i', where i=1, 2, 3, denotes counterclockwise direction and i'=1', 2', 3' denotes clockwise direction. Each optical platform contains three-stage measurement and three data streams, which are carrier-to-carrier data stream s_i^c, interfered by lasers from the remote optical bench and local optical bench, which carry information about GW signals; the test mass data stream ε_i , from adjacent bench to local bench, which contains information about spacecraft motion noise and test mass acceleration noise; reference data stream τ_i, from adjacent optical bench to local optical bench, which only contains information about laser frequency noise, fiber noise and clock noise. Taking optical bench 1 and 1’ as an example, these data streams can be written as <cit.>:s_1^c =h_1+D_3p_2'-p_1+( n⃗_3· D_3Δ⃗_2'+n⃗_3'·Δ⃗_1 ) -a_1q_1+N_1, ε _1 =p_1'-p_1-2n⃗_3'·( δ⃗_1-Δ⃗_1 ) +μ _1-b_1q_1, τ _1 =p_1'-p_1+μ _1-b_1q_1,ands_1'^c =h_1'+D_2'p_3-p_1'+( n⃗_2'· D_2'Δ⃗_3+n⃗_2·Δ⃗_1') -a_1'q_1+N_1', ε _1' =p_1-p_1'-2n⃗_2·( δ⃗_1'-Δ⃗_1') +μ _1-b_1'q_1, τ _1' =p_1-p_1'+μ _1-b_1'q_1.Similarly, the data streams on other optical benches can be obtained by cyclic permutation of the indices:1→2→3→1. Here h_i, p_i, q_i, n⃗_i, Δ⃗_i, δ⃗_i, N_i, and μ _i are the GW signal, laser frequency noise, clock noise, unit vectors between spacecraft, spacecraft motion noise, test mass acceleration noise, shot noise, and the fiber noise, respectively. D_i and D_i' are the time-delay operators, and for any function f(t), this operator satisfies the following convention:D_i' D_i f(t)≡D_i'i f(t)≡f[t-L_i'(t)/c-L_i(t-L_i'(t))/c]≈f[t-L_i'(t)/c-L_i(t)/c]with c being the speed of light. a_i and b_i are the coefficients corresponding to the heterodyne frequency which can be expressed as:a_i =ν _( i+1 ) '-ν _i/f_i,a_i' =ν _i-1-ν _i'/f_i,b_i =v_i'-ν _i/f_i=-b_i',wherev_i represents laser center frequency and f_i is the USO’s center frequency. 
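As an aside on notation, the delay-operator convention above can be illustrated numerically. The following sketch uses simple linear interpolation of a sampled data stream, whereas realistic TDI processing employs high-order (Lagrange) interpolation and time-dependent arm lengths; the function names are ours.

import numpy as np

C = 299792458.0                      # speed of light in m/s

def delay(f, t, L):
    """Apply a delay operator D to a sampled data stream (sketch).

    f : samples of f(t) on the time grid t (seconds)
    L : light-travel distance in metres, so (D f)(t) = f(t - L/c)
    """
    return np.interp(t - L / C, t, f)

def delay_chain(f, t, arms):
    # applying the arms in order [L_i, L_i'] approximates
    # D_i' D_i f(t) = f(t - L_i'/c - L_i/c) for slowly varying arm lengths
    for L in arms:
        f = delay(f, t, L)
    return f

For example, the single-link term D_3 p_2' appearing in s_1^c corresponds to delay(p2_prime, t, L3) in this sketch.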
By combining the carrier-to-carrier data stream, the reference data stream, and the test mass data stream, we can eliminate the spacecraft motion noise and the laser frequency noise with i’, and obtain: η _i ≡ s_i^c-ε _i-τ _i/2-D _i-1ε _(i+1)'-τ _(i+1)'/2-D _i-1τ _i+1-τ _(i+1)'/2=h_i+D_i-1p_i+1-p_i+n⃗_i-1·[ D _i-1δ⃗_(i+1)'-δ⃗_i ]+N_i+b_i+1D_i-1q_i+1-a_iq_i, η _i^' ≡ s_i^'^c-ε _i'-τ _i'/2-D _(i+1)'ε _i-1-τ _i-1/2+τ _i-τ _i'/2=h_i'+D _(i+1)'p_i-1-p_i+n⃗_i+1·[ δ⃗_i'-D _(i+1)'δ⃗_i-1]+N_i'+( b_i'-a_i') q_i. Eqs. (<ref>) and (<ref>) contain three laser frequency noises p_i, three clock noises q_i, and the noise floor determined by test mass acceleration noise and shot noise. In order to eliminate the remaining laser frequency noise, TDI is used, which relies on properly time-shifting and linearly combining data streams in Eqs. (<ref>) and (<ref>) to construct a virtual equal arm interferometry: TDI=∑_i=1^3( P_iη _i+P_i'η _i'),where P_i is the delay operator polynomial of different TDI combinations. By combining Eqs. (<ref>), (<ref>) and (<ref>), we can get the residual test mass acceleration noise and shot noise as:TDI^δ =∑_i=1^3{-[P_i+P_(i+1)'D_(i-1)'] n⃗_i-1·δ⃗_i+[P_i-1D_i+1+P_i'] n⃗_i+1·δ⃗_i'}, TDI^shot =∑_i=1^3P_iN_i+P_i'N_i'.The residual clock noise can be also obtained as:TDI^q=-∑_i=1^3[ a_iP_i+a_i'P_i'-b_i'( P_i'-P_i-1D _i+1) ] q_i.Based on the assumption that the different test mass acceleration noises are independent and at the same order of magnitude, and the same assumption is made for the shot noise and clock noise, one can get the power spectral density (PSD) of test mass acceleration noise, shot noise and clock noise as:S_TDI^δ( ω)=S_pf( ω) ∑_i=1^3 | P̃_i( ω)+P̃_(i+1)'( ω) D̃_(i-1)'( ω) |^2+S_pf( ω) ∑_i=1^3| P̃_i( ω) D̃_i-1( ω)+P̃_(i+1)'( ω) |^2 ,S_TDI^shot( ω)=S_opt( ω) ∑_i=1^3[ | P̃_i( ω) |^2+| P̃_i'( ω) |^2 ], S_TDl^q( ω)=S_Q( ω) ∑_i=1^3| a_iP̃_i(ω)+a_i'P̃_i'( ω)-b_i'[ P̃_i'(ω) -P̃_i-1( ω) D̃_i+1( ω) ] |^2,where S_pf=s_a^2/( 2π fc ) ^2 , s_a is the amplitude spectral density (ASD) of the test mass acceleration noise, f is the Fourier frequency; S_opt=( 2π f ) ^2s_x^2/c^2, s_x is the ASD of the shot noise; s_Q=s_q^2/v_0^2, s_q is the ASD of the clock noise.P̃_i represents the polynomial of the Fourier transform of the delay operators.With the typical TDI combinations, the residual clock noise is still about 3 orders of magnitude higher than the detector noise floor, limiting the detection of gravitational waves. To eliminate the clock noise, an inter-spacecraft clock tone modulated by an electro-optical modulator (EOM) will be used to get the information of the clock noise, which can be written as sideband-to-sideband data streams s_i^sb :s_i^sb= h_i+D_i-1p_(i+1)'-p_i+m_(i+1)'D_i-1q_i+1-m_iq_i-c_iq_i+N_i^sb+( n⃗_i-1· D_i-1Δ⃗_(i+1)'+n⃗_(i-1)'·Δ⃗_i )+m_(i+1)'D_i-1q_i+1^mod-m_iq_i^mod,where the coefficients of m_i≈GHz/MHz are determined by the driving frequency of the EOM, N_i^sb is the shot noise in sideband data streams andq_i^mod is the modulation noise. Modulation noise will enter the sideband data stream with the EOM, so the modulation noise will be introduced in the final combinations when the sideband data stream is used to eliminate clock noise. To this end, we use sideband data stream and carrier data stream to construct expressions mainly containing clock noise and modulation noise:r_i ≡s_i^c-s_i^sb/m_(i+1)'≈ q_i-D _i-1q_i+1+q_i^mod-D _i-1q_i+1^mod, r_i' ≡s_i'^c-s_i'^sb/m_i-1≈ q_i-D _(i+1)'q_i-1+q_i^mod-D _(i+1)'q_i-1^mod. 
Then, we take the Michelson combination as an example to derive the residual modulation noise after eliminating the laser frequency noise and clock noise. The delay operator of the first generation of the Michelson combination is: P_1=( D _2'2-1 ), P_2=0, P_3=( D _2'-D _33'2'),P_1'=( 1-D _33'), P_2'=( D _2'23-D _3 ), P_3'=0.Substitute Eq. (<ref>) into Eq. (<ref>), the residual clock noise is: X_1^q= [ b_1'( 1-D_33') ( 1-D_22') +a_1( 1-D_2'2) +a_1'( D_33'-1 ) ] q_1+[ a_2'( 1-D _2'2) D _3 ] q_2-[ a_3( 1-D _33') D _2'] q_3', Using Eq. (<ref>), we can build auxiliary clock noise measurements: K_X_1= b_1'( 1-D _33') ( r_1'+D _2'r_3 ) +a_1( r_1'+D_2'r_3 ) -a_1'( r_1+D_3r_2') +a_2'[ r_1'-( 1-D_2'2) r_1+D_2'r_3 ] -a_3[ r_1-( 1-D_33') r_1'+D_3r_2'] . Making a combination of Eqs. (<ref>) and (<ref>), one can obtain: X_1^q-K_X_1≈ -[ b_1'( 1-D_33') ( 1-D_22') +a_1( 1-D_2'2) +a_1'( D_33'-1 ) ] q_1^mod-[ a_2'( 1-D _2'2) D _3 ] q_2^mod+[ a_3( 1-D _33') D _2'] q_3'^mod. Thus, the clock noise is eliminated, while the modulation noise remains. Furthermore, the PSD of the modulation noise is:S_X_1^mod=f_i^2/ν _0^24sin ^2 u( S_q^mod)^2[( a_1-a_1') ^2+a_2'^2+a_3^2+4b_1'( a_1-a_1'+b_1')sin ^2 u],where S_q^mod is the ASD of the dimensionless relative modulation noise. Substitute Eq. (<ref>) into Eqs. (<ref>) and (<ref>), the PSD of residual test mass acceleration noise and shot noise is:S_X_1=s_a^2L^2/u^2c^4( 8sin ^2 2u+32sin ^2 u ) +16u^2s_x^2/L^2sin ^2 u,whereu=2π fL/c is a dimensionless quantity. For the second-generation Michelson combination X_2, similar to the case of the residual clock noise <cit.>, one can find :S_X_2^mod(ω) ≈4sin^22uS_X_1^mod(ω),S_X_2(ω) ≈4sin^22uS_X_1(ω).Making S_X_2^mod( ω)≤S_X_2( ω), we can get the requirement of the sideband modulation noise as:S_q^mod≤√(s_a^2L^2/u^2c^4( 8sin ^2 2u+32sin ^2 u ) +16u^2s_x^2/L^2sin ^2 u/f_i^2/ν _0^24sin ^2 u[ ( a_1-a_1') ^2+a_2'^2+a_3^2+4b_1'( a_1-a_1'+b_1') sin ^2 u ]). In thie paper, we will take the typical parameters of LISA to analyse, the arm-length L=2.5×10^6 km, the ASDS of the test mass acceleration noise and shot noise are 3×10^-15 ms^-2/ Hz^1/2 and 10×10^-12 m/ Hz^1/2. The coefficients a_if_i and b_if_i are between 5 MHz and 20 MHz. Here, we take a_1f_1=-a_1'f_1=a_2'f_2=a_3f_3=b_1'f_1=20 MHz, and this will corresponds to the strictest requirements of modulation noise. Fig. <ref> shows the modulation noise requirements. The dimensionless relative modulation noise requirements are multiplied by a factor of 20MHz/2π f to convent into phase with the unite of cycle. The blue and black lines are respectively the modulation noise requirements according to the derived analytic expression Eq. (<ref>) with LISA parameters of s_x=10 pm/ Hz^1/2 and s_x=1 pm/ Hz^1/2. The red line is the rough assessment of the modulation noise requirement according to Ref <cit.> with a shot noise of 1 pm/ Hz^1/2. It can be clearly seen from the figure that the modulation noise requirements given by the rough evaluation are more stringent at low frequencies, below 10 mHz, than the requirements given by the analytical results. In the rough evaluation method, a shot noise level equivalent to a spacecraft displacement, such as 1 pm/ Hz^1/2, is commonly allocated to the phase measurement system, while our analysis is based on the fundamental noise of gravitational wave detectors and combined with specific TDI combinations to propose indicator requirements for experimental components. 
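The requirement curves discussed here can be reproduced directly from the analytic expression above. The sketch below evaluates it with the LISA parameters just quoted, working with the products a_i f_i and b_1' f_1 (all taken as 20 MHz) and assuming a laser frequency of ν_0 = c/1064 nm, which is not stated at this point but corresponds to the laser used in the experiment; the last line applies the 20 MHz/(2π f) conversion to cycles used in the figure.

import numpy as np

C   = 299792458.0            # speed of light, m/s
L   = 2.5e9                  # LISA arm length, m
S_A = 3e-15                  # test-mass acceleration noise ASD, m s^-2 / Hz^1/2
NU0 = C / 1064e-9            # laser frequency, assuming a 1064 nm laser

# heterodyne-dependent coefficients expressed as products a_i f_i and b_1' f_1;
# worst case: a_1 f_1 = -a_1' f_1 = a_2' f_2 = a_3 f_3 = b_1' f_1 = 20 MHz
AF1, AF1P, AF2P, AF3, BF1P = 20e6, -20e6, 20e6, 20e6, 20e6

def modulation_noise_requirement(f, s_x=10e-12):
    """ASD requirement on the dimensionless relative modulation noise (sketch)."""
    u = 2 * np.pi * f * L / C
    noise_floor = (S_A**2 * L**2 / (u**2 * C**4)
                   * (8 * np.sin(2 * u)**2 + 32 * np.sin(u)**2)
                   + 16 * u**2 * s_x**2 / L**2 * np.sin(u)**2)
    clock_coupling = (4 * np.sin(u)**2 / NU0**2
                      * ((AF1 - AF1P)**2 + AF2P**2 + AF3**2
                         + 4 * BF1P * (AF1 - AF1P + BF1P) * np.sin(u)**2))
    return np.sqrt(noise_floor / clock_coupling)

f = np.logspace(-4, 0, 400)                          # scientific band, Hz
req_cycles = modulation_noise_requirement(f) * 20e6 / (2 * np.pi * f)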
The fundamental noise of gravitational wave detectors is composed of test mass noise and shot noise, and the former noise dominates at low frequencies. However, in the rough evaluation method, the indicator requirement is proposed only based on shot noise, which obviously proposes a more stringent indicator. Therefore, the analytical expression we obtained can provide a theoretical basis for the component selection for space-borne gravitational wave detectors. § EXPERIMENT SETUP OF MEASURING THE MODULATION NOISE OF EOM In our experimental test, the modulation noise of the EOM is mainly measured. Other modulation noises, such as frequency divider, frequency multiplier and fiber amplifiers will be tested in the future. Fig. <ref> shows the experimental setup used to test the EOM modulation noise. Since optical fiber is more likely to introduce ambient vibration noise and temperature fluctuation noise, we choose a spatial acousto-optic modulator (AOM) and build a spatial optical path. The whole interferometer is placed in a thermal insulation cotton box, which is used to reduce air fluctuations and temperature fluctuations. A better option would be placing the entire system in a vacuum container, but this is costly and not conducive for checking for interferometer problems. A 1064 nm laser (NKT Y10) is used as the laser signal source and two AOMs (GTCT-110 MHz) are used to generate the heterodyne interference, while the drive signals of AOM1 is 112.75 MHz and AOM2 is 107.25 MHz. We test a commercial polarization-maintaining fiber EOM (iXblue NIR-10 GHz), which has low plug loss and half wave voltage. We use the RF signal generators (Rigol DSG821) to generate signals of SG1=2.1 GHz and SG2=2.0955 GHz respectively, and through the coupler (Mini-Circuits ZX30-17-5-S+), part of which is injected into the EOM to form the upper and lower sidebands, and the other part through a mixer to get the noise of the SG itself. The two laser interferences with slightly different center frequencies and sideband can finally be received in the photodetector (Menlo FPD510-FS-NIR). We extract the upper band data stream, the carrier data stream and the lower band data stream respectively through the amplifier (Mini-Circuits ZFL-500-BNC+), the power divider (Mini-Circuits ZSC-4-2), the low-pass (DC-1.9 MHz) and band-pass filters (4.5 MHz-6.5 MHz, 9.5 MHz-11.5 MHz), and send them to the homemade FPGA phasemeter <cit.>. In our experiment, the SGs, the AOM drive signals and the phasemeter are referred to the common rubidium atomic clock (Stanford Research Systems FS725). For the experiment of Fig. <ref>, the carrier interference data stream can be written as:S_c=e_1× q_1^AOM-e_2× q_2^AOM+δ _c,where e_i, (i=1,2) is the modulation frequency of the AOM_i, q_i^AOMis the noise with dimensionless relative frequency introduced by the AOM modulation driven by a SG, andδ _c is the extra noise introduced by interferometer noise.The lower sideband data stream can be written as:S_sb^low= e_1× q_1^AOM-M_1× q_1^_SG-M_1× q_1^_EOM-( e_2× q_2^AOM-M_2× q_2^_SG-M_2× q_2^_EOM) +δ _c = ( e_1× q_1^AOM-e_2 × q_2^AOM) -( M_1× q_1^_SG-M_2× q_2^_SG) -( M_1× q_1^_EOM-M_2× q_2^_EOM) +δ _c,where M_i, (i=1,2) is the EOM modulation frequency,q_i^SG is the dimensionless relative frequency noise introduced by the SG, andq_i^_EOM is the dimensionless relative frequency noise introduced by the EOM sideband modulation. Here, we ignore the noise differences between carrier interferometers and sideband interferometers, and denote both of them as δ _c. 
The upper sideband data stream can be written as: S_sb^up= e_1× q_1^AOM+M_1× q_1^SG+M_1× q_1^EOM-( e_2× q_2^AOM+M_2× q_2^SG+M_2× q_2^EOM) +δ_c = ( e_1× q_1^AOM-e_2× q_2^AOM) +( M_1× q_1^SG-M_2× q_2^SG) +( M_1× q_1^EOM-M_2× q_2^EOM) +δ_c. Thus, combining the carrier data stream and the sideband data streams can eliminate the interferometer noise, and we obtain the relations: S_sb^up-S_c =S_c-S_sb^low=1/2(S_sb^up-S_sb^low)=( M_1× q_1^SG-M_2× q_2^SG) +( M_1× q_1^EOM-M_2× q_2^EOM). Since the noise of the SGs affects the measurement, we use the mixed-frequency data between the SGs to subtract the SG noise in the final data processing: S_SG =M_1× q_1^SG-M_2× q_2^SG, γ_1 =S_sb^up-S_c-S_SG=M_1× q_1^EOM-M_2× q_2^EOM, γ_2 =S_c-S_sb^low-S_SG=M_1× q_1^EOM-M_2× q_2^EOM, γ_3 =S_sb^up-S_sb^low-2× S_SG=2×( M_1× q_1^EOM-M_2× q_2^EOM). We assume that the noises of EOM1 and EOM2 are uncorrelated and at the same level, and approximately assume that the driving frequencies of EOM1 (2.1 GHz) and EOM2 (2.0955 GHz) are the same (M=2.1 GHz). Finally, the EOM modulation noise measured in relative frequency jitter is: q_a^EOM=γ_1/(√(2)M), q_b^EOM=γ_2/(√(2)M), q_c^EOM=γ_3/(2√(2)M), where Eq. (<ref>) is the combination of the upper sideband data stream, the carrier data stream and the SG noise; Eq. (<ref>) is the combination of the lower sideband data stream, the carrier data stream and the SG noise; and Eq. (<ref>) is the combination of the upper sideband data stream, the lower sideband data stream and the SG noise.
§ EXPERIMENTAL RESULTS
Fig. <ref> shows the experimental results, in which all the dimensionless relative frequency jitters are multiplied by a factor of 20 MHz/2π f to convert them into phase in units of cycles. The blue and black lines are respectively the modulation noise requirements according to the derived analytic expression Eq. (<ref>) with LISA parameters of s_x=10 pm/Hz^1/2 and s_x=1 pm/Hz^1/2, and the red line is the rough assessment of the modulation noise requirement according to Ref. <cit.> with a shot noise of 1 pm/Hz^1/2. The pink line represents the combination of the upper and lower sideband data, expressed as (S_sb^up-S_sb^low)/M·20 MHz/2π f, which can suppress the interferometer noise. The blue dotted line is related to the noise of the SGs, expressed as 2·S_SG/M·20 MHz/2π f. Although the SGs are also externally referenced to the same rubidium clock, the frequency multiplier inside them cannot faithfully transfer the external clock reference to the GHz output. The pink line and the blue dotted line follow the same trend, indicating that they are affected by the same noise, namely the noise of the SGs, which in the experiment we have eliminated in common mode as shown in Eq. (<ref>). The green line is obtained from Eq. (<ref>), which is the combination of the upper sideband data stream, the carrier data stream and the SG noise; the orange dotted line is obtained from Eq. (<ref>), which is the combination of the lower sideband data stream, the carrier data stream and the SG noise; and the purple line is obtained from Eq. (<ref>), which is the combination of the upper sideband data stream, the lower sideband data stream and the SG noise. In the experiment, the equivalent inter-spacecraft clock tone transfer chain can be formed by combining the upper/lower sideband and carrier data or the upper and lower sideband data. The experimental results show that the SGs multiplying the clock from MHz to GHz cannot meet the requirement of the LISA mission with s_x=10 pm/Hz^1/2 at frequencies between 1 mHz and 0.1 Hz.
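To make explicit how the plotted combinations are formed from the recorded data streams, the following sketch illustrates the post-processing of Eqs. (<ref>)-(<ref>); the function and array names are hypothetical, and the snippet is a simplified illustration of the data combination rather than the actual analysis code.

import numpy as np

def eom_modulation_noise(S_up, S_c, S_low, S_SG, M=2.1e9):
    """Form the three combinations of the text from recorded phasemeter streams.

    S_up, S_c, S_low : upper-sideband, carrier and lower-sideband beat data
    S_SG             : mixed measurement of the signal-generator (SG) noise
    M                : common EOM drive frequency assumed in the text [Hz]
    Returns the three EOM noise estimates in dimensionless relative frequency.
    """
    gamma1 = S_up - S_c - S_SG            # upper sideband + carrier + SG noise
    gamma2 = S_c - S_low - S_SG           # carrier + lower sideband + SG noise
    gamma3 = S_up - S_low - 2.0 * S_SG    # upper + lower sideband + SG noise
    # Assuming the two EOM noises are uncorrelated and of equal level:
    q_a = gamma1 / (np.sqrt(2.0) * M)
    q_b = gamma2 / (np.sqrt(2.0) * M)
    q_c = gamma3 / (2.0 * np.sqrt(2.0) * M)
    return q_a, q_b, q_c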
Fortunately, by mixing data between different SGs, one can measure the contribution of the SG noise to the clock tone transfer chain and subtract it from the chain. Based on this, we find that all the clock tone transfer chains meet the requirement of the LISA mission with s_x=10 pm/Hz^1/2 in the whole scientific bandwidth; in particular, the chain formed by the upper and lower sideband data meets the requirement of the LISA mission with s_x=1 pm/Hz^1/2 in the whole scientific bandwidth. This may be because the power jitter of the laser is coupled to the phase jitter during the EOM modulation <cit.>, and the common-mode combination of the upper and lower sidebands can suppress the power jitter noise, since the upper and lower sidebands may have almost the same optical power. Alternatively, one can introduce active laser power feedback to mitigate the laser power jitter noise. Based on the above discussion and the purple line in Fig. <ref>, the commercial EOM (iXblue NIR-10 GHz) meets the requirement of the LISA mission with s_x=1 pm/Hz^1/2, and the residual noise of the purple line may be dominated by the laser interferometer noise.
§ CONCLUSION
In space-borne GW detection, clock noise is about 2∼3 orders of magnitude higher than the typical GW signal. In order to suppress the clock noise, an inter-spacecraft clock tone modulated by an EOM will be used. Theoretical studies show that the clock sideband TDI algorithm can suppress clock noise well below the detector noise floor. However, in practice, the sideband modulation process is not ideal, which may introduce excessive modulation noise and affect GW detection. In this work, based on the typical Michelson TDI algorithm and the noise floor of GW detectors, the analytic expression of the modulation noise requirement is strictly derived. Compared to the modulation noise requirement from the existing, commonly used rough assessment, the noise requirement from the analytic expression is relaxed at frequencies below 10 mHz. This is because the rough assessment method proposed the noise requirement based only on shot noise, while typical gravitational wave detectors are dominated by test mass noise at low frequencies. Therefore, the rough assessment method proposed a more stringent modulation noise requirement. Overall, the analytical expression we obtained can provide a theoretical basis for component selection for space-borne gravitational wave detectors. To evaluate whether the EOM component meets the requirement, the existing commercial EOM (iXblue NIR-10 GHz) in the laboratory has been tested. In the experiment, two commercial SGs are used to up-convert an external clock reference to a GHz output. The experimental results show that the SGs cannot faithfully follow the external clock reference, and the clock tone transfer chains formed by combining the upper/lower sideband and carrier data or the upper and lower sideband data are limited by this noise. By mixing the signals from the two SGs, we construct additional measurements of the SG noise and subtract this noise from the clock tone transfer chains. Moreover, we find that the differential noise between the upper and lower sideband data is lower than that between the sideband data and the carrier data, which may be because the common-mode combination of the upper and lower sidebands can suppress the laser power jitter noise. Finally, we find that the commercial EOM can meet the requirement of the typical GW detection mission LISA by taking the optimal combination of the data streams.
Even when the displacement measurement accuracy of LISA is improved from 10 pm/Hz^1/2 to 1 pm/Hz^1/2 in the future, the EOM still meets the requirement. This work mainly focuses on the modulation noise introduced by the EOM, while the noise introduced by frequency dividers, frequency multipliers and laser amplifiers should also be analyzed to check whether the current commercial components satisfy the requirements, which will be our next research work. Acknowledgments This work is supported by the National Key Research and Development Program of China (2022YFC2204601); the National Natural Science Foundation of China (11925503, 12275093 and 12175076); the Natural Science Foundation of Hubei Province (2021CFB019), and the State Key Laboratory of Applied Optics (SKLAO2022001A10). Authors' contributions Conceptualization, Mingyang Xu and Yujie Tan; methodology, Mingyang Xu and Yurong Liang; validation, Mingyang Xu, Hanzhong Wu and Hao Yan; writing—original draft preparation, Mingyang Xu; writing—review and editing, Mingyang Xu and Panpan Wang; supervision, Yujie Tan and Chenggang Shao. All authors reviewed the manuscript. Availability of data and materials Data underlying the results presented in this paper are not publicly available at this time but may be obtained from the authors upon reasonable request.
§ DECLARATIONS
Conflict of interest The authors declare no conflicts of interest.
http://arxiv.org/abs/2310.18379v1
{ "authors": [ "Mingyang Xu", "Yujie Tan", "Hanzhong Wu", "Panpan Wang", "Hao Yan", "Yurong Liang", "Chenggang Shao" ], "categories": [ "astro-ph.IM", "gr-qc", "physics.ins-det" ], "primary_category": "astro-ph.IM", "published": "20231027020811", "title": "Influence of EOM sideband modulation noise on space-borne gravitational wave detection" }
JILA, NIST, and Department of Physics, University of Colorado, Boulder, Colorado 80309, USA We study anisotropic thermalization in dilute gases of microwave shielded polar molecular fermions. For collision energies above the threshold regime, we find that thermalization is suppressed due to a strong preference for forward scattering and a reduction in total cross section with energy, significantly reducing the efficiency of evaporative cooling. We perform close-coupling calculations on the effective potential energy surface derived by Deng et al. [Phys. Rev. Lett. 130, 183001 (2023)] to obtain accurate 2-body elastic differential cross sections across a range of collision energies. We use Gaussian process regression to obtain a global representation of the differential cross section over a wide range of collision angles and energies. The route to equilibrium is then analyzed with cross-dimensional rethermalization experiments, quantified by a measure of collisional efficiency toward achieving thermalization.
Prospects for thermalization of microwave-shielded ultracold molecules
John L. Bohn January 14, 2024
=========================================================================
The ever growing interest in quantum control of polar molecules motivates the cooling of molecular gases to unprecedentedly cold temperatures <cit.>. In bulk gases, reaching such temperatures can be accomplished through evaporative cooling <cit.>, a process which throws away energetic molecules and leverages collisions to rethermalize the remaining, less energetic, distribution. Understanding and controlling 2-body scattering for thermalization is, therefore, of great importance for ultracold experiments. To this end, the exciting advent of collisional shielding with external fields has permitted a large suppression of 2-body losses between molecules <cit.>. Thermalization relies instead on the elastic cross section, which generally depends on the field-induced dipole-dipole interaction and the molecules' energy of approach. Of particular interest to this Letter is collisional shielding with microwave fields <cit.>, recently achieved at several labs around the world <cit.>. In analogous gases of magnetic atoms with comparatively small dipole moments, dipolar scattering remains close-to-threshold <cit.> at the ultracold but nondegenerate temperatures of T ∼ 100 nK <cit.>. For dipoles, threshold scattering occurs when the collision energy is much lower than the dipole energy E_ dd, in which case the scattering cross section becomes energy independent <cit.> with a universal analytic form <cit.>. Numerical studies of thermalization are made much simpler at universality, since collisions can be sampled regardless of collision energy <cit.>. However, this convenience is lost with the polar molecular gases of interest here. Take, for instance, a gas of fermionic ^23Na^40K, the species we concern ourselves with in this study. This species has a large intrinsic dipole moment of d = 2.72 D, so that even at ultracold temperatures the majority of collisions occur away from threshold, with an energy-dependent cross section.
In this Letter, we find that non-threshold collisions can dramatically reduce thermalization and thus the efficiency of the cooling process. Ignoring all one- and two-body losses for a focused study on elastic collisions, the decrease in the gas' total energy E = 3 N k_B T, along with the number of molecules N, approximately follows the coupled rate equations dN/dt = -ν(κ) γ_th N, dE/dt = -(1/3) λ(κ) γ_th E, where ν(κ) = ( 2 + 2κ + κ^2 )/( 2 e^κ ) and λ(κ) = ( 6 + 6κ + 3κ^2 + κ^3 )/( 2 e^κ ) are functions of the energetic truncation parameter κ = U/(k_B T) <cit.>. By continuously lowering the energetic depth of the confining potential U(t) = U_0 exp(-t/τ) over a time interval τ, highly energetic molecules are forced to evaporate away, lowering the number of molecules along with the gas temperature as shown in Fig. <ref>. For the plot, Eq. (<ref>) is solved by taking evaporation to occur with an initial trap depth U_0/k_B = 4 μK over τ = 0.5 s, in a harmonic trap with mean frequency ω = 2π× 100 Hz, starting at temperature T_0 = 400 nK and molecule number N_0 = 20,000. The evaporation efficiency, defined as the slope of T vs N on a log-log scale, is governed by the thermalization rate γ_th. The figure shows efficient cooling for the low-energy threshold cross sections (dashed red curve), and significantly less efficient cooling for the realistic cross sections (solid black curve). The remainder of this Letter provides the microscopic mechanisms that lead to this dramatic difference, and the efficient theoretical tools we employ to obtain these conclusions. Shielded collisions—Central to this study are collisions that occur between molecules shielded by circularly polarized microwaves <cit.>. The resulting potential energy surface between two such molecules is conveniently described by a single effective potential <cit.>: V_eff(r) = (C_6/r^6)[ 1 - ( r̂·Ê )^4 ] + (d^2/4πϵ_0)[ 3( r̂·Ê )^2 - 1 ]/r^3, where r = ( r, θ, ϕ ) is the relative position between the two colliding molecules, Ê is the axis along which the dipoles are effectively aligned, d = d_0/√( 12 (1 + (Δ/Ω)^2) ) is the effective molecular dipole moment and C_6 = d_0^4 ( 1 + (Δ/Ω)^2 )^-3/2/( 128 π^2 ϵ_0^2 ħΩ ). Here Δ and Ω are, respectively, the detuning and Rabi frequency of the microwaves. A y = 0 slice of the effective microwave shielding interaction potential is plotted in the inset of Fig. <ref>. Notably, the long-range 1/r^3 tail of V_eff(r) is almost identical to that of point dipole particles, modified only by an overall minus sign. As a result, the close-to-threshold elastic cross sections for microwave shielded molecules are identical to those for point dipoles. It is natural to introduce units based on the reduced mass μ, the dipole length and the dipole energy: a_d = μ d^2/(4πϵ_0 ħ^2) and E_dd = ħ^2/(μ a_d^2), respectively. Threshold scattering is then expected to occur for collision energies E ≪ E_dd. With the microwave parameters Ω = 2π× 15 MHz and Δ = 2π× 9.5 MHz, which will be assumed in what follows, the molecules see a dipole length of a_d ≈ 3900 a_0, corresponding to a dipole energy of E_dd/k_B ≈ 360 nK.
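As a quick numerical check of these dipole units, the short sketch below evaluates a_d and E_dd for NaK with the quoted microwave parameters; the fundamental constants and the NaK reduced mass are standard values supplied by us, and the snippet is purely illustrative.

import numpy as np

# Physical constants
hbar  = 1.054571817e-34      # [J s]
eps0  = 8.8541878128e-12     # [F/m]
kB    = 1.380649e-23         # [J/K]
amu   = 1.66053906660e-27    # [kg]
a0    = 5.29177210903e-11    # Bohr radius [m]
debye = 3.33564e-30          # [C m]

# NaK parameters from the text
d0    = 2.72 * debye                 # intrinsic dipole moment
mu    = 0.5 * (23 + 40) * amu        # reduced mass of two NaK molecules (assumed)
Omega = 2 * np.pi * 15e6             # Rabi frequency [rad/s]
Delta = 2 * np.pi * 9.5e6            # detuning [rad/s]

# Effective dipole moment, dipole length and dipole energy
d    = d0 / np.sqrt(12.0 * (1.0 + (Delta / Omega)**2))
a_d  = mu * d**2 / (4.0 * np.pi * eps0 * hbar**2)
E_dd = hbar**2 / (mu * a_d**2)

print(a_d / a0)          # ~3.9e3 Bohr radii
print(E_dd / kB * 1e9)   # ~360 nK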
Therefore, temperatures comparable to E_dd/k_B are insufficient to keep molecular scattering in the threshold regime <cit.>. Moreover, since the dipole energy scales as E_dd∼ d^-4, larger dipoles require much lower temperatures to achieve universal dipolar threshold scattering, as alluded to earlier. Away from threshold, the integral cross section σ in the presence of microwave shielding (dashed black curve) develops a nontrivial energy dependence that clearly differs from that of plain point dipoles (dotted blue curve), as illustrated in Fig. <ref>. The plotted cross sections were obtained from close-coupling calculations logarithmically spaced in energy, with a universal loss short-range boundary condition <cit.> (see Supplementary Material for further details). Away from threshold at E ≈ E_dd, the microwave shielded integral cross section does not deviate much from its value at threshold (solid red line in Fig. <ref>). But the differential cross section could still have its anisotropy changed substantially, which is what ultimately affects thermalization <cit.>. For a study of both non-threshold differential scattering and its implications for thermalization in nondegenerate Fermi gases, we take the gas' nonequilibrium evolution to be governed by the Boltzmann transport equation <cit.>. Formulated in this way, numerical solutions treat the molecular positions and momenta as classical variables, while collisions can be efficiently computed by means of Monte Carlo sampling <cit.>. But on-the-fly close-coupling calculations would be too expensive for such sampling over a broad range of collision energies and angles. Instead, we propose the following. Gaussian process fitting—At a given collision energy, the elastic differential cross section D_el is a function of the dipole alignment axis Ê and the relative ingoing and outgoing momentum vectors ħk and ħk', respectively. Collectively, we refer to this set of parameters as β. By first performing close-coupling calculations at several well-chosen collision energies E = ħ^2 k^2/(2 μ) [ The differential cross section suffers innate convergence issues due to singularities in the scattering amplitude <cit.>. Fortunately for us, forward scattering does not contribute toward cross-dimensional thermalization, which is what we are concerned with in this Letter. We leave addressing these issues to a future manuscript. ], we can use the resultant scattering data to infer an M-dimensional continuous hypersurface that approximates D_el with a Gaussian process (GP) model <cit.>. GP regression is a machine learning technique used to interpolate discrete data points, stitching them together to form a continuous global surface. To do so, a GP assumes that the values of D_el(β) at any two nearby points in its coordinate space, β_i and β_j, are jointly Gaussian distributed with a covariance given in terms of a function K(β_i, β_j), called the kernel. A parameterized functional form for the kernel is chosen prior to the surface fitting process, reducing the task of combing through an infinite space of possible functions for the one that best matches the data to a minimization over the kernel parameters. This minimization step is referred to as training the GP model. Several symmetries in the differential cross section help to reduce the computational load of training slightly.
Rotated into the frame where Ê points along the z axis, which we refer to as the dipole frame, the unique hypersurface regions effectively live in an M = 4 dimensional space, with coordinates β = (E, η, θ_s, ϕ_s). As defined, η = cos^-1 k̂·Ê is the angle between the dipole and incident relative momentum directions, where it is convenient to select k̂ to lie in the frame's x,z plane. The angles θ_s and ϕ_s denote the inclination and azimuthal scattering angles, respectively, in this frame. With these definitions, the differential cross section possesses the symmetry D_el(E, η, θ_s, ϕ_s) = D_el(E, η, θ_s, -ϕ_s). Consequently, we only need to specify the differential cross section for angles within the domain η, θ_s, ϕ_s ∈ [ 0, π ] to fully describe its global structure. More details of the appropriate frame transformations are provided in the Supplementary Material. To perform the interpolation with GP regression, we utilize the Matérn-5/2 kernel <cit.>, which is better able to capture sharp jumps in a non-smooth function than higher-order differentiable kernels such as the radial basis function. This kernel contains a parameter w that sets a length scale over which features of the data vary in coordinate space; this parameter is optimized during the model training process. The kernel is typically not ideal for periodic input data, so we make the periodicity of the angles ( η, θ_s, ϕ_s) explicitly known to the GP model by training it with the cosines of these angles instead of the angles themselves. Furthermore, log_10(E/E_dd) is fed into the GP model in place of E, to reduce the disparity in fitting domains between the coordinates of β. The GP model is trained over the range log_10(E/E_dd) = -6 to 2, corresponding to collision energies of E/k_B ≈ 0.36 pK to 36 μK. After training on ∼ 10,000 samples of D_el(E, η, θ_s, ϕ_s), the resulting GP fit obtains a mean-squared error of ≈ 0.5 % against the close-coupling calculations[ We utilize more points than is usually necessary for GP fitting in this study, so as to obtain more accurate results for subsequently computed quantities in this Letter. We also optimize the model's hyperparameters <cit.> on top of just the kernel parameters. Even so, the Gaussian process model has issues faithfully reproducing the differential cross section around η, θ_s = 90^∘, known to have a discontinuity at threshold <cit.>. Fortunately, this angular segment corresponds to forward scattering, which does not contribute to the cross-dimensional thermalization process of interest here. We set this issue aside for future work. ], which we take as an accurate representation of the actual cross section. In Fig. <ref>, we plot the total cross section σ(E, η) = ∫ D_el(E, η, Ω_s) dΩ_s at various collision energies. There is a marked variation in the η dependence, indicating a higher tendency for side-to-side collisions (η=90^∘) over head-to-tail ones (η=0^∘) at higher energies. To highlight the dominant anisotropic scattering process, Fig. <ref> also provides plots of the differential cross section at η = 45^∘, the approximate angle at which σ is maximal. As energy increases from subplots (a) to (d), the scattering-angle dependence of D_el becomes biased toward forward scattering, reducing the effectiveness of collisions for thermalization as discussed below. Alphabetic labels in Fig. <ref> consistently correspond to the collision energies: (b) E = 0.2 E_dd, (c) E = 2 E_dd and (d) E = 20 E_dd. The Born approximated cross sections at threshold <cit.> are labeled with (a).
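A minimal sketch of this fitting procedure, using the scikit-learn implementation of GP regression with a Matérn-5/2 kernel and the transformed inputs described above, is shown below; the training arrays are placeholders for the close-coupling data, and the hyperparameter settings are illustrative assumptions rather than the values actually used.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

# Placeholder training data: rows of (E/E_dd, eta, theta_s, phi_s) with the
# corresponding close-coupling differential cross sections D_el.
raw_inputs = np.random.rand(1000, 4)       # stand-in for the real sample grid
D_el       = np.random.rand(1000)          # stand-in for close-coupling values

def transform(x):
    """Map (E/E_dd, eta, theta_s, phi_s) to the GP input coordinates."""
    E, eta, theta_s, phi_s = x.T
    return np.column_stack([np.log10(E + 1e-12),   # log energy
                            np.cos(eta),
                            np.cos(theta_s),
                            np.cos(phi_s)])        # cosines encode periodicity

kernel = Matern(length_scale=[1.0, 1.0, 1.0, 1.0], nu=2.5)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(transform(raw_inputs), D_el)

# Interpolated cross section at new (E/E_dd, eta, theta_s, phi_s) points
query = np.array([[2.0, np.pi / 4, np.pi / 2, np.pi / 3]])
D_el_pred = gp.predict(transform(query))

In the actual fit, the kernel length scales and the additional model hyperparameters are optimized during training, as described above.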
Collisional thermalization—Fast and easy access to the accurate differential cross section via its GP model now permits accurate theoretical investigations of nondegenerate gas dynamics. More specifically, we are concerned here with a gas' route to thermal equilibrium. A common experiment for such analysis is cross-dimensional rethermalization <cit.>, in which a harmonically trapped gas is excited along one axis, then left alone to re-equilibrate through collisions. We present results in terms of the temperatures along each axis i, defined in the presence of a harmonic trap through k_B T_i = ( ⟨ p_i^2 ⟩/m + m ω_i^2 ⟨ q_i^2 ⟩ )/2, where ⟨…⟩ = ∫ d^3 q d^3 p f(q, p)( … ) denotes a phase space average over the phase space distribution f in molecular positions q and momenta p, while ω_i are the harmonic trapping frequencies. As is usual in cross-dimensional rethermalization, we consider an excitation of axis i and then proceed to measure the thermalization rate along axis j. This is modeled by taking axis i to have an initial out-of-equilibrium temperature T_i = T_0 + δ_i/k_B, with an energy perturbation δ_i, while the other two axes are simply at the initial temperature T_0. In the case of a dilute gas, the relaxation of T_j follows an exponential decay in time, whose rate γ_ij is related to the standard collision rate γ_coll by a proportionality factor ε_ij = γ_ij/γ_coll. As defined, the quantity ε_ij is the inverse of the so-called number of collisions per rethermalization <cit.>, a measure of thermalization common in the literature <cit.>. We opt to utilize the inverse quantity instead, as it is the more natural definition for discussing the efficiency of evaporative cooling. With γ_coll = ⟨ n ⟩⟨σ v_r ⟩ defined as usual in terms of the phase space averaged number density ⟨ n ⟩ and 2-body elastic rate ⟨σ v_r ⟩, ε_ij represents the efficiency of each non-threshold collision toward thermalization of the gas. This collisional efficiency is formally cast in terms of the integral ε_ij ≈ α_ij (π^2/64) ∫ d^3 κ/( 2 π )^3 ( e^-κ^2/4/√(π) ) ∫ d^2Ω' D'_el ( κ/⟨σκ⟩ ) Δκ^2_i Δκ^2_j, where Δκ_i^2 = κ_i'^2 - κ_i^2 is the collisional change in the adimensional relative momenta κ = p_r ( m k_B T_0 )^-1/2, α_ij = 3/2 if i = j, and α_ij = -3 otherwise (see Supplementary Materials). The integral above has been evaluated analytically in the threshold scattering regime <cit.>, both for identical dipolar fermions and bosons. Evidently from Eq. (<ref>), ε_ij is symmetric in its indices, which leaves only 6 unique configurations of i and j. Taking the dipoles to lie in the x,z-plane, tilted at angle Θ = cos^-1 Ê·ẑ, we compute Eq. (<ref>) with Monte Carlo integration [ The Monte Carlo integration gives a ≲ 1% error, which is mostly imperceptible in the log-linear plot.] and plot the results in Fig. <ref>. Each subplot (a to f) shows a different (i, j) configuration, within which ε_ij is plotted against the dipole tilt angle Θ as dashed curves, for the temperatures T = 10 nK (black), T = 100 nK (dark gray), T = 400 nK (gray) and T = 1 μK (light gray). Interestingly, the ε_ij terms involving excitation or rethermalization along y essentially lose their dependence on Θ around 400 nK, beyond which collisions are less efficient than even nondipolar p-wave scattering (dashed blue line in Fig. <ref>) <cit.> for all Θ. This decrease can be intuited by looking at the differential cross section around η = 45^∘, around which the total cross section is maximal. As evidenced from the subplots of D_el in Fig.
<ref>, forward scattering is favored at higher collision energies, limiting momentum transfer between axes and therefore also the efficiency of collisions toward rethermalization. Preferential forward scattering is what ultimately leads to the reduction in evaporation efficiency described earlier and seen in Fig. <ref>. There, the rate of thermalization was approximated by the average γ_th = γ_coll ∑_i,j ε_ij/9, as is expected for evaporation along all three dimensions. The dipoles were assumed aligned along Θ = 90^∘, and γ_th was interpolated over several temperatures to solve Eq. (<ref>). Realistically, forced evaporation by trap depth lowering tends to occur primarily along one direction, reducing the evaporation efficiency in the presence of molecular losses <cit.>. The resulting out-of-equilibrium momentum distribution from single-axis evaporation will be much like that in cross-dimensional rethermalization experiments, where an anisotropic collisional efficiency could now be used to one's advantage. For instance, near-unity collisional efficiency is achieved in the threshold regime with ε_xz specifically at Θ = 45^∘. Optimal evaporation protocols could thus be engineered by varying the molecular dipole orientation relative to the axis of evaporation. We leave such investigations to a future work. Outlook and conclusions—By constructing a GP model of the elastic differential cross section between microwave shielded polar molecular fermions, we have found that non-threshold collisions can greatly diminish the efficacy of collisions toward thermalization of a nondegenerate gas. It is thus prudent to perform evaporation in the threshold regime, with the caveat that Pauli blocking in fermions would also lower the collisional efficiency below the Fermi temperature <cit.>. If deployed in direct simulation Monte Carlo solvers <cit.>, this GP model could also permit accurate dynamical studies in the Fermi degenerate or hydrodynamic regimes. The latter is motivated by the restriction that ε_ij can only describe thermalization in dilute samples. With the larger molecular dipoles and densities required to achieve quantum degeneracy, the collision rate far exceeds the mean trapping frequency, demanding that equilibration of trapped dipolar gases be treated within a hydrodynamic framework <cit.>. The method of GP interpolation proposed here could similarly be applied to DC field shielded molecules <cit.> and bosonic species. Acknowledgments—The authors are grateful to Luo Xin-Yu for motivating discussions and insights on evaporation in molecular Fermi gases. This work is supported by the National Science Foundation under Grant Number PHY2110327. Supplemental material for: Prospects for thermalization of microwave-shielded ultracold molecules Reuben R. W. Wang and John L. Bohn
JILA, NIST, and Department of Physics, University of Colorado, Boulder, Colorado 80309, USA
§ SCATTERING CALCULATIONS OF SHIELDED MOLECULES
For two polar molecules scattering off the effective potential V_eff(r) provided in the main text, scattering solutions can be obtained by first expanding the wavefunction in the basis ψ( r ) = ∑_ℓ, m_ℓ u_E, ℓ, m_ℓ(r)/r Y_ℓ, m_ℓ(θ, ϕ), where Y_ℓ, m_ℓ(θ, ϕ) are spherical harmonics, and u_E, ℓ, m_ℓ(r) are solutions to the radial time-independent Schrödinger equation: ( d^2/dr^2 - ℓ(ℓ+1)/r^2 + k^2 ) u_E, ℓ, m_ℓ(r) = (2μ/ħ^2) ∑_ℓ', m'_ℓ ⟨ℓ, m_ℓ| V_eff(r) |ℓ', m'_ℓ⟩ u_E, ℓ', m'_ℓ(r). Above, k^2 = 2μE/ħ^2 defines the collision wavenumber k, and the explicit matrix elements ⟨ℓ, m_ℓ| V_eff(r) |ℓ', m'_ℓ⟩ are provided below in (<ref>). Numerical scattering solutions associated with Eq. (<ref>) require picking a consistent convention when referencing the associated scattering matrices. We present our adopted convention as follows. First defining the matrices D_ℓ, m_ℓ^ℓ', m'_ℓ = δ_ℓ, ℓ' δ_m_ℓ, m'_ℓ d^2/dr^2, W_ℓ, m_ℓ^ℓ', m'_ℓ = δ_ℓ, ℓ' δ_m_ℓ, m'_ℓ ( k^2 - ℓ(ℓ+1)/r^2 ) - (2μ/ħ^2) ⟨ℓ, m_ℓ| V_eff(r) |ℓ', m'_ℓ⟩, and the fundamental set of radial wavefunction solutions U(r; E), Eq. (<ref>) can be recast as the compact system of equations: [ D + W ] U = 0. In principle, these equations can be solved numerically at a given collision energy E by propagating the log-derivative matrix Y(r) = U^-1(r) ∂U(r)/∂r = ∂ log U(r)/∂r from r = 0 to r →∞. In practice, however, propagating to ∞ is not possible, so we only do so up to r = r_match, then match Y(r) to the asymptotic solutions where the distant colliders no longer interact. Moreover, we sidestep the issue of singularities at the origin by imposing a short-range boundary condition: we start the propagation at a minimum radius r = r_min, then initialize the diagonal log-derivative matrix there as <cit.> Y_ℓ, m_ℓ^ℓ, m_ℓ (r_min) = -i √( W_ℓ, m_ℓ^ℓ, m_ℓ(r_min) ), which assumes universal short-range loss. This boundary condition prevents dipolar scattering resonances <cit.>, which simplifies our current study. Propagation is done with an adaptive radial step size version of Johnson's algorithm <cit.>, utilizing r_min = 100 a_0, r_match = √(ħ^2 L ( L + 1 )/(m E)) + 50 a_d, where a_0 is the Bohr radius and L is the largest value of ℓ utilized in the calculation. Typically, we utilize L = 121 or as many partial waves as are required for numerical convergence. The asymptotic solutions to Eq. (<ref>) arise by considering the domain where r is much larger than the range of the potential, so that Eq. (<ref>) is well approximated as ( d^2/dr^2 - ℓ(ℓ+1)/r^2 + k^2 ) u_E, ℓ, m_ℓ(r) ≈ 0. This asymptotic radial equation is solved by the two independent solutions (up to arbitrary normalization): f_E, ℓ(r) = k r j_ℓ(k r), g_E, ℓ(r) = k r n_ℓ(k r), where j_ℓ(kr) and n_ℓ(kr) are the spherical Bessel and Neumann functions, respectively. Then defining the matrices F_ℓ, m_ℓ^ℓ', m'_ℓ(r; E) = δ_ℓ, ℓ' δ_m_ℓ, m'_ℓ f_E, ℓ(r), G_ℓ, m_ℓ^ℓ', m'_ℓ(r; E) = δ_ℓ, ℓ' δ_m_ℓ, m'_ℓ g_E, ℓ(r), arbitrary solutions to Eq. (<ref>), and in fact Eq. (<ref>), can be written as U(r) = N [ F(r) - K G(r) ], where K is the reactance matrix that is responsible for matching the numerical scattering solutions U to the asymptotic solutions in Eq. (<ref>) at r = r_match. In particular, the off-diagonal elements of K provide information on the channel couplings that arise due to the interaction potential for a given incident collision channel. The matrix N is relevant only for normalization.
The reactance matrix can be written in terms of the logarithmic derivative via K = [ F(r) Y(r) - ∂F(r)/∂r ]/[ G(r) Y(r) - ∂G(r)/∂r ] |_r = r_match, from which we can then compute the other scattering matrices via the relations <cit.> S = (I + iK)/(I - iK), T = i( S - I ). The scattering matrices above permit us to evaluate the scattering amplitude, noting that m_ℓ remains a good quantum number in these collisions (App. <ref>), as f_sc( k, k̂' ) = -(2π/k) ∑_m_ℓ ∑_ℓ, ℓ' i^ℓ Y_ℓ, m_ℓ^*(k̂) T_ℓ, m_ℓ^ℓ', m_ℓ(k) Y_ℓ', m_ℓ(k̂') i^-ℓ', which gives the appropriately antisymmetrized elastic differential cross section <cit.> via D_el( k, k̂' ) = (1/2) | f_sc( k, k̂' ) - f_sc( k, -k̂' ) |^2, the total cross section <cit.> σ( k ) = ∫ d^2 k̂' D_el( k, k̂' ) = (4π^2/k^2) ∑_m_ℓ ∑_ℓ̃, ℓ, ℓ' i^ℓ-ℓ̃ Y_ℓ̃, m_ℓ(k̂) Y_ℓ, m_ℓ^*(k̂) [ T_ℓ̃, m_ℓ^ℓ', m_ℓ(k) ]^* T_ℓ, m_ℓ^ℓ', m_ℓ(k), and the integral total cross section σ = (2π/k^2) ∑_m_ℓ ∑_ℓ, ℓ' | T_ℓ, m_ℓ^ℓ', m_ℓ(k) |^2.
§.§ Matrix elements of the effective potential
To perform the scattering calculations on the effective single-channel microwave shielded potential energy surface, we are required to compute the ⟨ℓ, m_ℓ| V_eff(r) |ℓ', m'_ℓ⟩ matrix elements. We list these elements explicitly in this section. The matrix elements for V_dd(r) are given as ⟨ℓ, m_ℓ| V_dd(r) |ℓ', m'_ℓ⟩ = (d_eff^2/4πϵ_0 r^3) ⟨ℓ, m_ℓ| [ 4√(π/5) Y_2,0(θ, ϕ) ] |ℓ', m'_ℓ⟩ = (d_eff^2/4πϵ_0 r^3) [ 4√(π/5) ∫ dΩ Y_ℓ, m_ℓ^*(Ω) Y_2,0(Ω) Y_ℓ', m'_ℓ(Ω) ] = (d_eff^2/4πϵ_0 r^3) 2(-1)^m_ℓ √( (2ℓ+1)(2ℓ'+1) ) [ℓ 2 ℓ'; 0 0 0][ℓ 2 ℓ'; -m_ℓ 0 m'_ℓ], while the matrix elements for V_6(r) are given as ⟨ℓ, m_ℓ| V_6(r) |ℓ', m'_ℓ⟩ = (C_6/r^6) ⟨ℓ, m_ℓ| ( 1 + cos^2θ ) sin^2θ |ℓ', m'_ℓ⟩ = (4√(π) C_6/r^6) ⟨ℓ, m_ℓ| [ (2/5) Y_0,0(θ, ϕ) - (2/7)√(1/5) Y_2,0(θ, ϕ) - (4/105) Y_4,0(θ, ϕ) ] |ℓ', m'_ℓ⟩ = (C_6/r^6) 2(-1)^m_ℓ √( (2ℓ+1)(2ℓ'+1) ) × [ (2/5) [ℓ 0 ℓ'; 0 0 0][ℓ 0 ℓ'; -m_ℓ 0 m'_ℓ] - (2/7) [ℓ 2 ℓ'; 0 0 0][ℓ 2 ℓ'; -m_ℓ 0 m'_ℓ] - (4/35) [ℓ 4 ℓ'; 0 0 0][ℓ 4 ℓ'; -m_ℓ 0 m'_ℓ] ].
§ FRAME TRANSFORMATIONS FOR GAUSSIAN PROCESS FITTING
For efficient GP fitting of the elastic differential cross section, it is optimal to choose a coordinate frame in which the symmetries are most conveniently handled. Naively, the differential cross section during a two-body collision involves 3 unit vectors: 1) the dipole alignment axis Ê, 2) the incident relative momentum k and 3) the outgoing relative momentum k', therefore requiring 6 angular coordinates. This is the description in a lab frame (LF), which, without loss of generality, we define such that the dipole axis lies in its x,z-plane, Ê_LF = ( sinΘ, 0, cosΘ )^T, and the other 2 vectors are given in terms of spherical coordinates as k̂ = [ sinθcosϕ; sinθsinϕ; cosθ ], k̂' = [ sinθ' cosϕ'; sinθ' sinϕ'; cosθ' ]. However, we can also define a dipole frame (DF) which utilizes the dipole alignment direction as its z-axis, Ê = ẑ_DF, while its x axis is aligned to the plane in which both Ê and k̂ lie, so that ŷ_DF = (Ê×k̂)/| Ê×k̂ | = (1/| Ê×k̂ |) [ -cosΘsinθsinϕ; cosΘsinθcosϕ - sinΘcosθ; sinΘsinθsinϕ ]. The remaining x̂_DF axis is then obtained with the cross product x̂_DF = ŷ_DF×ẑ_DF. In the event where Ê and k̂ coincide, we simply choose x̂_DF = [ sin(Θ + π/2); 0; cos(Θ + π/2) ], and also ŷ_DF = Ê×x̂_DF. The differential cross section only cares about the relative angle between k̂ and Ê, not the vectors themselves. The dipole frame allows a convenient handling of this fact: we can simply write k̂ = ( sinη, 0, cosη )^T, where η = cos^-1 k̂·Ê.
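For concreteness, the snippet below sketches the dipole-frame construction just described; it is an assumed implementation for illustration, not the authors' code.

import numpy as np

def dipole_frame(E_hat, k_hat, Theta):
    """Build the dipole-frame basis (x_DF, y_DF, z_DF) and the angle eta."""
    eta = np.arccos(np.clip(np.dot(k_hat, E_hat), -1.0, 1.0))
    z_DF = E_hat
    cross = np.cross(E_hat, k_hat)
    norm = np.linalg.norm(cross)
    if norm > 1e-12:
        y_DF = cross / norm              # y_DF = (E x k)/|E x k|
        x_DF = np.cross(y_DF, z_DF)      # completes the right-handed frame
    else:
        # E_hat and k_hat (anti)parallel: fall back to the choice in the text
        x_DF = np.array([np.sin(Theta + np.pi / 2), 0.0, np.cos(Theta + np.pi / 2)])
        y_DF = np.cross(E_hat, x_DF)
    return x_DF, y_DF, z_DF, eta

# Example: dipole tilted by Theta in the lab x,z-plane, incident momentum along z
Theta = np.pi / 4
E_hat = np.array([np.sin(Theta), 0.0, np.cos(Theta)])
k_hat = np.array([0.0, 0.0, 1.0])
x_DF, y_DF, z_DF, eta = dipole_frame(E_hat, k_hat, Theta)   # here eta = Theta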
So to obtain the post-collision relative momentum in the dipole-frame, we can construct the required rotation matrix R(DF← LF ), by the method of direction cosines R(DF← LF )= [ x̂_ DF·x̂_ LFx̂_ DF·ŷ_ LFx̂_ DF·ẑ_ LF;ŷ_ DF·x̂_ LF ŷ_ DF·ŷ_ LF ŷ_ DF·ẑ_ LF;ẑ_ DF·x̂_ LF ẑ_ DF·ŷ_ LF ẑ_ DF·ẑ_ LF ]. The outbound relative collision vector is then given in dipole-frame as k̂'(DF ) = R(DF← LF )k̂'(LF ). We then denote inclination and azimuthal scattering angles in the dipole frame as θ_s and ϕ_s respectively. Furthermore, the dipole frame as defined makes the differential cross section symmetric about the x,z-plane, only requiring us to specify ϕ_s within the range [0, π]. The entire differential cross section can then be obtained by specifying its value in the appropriate energy interval, and for η, θ_s, ϕ_s ∈ [0, π].§THE EIKONAL APPROXIMATIONAt collision energies much larger than E_ dd, the scattering becomes semiclassical with the total cross section well approximated by the Eikonal approximation <cit.>. Within this approximation, the scattering amplitude, considering on the 1/r^3 long-range tail of V_ eff, is given byf_ sc^ Ei( k, k̂' )=a_d k̃/ 2 π i ∫_0^2π dϕ∫_0^∞ d b̃ b̃ e^ i q̃b̃cosϕ[exp( - 1 /k̃∫_-∞^∞Ṽ_ eff(r̃') dz̃' )- 1 ] =a_d k̃/ 2 π i ∫b̃d b̃ dϕ e^ i q̃b̃cosϕ[exp( - 2 i /k̃b̃^2 sin^2αcos( 2 ϕ - 2 β ) )- 1 ],where α = cos^-1( k̂·Ê ), β = tan^-1( ŷ·Ê / x̂·Ê ), q = k - k' is the momentum transfer vector, b is the impact parameter and tildes denote adimensional quantities normalized by the relevant dipole units (see the main text).From the scattering amplitude, we can compute the total cross section using the optical theoremσ^ Ei_ total( k )=4 π/ kIm{ f_ sc^ Ei( q̃ = 0 ) },which requires evaluation of the scattering amplitude at forward scattering q̃ = 0:f_ sc^ Ei( k, k̂' )=a_d / 2 π i k̃∫ℓ̃ d ℓ̃ dϕ[e^ - 2 i k̃/ℓ̃^2 sin^2αcos( 2 ϕ - 2 β )- 1 ] =a_d / 2 π i k̃∫ℓ̃ d ℓ̃ dϕ[∑_m=0^∞ϵ_m(-i)^m J_m (2 k̃/ℓ̃^2 sin^2α) cos[ 2 m ( ϕ - β ) ] - 1] =a_d / i k̃∫ℓ̃ d ℓ̃[J_0(2 k̃/ℓ̃^2 sin^2α)- 1] = i a_d sin^2α,where ϵ_m = 1 if m = 0, and ϵ_m = 2 if m > 0, and we utilized the substitution ℓ = k b. Then plugging the forward scattering amplitude into the optical theorem givesσ^ Ei_ total( k ) =4 π a_d / k [ 1 - ( k̂·Ê)^2 ],which averaged over incident directions, givesσ^ Ei_ total =8 π a_d / 3 k ,identical to the formula obtained for point-dipole scattering in Ref. <cit.>. The result above is applicable to distinguishable dipoles, and so may not be quantitatively accurate in describing our study of scattering between identical fermions. Nevertheless, it serves to provide a useful visual aid to the expectation energy scaling, and seems to actually give rather favorable quantitative agreement. §DERIVING THE COLLISIONAL EFFICIENCY TOWARD THERMALIZATION Obtaining the form of N_i j in the main text requires formulation of the Enskog equations. To do so, we define the phase space averaged quantity ⟨χ_i ⟩ = k_B (T_i - T_ eq ), which quantifies the system's deviation from its equilibration temperature, T_ eq. Then multiplying the Boltzmann equation <cit.> by χ_i and integrating it over phase space variables <cit.>, we derived Enskog equations that govern the relaxation of ⟨χ_j ⟩: d ⟨χ_i ⟩/ dt= C[ χ_i ], C[ χ_i ] =⟨ n ⟩/ 2 ∫ d^3 p_r / mp_r c_r(p_r, t)∫ d^2Ω'D_ elΔχ_i,where ⟨ n ⟩ is the average number density, c_r(p_r, t) is the distribution of relative momentum p_r, and Δχ≡χ^' + χ_1^' - χ - χ_1 denotes the amount by which χ changes during a collision event. Taken only perturbatively from equilibrium along axis i, Eq. 
<ref> is approximated by the decay law C[ χ_j ] ≈ - γ_i j⟨χ_j ⟩, which results in the short-time relationγ_i j = - . 1 /( 𝒯_j(t) - T_eq)d 𝒯_j(t) / d t |_t = 0,identifying γ_i j as the thermalization rate. Now considering the collision integralC[ χ_j ] =⟨ n ⟩/ 2 ∫ d^3 p_r / mp_r c_r(p_r, t)∫ d^2Ω'D_ elΔχ_j, =k_B ⟨ n ⟩/ 2 ∫ d^3 p_r / mp_r c_r(p_r, t)∫ d^2Ω'D_ elΔ T_j,we move to center of mass and relative momentum coordinates, P = (p + p_1) / 2 and p_r = p - p_1 respectively, so that the change in χ_i is given asΔχ_i=Δ p_i^2/ 2 m = p'^2_j + p'^2_1, j - p_j^2 - p_1, j^2 / 2 m = p'^2_r, j + P'^2_j - p^2_r, j - P^2_j/ 4 m = p'^2_r, i - p^2_r, i/ 4 m . As for the relative momentum distribution, we Taylor expand it at t = 0 with respect to δ_i / k_B T_0 to get c_r( p_r, 0 )= ∏_i(1 / 4 π m k_B T_i )^1/2exp( - p_r,i^2 / 4 m k_B T_i ) ≈c_r^(0)(p_r) [ 1 + (p_r, i^2 / 4 m k_B T_0-1 / 2 ) δ_i / k_B T_0 ], c_r^(0)(p_r)=1 / ( 4 π m k_B T_0 )^3/2exp( - p_r^2 / 4 m k_B T_0 ). The expressions above render the collision integral C[ χ_j ]≈⟨ n ⟩/ 2 ∫ d^3 p_r / mp_r c_r^(0)(p_r) [ 1 + (p_r, i^2 / 4 m k_B T_0-1 / 2 ) δ_i / k_B T_0 ] ∫ d^2Ω' D_ el(p'^2_r, j - p^2_r, j/ 4 m ) = δ_i / 16 ( m k_B T_0 )^2 ⟨ n ⟩/ 2 ∫ d^3 p_r / mc_r^(0)(p_r) p_r ∫ d^2Ω' D'_ elp_r, i^2 ( p'^2_r, j - p^2_r, j),which upon utilizing the time-reversal symmetry of elastic collisionsC[ χ_j ] ≈δ_i / 16 ( m k_B T_0 )^2 ⟨ n ⟩/ 2 ∫ p_r^2 d p_r c_r^(0)(p_r)p_r / m ∫ d^2Ω d^2Ω' D'_ elp_r, i^2 ( p'^2_r, j - p^2_r, j) = δ_i / 16 ( m k_B T_0 )^2 ⟨ n ⟩/ 2 ∫ p_r^2 d p_r c_r^(0)(p_r)p_r / m ∫ d^2Ω' d^2Ω D'_ elp_r, i'^2 ( p^2_r, j - p'^2_r, j),the expression above can also be written in a form that is explicit in the symmetry under exchange of indices i and j:C[ χ_j ]= -δ_i / 32 ( m k_B T_0 )^2 ⟨ n ⟩/ 2 ∫ d^3 p_r / mc_r^(0)(p_r) p_r ∫ d^2Ω' D'_ el( p'^2_r, i - p^2_r, i) ( p'^2_r, j - p^2_r, j).We have used the suggestive notation D'_ el =D_ el(p_r, Ω'). Plugging C[ χ_j ] as written into Eq. (<ref>) and taking 𝒯_j(t) - T_ eq = ϵ_j / k_B, we obtain γ_i j = -C[ 𝒯_j ] /( 𝒯_j(t) - T_ eq) = -k_B /ϵ_jC[ 𝒯_j ] = δ_i /ϵ_j ⟨ n ⟩/ 512 ∫ p_r^2 d p_r / ( π m k_B T_0 )^3/2exp( - p_r^2 / 4 m k_B T_0 )p_r / m ∫d^2Ω d^2Ω' D'_ el(p'^2_r, i - p^2_r, i/ m k_B T_0 ) (p'^2_r, j - p^2_r, j/ m k_B T_0 ). Finally, taking the limit of δ_i / (k_B T_0) → 0,we obtain ε_i j in the main text,having defined α_i j = δ_i / ϵ_j and using the equipartition theorem (T_ eq = T_0 + δ_i/3 k_B) to getδ_i /ϵ_j=3 / 2 , i = j, -3, i ≠ j.
http://arxiv.org/abs/2310.17812v2
{ "authors": [ "Reuben R. W. Wang", "John L. Bohn" ], "categories": [ "cond-mat.quant-gas", "quant-ph" ], "primary_category": "cond-mat.quant-gas", "published": "20231026230647", "title": "Prospects for thermalization of microwave-shielded ultracold molecules" }
Regenerations and applications Gianluca Pacienza January 14, 2024 ============================== Learning from demonstration (LfD) is a popular technique that uses expert demonstrations to learn robot control policies. However, the difficulty in acquiring expert-quality demonstrations limits the applicability of LfD methods: real-world data collection is often costly and the quality of the demonstrations depends greatly on the demonstrator's abilities and safety concerns. A number of works have leveraged data augmentation (DA) to inexpensively generate additional demonstration data, but most DA works generate augmented data in a random fashion and ultimately produce highly suboptimal data. In this work, we propose Guided Data Augmentation (GuDA), a human-guided DA framework that generates expert-quality augmented data. The key insight of GuDA is that while it may be difficult to demonstrate the sequence of actions required to produce expert data, a user can often easily identify when an augmented trajectory segment represents task progress. Thus, the user can impose a series of simple rules on the DA process to automatically generate augmented samples that approximate expert behavior. To extract a policy from GuDA, we use off-the-shelf offline reinforcement learning and behavior cloning algorithms. We evaluate GuDA on a physical robot soccer task as well as simulated D4RL navigation tasks, a simulated autonomous driving task, and a simulated soccer task. Empirically, we find that GuDA enables learning from a small set of potentially suboptimal demonstrations and substantially outperforms a DA strategy that samples augmented data randomly.
§ INTRODUCTION
Learning from demonstration (LfD) is a popular learning paradigm in which robots learn to solve complex tasks by leveraging successful demonstrations provided by a human. In contrast to more traditional control methods that require a human expert to pre-program desired control sequences or formulate control as a constrained optimization problem <cit.>, LfD is an intuitive alternative that enables experts and non-experts alike to develop control policies. Instances of LfD such as imitation learning (IL) <cit.> and offline reinforcement learning[ Offline RL can learn from suboptimal data, but it is far more successful with expert demonstrations <cit.>. Thus, we view it as an LfD method.] (RL) <cit.> have proven to be viable methods for learning effective policies in real-world tasks such as robot manipulation <cit.> and autonomous driving <cit.>. The performance and generalization capabilities of LfD methods depend greatly on the quantity and quality of demonstrations provided to the learning agent <cit.>.
Ideally, we would provide large amounts of expert-quality demonstrations, but acquiring such data is often challenging in real-world tasks: the expense of data collection often limits us to just a few demonstrations, and the quality of these demonstrations depends on the demonstrator's level of expertise as well as the degree of safety they must exercise while collecting data. Moreover, while prior works have shown that many offline RL algorithms perform well even with highly suboptimal data <cit.>, these same works show that offline RL performs far better with expert-quality data. As such, we focus on developing methods that enable practitioners to cheaply acquire high-quality demonstrations. In this work, we introduce Guided Data Augmentation (GuDA), a human-guided data augmentation framework capable of generating large amounts of expert-quality data from a limited set of demonstrations. Data augmentation (DA) refers to techniques that generate additional synthetic experience without the expense of task interaction by applying transformations to previously collected experience. These transformations – or data augmentation functions (DAFs) – typically leverage task-specific invariances and symmetries inherent to many real-world tasks (e.g., translational invariance <cit.>, gait symmetry <cit.>). Most prior DA works sample augmented data from a given DAF uniformly at random <cit.> or randomly generate augmented trajectories from a learned dynamics model <cit.>. Unfortunately for the use of these techniques in LfD, randomly generated augmented experience is generally highly suboptimal and does not capture behaviors needed to solve a given task. The key insight of GuDA is that a human expert can often determine if a trajectory segment resembles expert data by simply checking if its sequence of states brings the agent closer to solving the task. Thus, instead of randomly sampling augmented data, GuDA uses a series of user-defined rules to automatically generate augmented data that makes substantial progress towards task completion. To make this concept more concrete, imagine we are training an autonomous vehicle to park in a parking lot using a limited set of demonstrations (Fig. <ref>). Since a parking lot has a relatively uniform surface, we can generate augmented experience by translating and rotating the agent in our demonstrations. Expert behaviors for this task include (1) driving towards the desired parking spot and (2) orienting the car inside the parking spot. However, a uniformly random sampling of augmented data will most often produce data in which the agent drives away from the parking spot or approaches it at an unfavorable angle. With GuDA, we can generate relevant augmented data by only sampling augmented trajectory segments in which the agent successfully parks the car. Such augmented data closely mimics data that an expert policy would generate and provides more varied expert-quality data without asking for more demonstrations. The benefits of GuDA are twofold: First, GuDA enables practitioners to generate expert data without the expense of task interaction. Second, instead of requiring that an expert demonstrate an optimal sequence of actions required to solve a task, GuDA simply requires the user to judge if an augmented trajectory segment represents progress towards task completion. We evaluate GuDA with off-the-shelf offline RL and behavior cloning algorithms on simulated navigation, autonomous driving, and soccer tasks as well as a physical robot soccer task.
Empirically, GuDA enables robots to learn effective policies starting from just a few demonstrations – even highly suboptimal demonstrations. Moreover, we find that GuDA greatly outperforms a DA strategy that samples augmented data uniformly at random. In summary, our contributions are: * We demonstrate how a user can guide data augmentation to inexpensively produce expert-quality data from potentially suboptimal experience. * We show that GuDA significantly outperforms the most widely used DA strategy – one that samples augmented data uniformly at random – highlighting the benefits of a more intentional approach to DA. In fact, this random DA strategy often harms performance.
§ RELATED WORK
In this section, we provide an overview of prior work in LfD and data augmentation.
§.§ Learning from Demonstrations (LfD)
LfD methods have taken many forms in the literature. In this section, we discuss LfD methods relevant to our work.
§.§.§ Imitation Learning
The simplest imitation learning (IL) method is behavior cloning (BC), a technique in which the agent learns to map observed states to expert actions in a supervised manner. BC often produces policies that generalize poorly to unobserved states and cannot produce policies that exceed the performance level achieved by the expert <cit.>. DAgger <cit.> mitigates these drawbacks by iteratively running BC and then collecting additional data with the BC-trained policy, though this online interaction may be prohibitively expensive in robotics tasks <cit.>. In contrast to BC, inverse RL (IRL) methods <cit.> infer a reward function from demonstrations and then learn a policy which optimizes this reward function. By avoiding simple copying of the demonstrator, the agent can generalize to states not provided in demonstrations and potentially exceed the demonstrator's performance. However, IRL assumes the demonstrator optimizes some true reward function and thus still requires expert data. Moreover, many IRL algorithms require online interaction with the task and thus, like DAgger, may be impractical when further online interaction is infeasible <cit.>. To address limitations found in both types of IL methods, GuDA generates large amounts of expert data from a limited set of demonstrations.
§.§.§ Offline Reinforcement Learning
Offline RL <cit.> is a learning paradigm in which an RL agent learns from a static dataset of task demonstrations. Rather than mimicking demonstrations, these methods learn a reward-maximizing policy from reward labels provided with the demonstrations. These methods are designed such that, in principle, they can learn even with suboptimal data. Nevertheless, offline RL is generally far more successful with expert data <cit.>. Thus, we view offline RL as an LfD technique. One core challenge with offline RL is extrapolation error: state-action pairs outside of the dataset's support can attain arbitrarily inaccurate state-action values during training, causing learning instabilities and poor generalization during deployment <cit.>. This challenge is particularly problematic for real-world robotics tasks in which offline data is scarce. Offline RL algorithms typically combat extrapolation error with policy parameterizations that only consider state-action pairs within the dataset <cit.> or behavioral cloning regularization <cit.>. GuDA, like other DA strategies (Sec. <ref>), can be viewed as a technique to mitigate extrapolation error by simply generating more data without further task interaction.
However, GuDA also improves dataset quality by generating expert augmented data.
§.§ Data Augmentation
Data augmentation (DA) refers to techniques which generate synthetic data by transforming previously collected experience. DA has been applied to a variety of tasks, including algorithm discovery <cit.>, locomotion <cit.>, and physical robot manipulation <cit.>. This technique is particularly useful for robotics; it can generate data that matches real-world dynamics without further task interaction. DA is most often used to generate perturbed data with the same semantic meaning as the original data. Many vision-based RL works have trained agents to be robust to visual augmentations commonly used in computer vision <cit.>, and similar approaches have been applied to non-visual tasks <cit.>. These approaches are orthogonal to GuDA; they use DA to learn robust policies, whereas GuDA uses DA to improve dataset quality. Perturbation-based DA methods more closely relate to domain randomization <cit.>, which also aims for policy robustness. Other works exploit invariances and symmetries in a task's dynamics to generate data that is semantically different from the original data. Hindsight experience replay (HER) <cit.> counterfactually relabels a trajectory's goal. Counterfactual Data Augmentation (CoDA) <cit.> and Model-based CoDA (MoCoDA) <cit.> generate additional data by stitching together locally independent features of different transitions. Several works use a learned model to generate augmented data <cit.>. Most of these works focus on developing DAFs or frameworks for incorporating augmented data into learning and simply generate augmented experience in a random fashion. In contrast, GuDA focuses on the importance of sampling expert-quality augmentations. Two prior works are most closely related to GuDA: EXPAND <cit.>, which applies visual augmentations to irrelevant image regions identified by human feedback, and MoCoDA <cit.>, which allows users to specify a parent distribution to control the distribution of augmented data. GuDA differs from EXPAND in that we focus on non-visual tasks with more complex DAFs that are more relevant to robotics. In contrast to MoCoDA, GuDA is a model-free DA framework and can be applied when data is too scarce to model the data distribution, as is commonly the case in physical tasks. Moreover, GuDA provides a more intuitive interface for DA that enables fine-grained control of the distribution of augmented data.
§ PRELIMINARIES
In this section, we formalize the RL setting and the notion of a data augmentation function that we use in this work.
§.§ Offline Reinforcement Learning
Since our empirical analysis considers offline RL methods, we adopt notation from the RL literature and formalize a task as a sequential decision-making process with a known reward function. We note that a reward function may be unavailable for certain tasks; in such cases, our proposed GuDA framework can use BC instead. When one is available, we can use offline RL to attempt to improve over the demonstrator.
Formally, we consider finite-horizon Markov decision processes (MDPs) <cit.> defined by (S, A, p, r, d_0, γ), where S and A denote the state and action space, respectively, p(s' | s, a) denotes the probability density of the next state s' after taking action a in state s, and r(s, a) denotes the reward for taking action a in state s. We write d_0 as the initial state distribution, γ∈ [0, 1) as the discount factor, and H as the length of an episode. We consider stochastic policies π_θ : S × A → [0,1] parameterized by θ. The RL objective is to find a policy that maximizes the expected sum of discounted rewards J(θ) = 𝔼_π_θ, s_0 ∼ d_0[ ∑_t=0^H γ^t r(s_t, a_t) ]. In the offline RL paradigm, the agent cannot collect data through environment interaction and must instead learn from a static dataset of transitions collected by a different policy.
§.§ Data Augmentation Functions
In this section, we formally introduce a general notion of a data augmentation function (DAF). At a high level, a DAF generates augmented data by applying transformations to an input trajectory segment. These transformations often exploit task-specific invariances and symmetries relevant to many real-world tasks. More formally, let T denote the set of all possible trajectory segments and let Δ(T) denote the set of distributions over T. A DAF is a stochastic function f: T →Δ(T) mapping a trajectory segment ((s_i, a_i, r_i, s'_i))_i=1^k of length k to an augmented trajectory segment ((s̃_i, ã_i, r̃_i, s̃'_i))_i=1^k. We assume DAFs always assign the true reward to augmented transitions, i.e., r̃_i = r(s̃_i, ã_i). As in most prior works, we assume a user can specify a DAF f that exploits a symmetry or invariance in a given domain <cit.>.
§ GUIDED DATA AUGMENTATION
The difficulty in acquiring near-optimal demonstrations limits the applicability of LfD methods. While DA can inexpensively generate data from a limited set of prior data, most of the resulting augmented data is not expert-quality. To make augmented data more relevant for imitation learning and offline RL, we introduce Guided Data Augmentation (GuDA), a DA framework that uses a set of user-specified rules to automatically generate augmented data that resembles expert data. This approach thus shifts the burden from the user having to provide optimal actions for demonstrations to the user simply having to understand which augmented data represents progress towards task completion.
§.§ Method Overview
We assume access to a dataset of demonstrations and a task-relevant DAF f from which we can sample augmented data. Prior to offline training, GuDA generates an augmented dataset consisting of the original demonstrations plus n augmented samples generated by f. Afterwards, an agent learns from this augmented dataset using an off-the-shelf LfD algorithm. The core difference between GuDA and previous DA works (e.g., <cit.>) lies in how GuDA samples augmented data from f. In general, most augmentations capture highly suboptimal behaviors. Instead of sampling augmented data uniformly at random, as is commonly done in prior works, GuDA imposes a series of simple sampling rules to automatically generate expert-quality augmented data. A user can often identify such sampling rules using basic intuitions on how to solve a task. To illustrate how a user might identify sampling rules, consider a maze navigation task in which a legged robot must reach a fixed goal state from a fixed initial position (Fig. <ref>).
In this task, we assume access to a DAF that translates the agent to a new position and rotates the direction in which the agent moves. While it is difficult to demonstrate the precise sequence of leg movements required to optimally solve the maze, we can easily identify when an augmented version of an existing trajectory segment progresses the agent towards the goal. A randomly sampled augmentation from our translate-and-rotate DAF will most likely have the agent move away from the goal rather than towards it. Moreover, the agent only needs to learn suitable actions for a small fraction of maze positions near the shortest path to the goal, but our DAF will mostly generate data in regions of the maze that an optimal policy would never visit. To ensure we generate expert augmented data, we can simply restrict our DAF to (1) only sample new positions near the shortest path to the goal (green region), and (2) always rotate the agent so its displacement is closely aligned with the shortest path (green arrows). The exact specification of sampling rules for GuDA is a domain-specific process that depends on which DAFs are available as well as what task progress looks like in a given domain. In this work, we focus on navigation, manipulation, and autonomous driving tasks, which have intuitive notions of task progress; an agent makes progress if it moves closer to a specified goal location (navigation and driving) or if it moves an object closer to a specified goal location (manipulation). In the remainder of this section, we describe the DAFs we use and discuss how we can sample from these DAFs to ensure we only generate data that shows task progress.

§.§ Implementation

We focus on four DAFs that leverage invariances and symmetries common to many tasks in the physical world:

* Translate: Since the dynamics of agents and objects are often independent of their position, this DAF translates the agent and/or object to a new position.
* Rotate: Since the dynamics of agents and objects are often independent of their orientation, we can rotate the direction the agent and/or object faces to produce motion in a different direction.
* Reflect: An agent that moves to the left often produces a mirror image of an agent moving to the right, so we can reflect the agent's left-right motion.
* Relabel goal: In goal-conditioned tasks, dynamics are generally independent of the desired goal state <cit.>. Thus, we can replace the true goal with a new goal.

Table <ref> describes the tasks in our empirical analysis as well as the sampling rules we implement to automatically generate expert-quality data from combinations of these DAFs. We include the following simulated tasks: D4RL maze2d and antmaze locomotion tasks <cit.>, a parking task <cit.>, and a robot soccer task. We also validate GuDA on a physical robot soccer task, and we further discuss this task's sampling rules in Section <ref>. GuDA can in principle be implemented in many different ways and can be adapted depending on which DAFs are available. For instance, we found that the rotate DAF was helpful in maze2d but often harmed performance in antmaze.
Thus, in antmaze, we simply translate trajectory segments to relevant positions for which the original displacement direction represents significant task progress. Since offline RL methods perform better with noisy expert data <cit.>, we inject noise into our sampling rules. For instance, in maze2d tasks, all rotated trajectory segments align closely – but often not exactly – with the optimal direction of motion.

§ EXPERIMENTS

We design an empirical study to answer the following questions: (1) Does GuDA enable learning from a limited set of potentially suboptimal demonstrations? (2) Does GuDA yield larger returns than a random DA strategy?

§.§ Empirical Setup

We first evaluate GuDA on the simulated tasks described in Table <ref>. In all tasks, we start with a small initial dataset containing at least one successful – though not necessarily expert-level – demonstration (Table <ref>). These datasets often contain failures or suboptimal behaviors as well: maze2d datasets contain data in which the agent moves away from the goal, soccer datasets contain trajectories where the agent kicks the ball out of bounds, and parking datasets contain trajectories where the car fails to park at its designated goal. For maze2d and antmaze tasks, we hand-pick a small number of trajectory segments from the original `-v1' and `-diverse-v1' D4RL datasets, respectively. For the remaining tasks, we use pre-trained expert policies to generate demonstrations. We consider two baselines: a DA strategy that randomly samples augmented data (Random DA), and no augmentation (No DA). We generate 1 million augmented transitions and then perform offline learning with BC, TD3+BC <cit.>, and AWAC <cit.>. We train for 1 million policy updates and report the inter-quartile mean (IQM) return achieved over 10 independent runs <cit.>.

§.§ Simulated Experiments

Fig. <ref> shows IQM normalized returns for each algorithm in each task. GuDA almost always outperforms Random DA and No DA – and often by a large margin. For instance, GuDA yields returns 3x larger than the next best strategy (No DA) in antmaze-medium. GuDA with TD3+BC is also the only strategy that can solve antmaze-large with statistical significance. While Random DA is often beneficial in maze2d and soccer-sim tasks, it often performs worse than No DA in other tasks. For instance, Random DA harms performance with all algorithms in antmaze-umaze, with BC and AWAC in antmaze-medium, and with BC and TD3+BC in parking. Since BC mimics the provided data, it is understandable that Random DA may harm performance with BC. However, since offline RL algorithms can in principle learn from suboptimal data, these findings emphasize the importance of generating expert augmented data even for offline RL. We additionally investigate the effect of (1) the number of augmentations we generate and (2) the size of our demonstration dataset. As shown in Fig. <ref>, increasing the number of augmentations in general yields larger returns for both GuDA and Random DA, but GuDA can match Random DA's performance with far fewer augmentations. Moreover, Fig. <ref> shows that GuDA outperforms Random DA even when our initial dataset contains 50k transitions. Thus, GuDA can be beneficial even with abundant demonstration data.

§.§ Physical Experiments

We further evaluate GuDA in a physical robot soccer task in which a NAO V6 robot must score from the Easy and Hard initializations shown in Fig. <ref> and <ref>. The robot "kicks" the ball by simply walking into it.
The ball's movements appear highly stochastic; they depend on how the robot's feet contact the ball, and foot positions are not included as policy inputs. This stochasticity, coupled with noisy vision-based state estimation, makes this task notably difficult. We collect demonstrations using an expert policy pre-trained in a low-fidelity soccer simulator with simplified dynamics and perfect state estimation. Our demonstration dataset contains a single physical trajectory of the agent kicking the ball from the center of the field to the goal (Fig. <ref>). This demonstration is highly suboptimal, as the robot fumbled the ball and had to take a circuitous route to the goal. To apply GuDA, we first identify two task-relevant behaviors in our initial demonstration: (B1) the robot executing a tight turn to the ball, and (B2) the robot scoring with the ball away from the sideline (Fig. <ref>). We use GuDA to generate augmented trajectories that trace out the path an expert might take to successfully score: we translate and rotate B1 to demonstrate the agent approaching the ball at a favorable angle, and we translate, rotate, and reflect B2 to demonstrate the agent scoring with the ball away from the sideline. Because we use a physical demonstration, our augmented data accurately matches the task's true dynamics. We generate 1 million augmented samples using GuDA and Random DA and train agents using IQL <cit.> for 1 million policy updates. We also compare agents to the expert demonstrator (Expert). Table <ref> and Fig. <ref> show the success rate and IQM time to score for each agent.[We include videos of trained policies in our submission.] With the Easy initialization, GuDA scores faster and more frequently than Random DA and No DA. GuDA and expert policies have similar success rates, but GuDA also scores significantly faster than the expert. We attribute this speedup to the fact that the GuDA policy was trained on augmented data that matches the physical world's dynamics, whereas our expert policy was trained in a low-fidelity simulator. With the Hard initialization, only the GuDA agent can consistently score. Random DA and No DA policies always kick the ball out of bounds. Even the expert policy almost always fails. Our results show that GuDA not only outperforms Random DA but also enables an agent to surpass its demonstrator in a difficult physical task with just a single suboptimal demonstration.

§ CONCLUSION

In this work, we introduced Guided Data Augmentation (GuDA), a human-guided data augmentation (DA) framework which generates expert-quality augmented data without the expense of real-world task interaction. In GuDA, a user imposes a series of simple rules on the DA process to automatically generate augmented samples that approximate expert behavior. GuDA can serve as an intuitive way to integrate human expertise into offline learning from demonstrations; instead of requiring that an expert demonstrate a near-optimal sequence of actions to solve a task, GuDA simply requires the user to understand what augmented data represents progress towards task completion. Empirically, we demonstrate that GuDA outperforms a widely applied random DA strategy and enables offline learning from a limited set of potentially suboptimal demonstrations. Furthermore, we show how GuDA yields an effective policy in a physical robot soccer task when given a single highly suboptimal trajectory. Our findings emphasize how a more intentional approach to DA can yield substantial performance gains. The core limitation of GuDA is that it requires domain knowledge to specify sampling rules.
Since the sampling rules required to generate expert augmented data are task dependent, GuDA must be implemented separately for each task. Nevertheless, these rules can be derived from basic intuitions on what task progress looks like and are simple to implement. While our empirical analysis focuses on behavior cloning and offline RL, GuDA can in principle be applied to other learning methods – both offline and online. In future work, we intend to study how GuDA interacts with other learning methods such as inverse RL and online RL. Furthermore, given the effectiveness of DA, we plan to conduct a broader analysis investigating the most effective way to integrate augmented data into offline RL. Such an analysis would further strengthen the effectiveness of GuDA as well as other DA techniques.
http://arxiv.org/abs/2310.18247v1
{ "authors": [ "Nicholas E. Corrado", "Yuxiao Qu", "John U. Balis", "Adam Labiosa", "Josiah P. Hanna" ], "categories": [ "cs.LG", "cs.RO" ], "primary_category": "cs.LG", "published": "20231027163400", "title": "Guided Data Augmentation for Offline Reinforcement Learning and Imitation Learning" }
Here we present the cloud population extracted from M51, following the application of our new high-resolution dust extinction technique to the galaxy (Faustino Vieira et al. 2023). With this technique, we are able to image the gas content of the entire disc of M51 down to 5 pc (0.14"), which allows us to perform a statistical characterisation of well-resolved molecular cloud properties across different large-scale dynamical environments and with galactocentric distance. We find that cloud growth is promoted in regions in the galaxy where shear is minimised; i.e. clouds can grow into higher masses (and surface densities) inside the spiral arms and molecular ring. We do not detect any enhancement of high-mass star formation towards regions favourable to cloud growth, indicating that massive and/or dense clouds are not the sole ingredient for high-mass star formation. We find that in the spiral arms there is a significant decline of cloud surface densities with increasing galactocentric radius, whilst in the inter-arm regions they remain relatively constant. We also find that the surface density distribution for spiral arm clouds has two distinct behaviours in the inner and outer galaxy, with average cloud surface densities at larger galactocentric radii becoming similar to inter-arm clouds. We propose that the tidal interaction between M51 and its companion (NGC 5195) - which heavily affects the nature of the spiral structure - might be the main factor behind this.

galaxies: ISM – galaxies: spiral – galaxies: individual (M51) – ISM: clouds – dust, extinction

§ INTRODUCTION

Stars form in the cold and dense molecular phase of the interstellar medium (ISM) in galaxies. The mechanism (or mechanisms) that trigger and regulate star formation (SF) in galaxies is still not well understood. In particular, it is not clear if the galactic environment has a direct impact on a galaxy's ability to form stars. Locations in galaxies with a higher density of molecular gas (e.g. spiral arms) seem to also harbour a higher concentration of young stars, which implies a higher star formation rate (SFR) towards those regions <cit.>. One possible explanation for the higher SFR seen towards spiral arms is that the spiral arms themselves enhance SF. In a scenario first proposed by <cit.> and <cit.>, SF is triggered as the gas is compressed due to a shock that forms along the trailing edge of a spiral arm. Naturally, in this scenario, the "star formation efficiency" (SFE), or the SFR per unit gas mass, is higher in spiral arms than in less dense regions of galaxies <cit.>. On the other hand, the increase of SFR towards spiral arms may just be a byproduct of the higher surface densities observed in that particular galactic environment. In other words, the underlying gravitational potential of the spiral reorganises and gathers the gas together, with no direct effect on the process of SF <cit.>. If so, the observed SFE across galaxies should be effectively constant, which is in fact observed by several studies <cit.>.
Additionally, even if SF is not directly enhanced by the galactic environment, the large-scale dynamics may still play a critical role in regulating and disrupting SF across galaxies. Whether this dominates over other disruption mechanisms such as stellar feedback (and where this occurs in the galactic context) is still an active area of research <cit.>. Oftentimes, the molecular ISM of a galaxy is divided by astronomers into discrete structures known as molecular clouds (MCs), in order to better understand the initial conditions of SF. It is possible to investigate the link (or lack thereof) between the small, cloud-scale physics and the overarching galactic dynamics by analysing any systematic differences between MCs situated within different large-scale dynamical structures in galaxies (i.e. spiral arms, inter-arm regions, bars, etc.). In other words, by comparing the different cloud populations within galaxies, we can begin to understand if a galaxy's morphology has a direct impact on its ability to form stars. There have been many statistical characterisations of MCs in the Milky Way <cit.>, which benefit from the relatively small distances involved and thus achieve higher spatial resolution. Still, when it comes to linking MC properties and large-scale dynamics, Galactic studies are intrinsically limited given the difficulty of pinpointing locations of clouds within the context of the Galaxy <cit.>. Molecular gas observations in other galaxies do not suffer from this issue but instead are limited by sensitivity and resolution. However, with the advancement of instrumentation, extragalactic SF studies are now able to distinguish and resolve giant molecular clouds (GMCs), catapulting us into an exciting era of SF and ISM studies <cit.>. In Faustino Vieira et al. (2023; hereafter Paper I), we presented a new high-resolution dust extinction technique that utilises archival optical Hubble Space Telescope (HST) data to retrieve parsec-scale dust (and gas) surface density maps for entire nearby galaxies. In Paper I, we applied this technique to M51 as our test-case (briefly described in §<ref>). M51 (NGC 5194) is an excellent candidate for cloud studies, as it is nearby and face-on, with bright spiral arms. It is a galaxy with a vast amount of multi-wavelength ancillary data and observational studies <cit.>, as well as numerical simulations studying its evolution and dynamics <cit.>. Here, our high-resolution gas surface density map of M51 from Paper I is used to extract an extensive cloud catalogue (§<ref>). We analyse the properties of our molecular cloud sub-sample across large-scale galactic environments (§<ref>), as well as with galactocentric distance (§<ref>). We provide a summary of our findings in §<ref>.

§ DATA

In Paper I, we presented a novel technique which retrieves measurements of dust extinction along each line-of-sight for entire disc galaxies at parsec-scales, using archival HST optical data (F555W or V-band). A detailed description of the technique can be found in the original paper, but we present a brief overview of how the technique works here. Our high-resolution dust extinction technique is adapted from Galactic extinction studies conducted in the infrared (IR) <cit.>, which measure dust attenuation against a reconstructed, smoothly varying stellar light map, rather than determining the extinction from individual stars of similar spectral type. We construct this stellar distribution map by applying a sizeable median filter (∼600 pc) to the HST V-band image, after the removal of bright point-like sources.
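As a rough illustration of this background-reconstruction step, a minimal sketch assuming scipy is available is given below; the function and variable names are placeholders, and the in-painting of masked sources is deliberately crude rather than the exact procedure of Paper I.

    import numpy as np
    from scipy.ndimage import median_filter

    def stellar_background(v_band, point_source_mask, kernel_pix):
        """Approximate the smooth, unextincted stellar light distribution.

        v_band            : 2D array of HST F555W intensities (e-/s)
        point_source_mask : boolean array, True where bright point-like
                            sources were identified and must be removed
        kernel_pix        : median-filter width in pixels (~600 pc)
        """
        cleaned = v_band.copy()
        # crude in-painting: replace masked sources with the global median
        cleaned[point_source_mask] = np.nanmedian(v_band[~point_source_mask])
        return median_filter(cleaned, size=kernel_pix)

At the distance of M51 (7.6 Mpc) and on the native 0.049"/pix HST grid, a ∼600 pc kernel corresponds to roughly 330 pixels.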
Fundamentally, this extinction technique compares the observed V-band intensity of each pixel in the map against the intensity from the reconstructed stellar distribution, which mimics the total stellar light if there were no extinction. The attenuation caused by dust is measured through: τ_V = - ln[ (I_V - I_fg) / I_bg ], where τ_V is the optical depth of the HST V-band, I_V is the observed V-band intensity, and I_fg and I_bg are the foreground and background fractions, respectively, of the reconstructed stellar light model relative to the absorbing medium (i.e. dust). We assume that the attenuating dust sits in a layer near the galaxy's mid-plane in a "sandwich"-like geometry, and that the dust follows the radial profile of the stellar light. Our technique includes a calibration for the dust/stars geometry assumption, through comparison with Herschel Space Observatory <cit.> lower-resolution observations of dust emission <cit.>, so that our extinction dust mass estimates (at the 36" Herschel resolution) are consistent with those derived from dust emission. The measured V-band optical depth can be converted to dust surface densities (Σ_dust, in M_⊙ pc^-2) through a dust mass absorption coefficient for the V-band (κ_V): Σ_dust = τ_V / κ_V. In Paper I and in this work, we adopt κ_V = 1.786 pc^2 M_⊙^-1 from <cit.>. Additionally, we assume a dust-to-gas mass ratio of 0.01 to derive gas surface densities from the dust map. The reader is referred to Paper I for further details. Following our application of this extinction technique to M51, we obtained a gas surface density map of the galaxy at a spatial resolution of ∼5 pc (0.14"), with which we are able to study spatially resolved cloud populations across the galaxy. This statistical analysis of molecular clouds (MCs) in the different dynamical environments of M51 (and with galactocentric radius) is the focus of the present paper. We adopt an inclination of 22^∘ <cit.>, a position angle of 173^∘ <cit.>, and a distance of 7.6 Mpc <cit.> for M51.

§ CLOUD POPULATION FROM HIGH-RESOLUTION EXTINCTION METHOD

The gaseous content of galaxies is a multiphase continuum, and therefore "clouds" are not naturally occurring structures. Still, dividing the ISM into discrete clouds is a well-established technique that allows us to analyse the conditions of a galaxy's ISM in a statistical manner. In order to study how the properties of M51's ISM vary as a function of the galaxy's large-scale dynamics and galactocentric distance, we must first extract clouds from our extinction-derived surface density map (Σ).

§.§ SCIMES cloud decomposition

The gas surface densities derived from the technique outlined in Paper I are decomposed into discrete clouds using the SCIMES clustering algorithm (v.0.3.2)[<https://github.com/Astroua/SCIMES/>], initially described in <cit.>. The updated version of SCIMES we use here is detailed in <cit.>. SCIMES works on the dendrogram tree of the input image - building the dendrogram from our gas surface density map is therefore the first step in our cloud extraction process. A dendrogram <cit.> consists of three types of hierarchical structures: trunks or "ancestors", which are the lowermost structures in the hierarchy of the input map from which all other structures in the dendrogram stem; branches, which are the intermediate structures within the tree (i.e. structures that both have a parent and at least one child structure associated to them); and leaves, the structures at the very top of the hierarchy with no child structures associated to them.
In this study, we make use of the astrodendro implementation package[<http://www.dendrograms.org/>] to compute our dendrogram. astrodendro requires three initial inputs: min_value (the minimum threshold below which no value is considered when building the dendrogram), min_delta (the minimum difference between two peaks for the dendrogram to consider them as two separate, independent structures), and min_npix (the minimum number of pixels a structure must have to be considered an independent structure). To obtain the full dendrogram of our surface density map, we choose min_value = 2, min_delta = 9, and min_npix = 27 pix as our parameters. We choose a min_value slightly above 0 to help segment the most diffuse material into trunks of a manageable size (if taking min_value = 0, the larger trunks would span almost the entire map). Tests were conducted in several small regions of M51 to check the effect of this lower threshold - no significant differences were observed on the final SCIMES extraction (except on the exact position of the boundaries of the most diffuse clouds), since most clouds are segmented above this threshold. Our choice of minimum value dismisses only 4.4% of the total number of pixels in our map, which hold only 0.1% of the total mass. To ensure that all structures within our dendrogram are spatially resolved, we set min_npix to be roughly equal to the number of pixels equivalent to 3 resolution elements (∼9 pixels per resolution element). We tested different values of min_delta, from 3× the min_value (i.e. 6) to 6× the min_value (i.e. 12), in various small regions of our map and found no significant difference in the final selection of structures, suggesting that the segmentation outputs are not strongly impacted by the choice of dendrogram input parameters <cit.>. Using the dendrogram as a guide, SCIMES uses graph theory to perform spectral clustering and find regions with similar properties in emission (or, in our case, in surface density) <cit.>. In practice, SCIMES creates a graph that connects all leaves of the dendrogram (even those that do not have the same parent trunk) to build an affinity matrix that quantifies the relationship strength between the leaves. This process becomes extremely computationally (and memory) intensive when applied to large maps such as ours. To make cloud extraction more manageable, splitting the map into smaller sections is necessary. The most common way of doing this is to apply straight cuts to the data, which then requires dealing with clouds that touch those sharp edges separately <cit.>. We adopt a different approach and define "organic" masks using the trunk structures from the dendrogram, since these structures are at the bottom of the hierarchy and encompass all other structures present in the data. From the full dendrogram, we retrieve 29752 trunks in total. Ancestors that have just one child structure and ancestors that have no child structures (i.e. isolated leaves) cannot be clustered and therefore bypass the need to run the SCIMES clustering algorithm on them - they can directly be considered clouds. We then retrieve the masks of the remaining (3406) ancestor structures, and sort them into 4 horizontal strips of 0.03^∘ (∼2 arcmin), according to the Declination of their centroid position. This also allows us to create 4 non-overlapping sub-fields of our gas surface density map that, alongside the dendrogram, can be fed to SCIMES. For the cloud extraction with SCIMES, we opt to use the "radius" criterion for the clustering, with a user-defined scaling parameter of 90 pc (about two times the typical MW GMC size, e.g.
), to aid SCIMES in the identification of structures of a few tens of parsecs equally across the 4 fields and make the cloud extraction more robust[If left to decide the scaling parameter on its own, SCIMES works out the number of clusters to assign based on the contrast of the affinity matrices by default. As such, any given structure can change the way the dendrogram leaves are clustered depending on the dynamic range of the dataset. The dynamic range present within structures in the complex inner parts of M51 will be very different from the range present in the more diffuse outer parts. Therefore, in regions that span more hierarchical levels, the SCIMES extraction could potentially differ from that in the more "flat" regions (i.e. outskirts) without defining a common scaling parameter, making the clustering non-comparable between regions.] The SCIMES segmentation recovered a total of 25291 clusters across the 4 sub-fields. Including the smaller ancestors that were directly put aside from the original dendrogram, our full sample has 51633 clouds. We produce a catalogue with the properties of our full cloud sample as well as the cloud assignment map for M51, which are made available at <https://dx.doi.org/10.11570/23.0030>. The cloud properties held in our catalogue are detailed in Appendix <ref>.

§.§ Sub-sample of molecular clouds

Stars are known to form in the coldest and densest (i.e. molecular) phase of the ISM. To establish any link between SF and galactic dynamics it is, therefore, necessary to focus on the structures encompassed in the star-forming molecular gas. Our cloud catalogue (§<ref>) makes no distinction between atomic and molecular clouds since dust traces the total gas and we did not impose any restrictions on the cloud extraction itself. To retrieve a molecular sub-sample we must impose a surface density threshold above which we expect the ISM to be dominantly molecular. Consequently, we consider only the clouds with average surface density above 10 M_⊙ pc^-2 <cit.> as molecular clouds. To make sure the molecular clouds are well-resolved, we also impose that their footprint area be larger than 3 beams (∼27.75 pix, or an area A of roughly 90 pc^2). Finally, our technique works out the dust attenuation through comparison with a reconstructed, smoothed stellar background. Therefore, structures that are picked up in regions with a faint background are not likely to be as well-defined as clouds in areas where the stellar background is more robust. We adopt a robust background threshold of I_0=0.09 e^-/s (justification of this choice in Appendix <ref>). Each of these criteria has a corresponding flag in our full cluster catalogue: Molecular_cut, Size_cut, and Robust_bg (see Table <ref>). The resulting sub-sample of molecular clouds (which we will refer to as science sample from here on) contains 13258 molecular clouds, which are flagged in our full cluster catalogue with Molecular_cut=1, Size_cut=1, and Robust_bg=1. The bottom panel of Fig. <ref> shows the molecular clouds retrieved for a small section of M51 versus the full sample of clouds for the same region (top panel). The total mass in our extinction-derived gas map of M51 is M_gas=8.9×10^8 (±3.4×10^5) M_⊙[The calculated uncertainty on our total gas mass is derived from propagating the uncertainties of the opacity (and consequently mass) estimates for each pixel in our map (see Paper I).
This, of course, is only the "formal" error, and is likely a lower estimate as it does not account for uncertainties in the assumed distance to the galaxy, opacity law, or any other systematic errors and assumptions.]. If we consider only the predominantly molecular gas in our map of M51 (i.e. Σ > 10 M_⊙ pc^-2), we obtain a total molecular mass of M_mol=6.9×10^8 (±8.9×10^4) M_⊙. Our full sample of clouds encompasses ∼80% of the total gas in our map of M51, whilst our science sample holds ∼64%.

§ TRENDS WITH LARGE-SCALE ENVIRONMENT

It is unclear if SF is more efficient in particular regions of galaxies, such as well-defined spiral arms, or if the higher rates of SF seen towards certain regions are simply a natural consequence of material crowding in the arms. If SF is dependent on the environment, then we would expect the cloud populations of those regions to have systematic differences in their characteristics. Some studies report dissimilarities between their spiral arm and inter-arm populations <cit.>, whilst others detect no significant differences in the global properties of the cloud population <cit.>. The high resolution of our extinction map of M51 (∼5 pc) provides us with a unique opportunity to perform an in-depth statistical characterisation of MCs across different dynamical environments. In order to analyse the environmental dependency of MC properties across the entire disc of M51, we must first construct a mask with the different large-scale environments. We approximate the inter-bar and nuclear bar of M51 (NB) by a circle with galactocentric radius[The galactocentric radius, R_gal, is the deprojected distance to the galactic centre, accounting for the inclination and position angle of M51 (see App. <ref>).] R_gal < 0.85 kpc, and the molecular ring (MR) by a ring spanning 0.85 < R_gal < 1.3 kpc <cit.>. We make use of the M51 environmental masks <cit.> from the PdBI Arcsecond Whirlpool Survey <cit.> of the inner 5 kpc of the spiral arms (SA) and inter-arms (IA), and expand them to the full disc. This is done by using the extinction surface densities (convolved with a ∼16" median filter) as a guide to continue the spiral arms from the end of the PAWS mask until the edges of M51. Given that these masks were done mostly manually, they should not be taken as a strict or accurate definition of the spiral arm positions; instead, they serve as a means to provide all-galaxy statistical estimates. The resulting M51 environmental masks are shown in Fig. <ref>.

§.§ Surface density probability density functions

Using the CO(2-1) observations from the PHANGS-ALMA survey <cit.>, <cit.> and <cit.> both report higher gas surface densities towards the centres of galaxies, with a more pronounced increase for barred galaxies. This was attributed to gas inflows driven by bars. Using our higher-resolution surface density map, we investigate these trends in M51. Figure <ref> showcases the reverse cumulative distribution, or probability density function (PDF), of the gas mass surface densities for each environment (normalised by the number of pixels in each environmental mask). It is clear from the figure that the centre of M51 (NB + MR) hosts overall higher surface densities than the other regions, with the molecular ring in particular being the densest environment of M51, consistent with the results reported by the PHANGS-ALMA survey (as well as PAWS).
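For reference, per-environment reverse cumulative distributions of this kind can be computed with a short routine; the sketch below is our own illustration with placeholder names (sigma_map, env_mask), not the actual analysis code.

    import numpy as np

    def reverse_cumulative(sigma, n_bins=200):
        """Fraction of pixels with surface density above each threshold."""
        thresholds = np.logspace(0.0, np.log10(sigma.max()), n_bins)
        frac = np.array([(sigma > t).sum() / sigma.size for t in thresholds])
        return thresholds, frac

    # sigma_map: 2D array of gas surface densities (M_sun pc^-2)
    # env_mask : integer map with, e.g., 1=NB, 2=MR, 3=SA, 4=IA
    curves = {}
    for label, code in [("NB", 1), ("MR", 2), ("SA", 3), ("IA", 4)]:
        pix = sigma_map[(env_mask == code) & np.isfinite(sigma_map)]
        curves[label] = reverse_cumulative(pix)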
In fact, the median of the surface density distribution (Σ_gas, listed in Table <ref>) for the molecular ring is over twice as large as the spiral arm value, and over 3 times the IA value. The molecular ring is effectively a dynamical gas transport barrier where material can accumulate easily, and produce the high densities observed <cit.>. When compared to the MR, the nuclear bar Σ_gas distribution displays a lack of intermediate to high surface densities (80-150 M_⊙ pc^-2), hinting at some disruptive mechanism that is absent from the molecular ring (likely streaming motions/shear driven by the bar's potential). The Σ_gas distribution in the inter-arms shows a steady decline past the 10 M_⊙ pc^-2 molecular threshold, consistent with a diffuse region from which we would expect more atomic gas. In comparison, the spiral arms contain a much larger amount of low to intermediate surface densities (10-80 M_⊙ pc^-2), although with a steeper decline towards high surface densities.

§.§ Molecular cloud properties

Since MCs are not isolated and perfectly spheroidal structures, the complex dynamics of their surroundings will be reflected in the shape and size of the clouds. If there are systematic variations of cloud morphology (as well as mass) between large-scale environments, this could shine a light on the dynamics at play and their impact on the formation and evolution of clouds. In their study of GMCs in M51, <cit.> propose an evolutionary picture: the spiral arm potential well encourages the molecular gas to consolidate into massive giant associations, which are then stretched apart and fragmented into smaller, lower-mass, elongated structures as they exit the spiral arms and encounter intense shear <cit.>. This picture is supported by several other observational and numerical studies that report an abundance of filamentary objects in the inter-arms <cit.>, and high-mass objects in the spiral arms <cit.>. It is important to note the effect of resolution in these findings, however, since lower resolution can blend structures into massive associations, notably in crowded regions like the spiral arms. In another study of M51, <cit.> found that both shear driven by galactic dynamics and stellar feedback can be responsible for disrupting MCs, and consequently suppressing SF. More recently, <cit.> argue that early (pre-supernovae) stellar feedback mechanisms are the main driver of cloud disruption in galaxies. Determining which is the dominant process in SF regulation (shear or stellar feedback), and where in galaxies this occurs, is crucial to better understand cloud lifecycles and lifetimes, and their role in SF and in galaxy evolution <cit.>. With our dust extinction technique, we are able to break up large cloud associations and resolve MC structure in much more detail, and thus observe the impact of these disruption mechanisms on individual clouds. In this work, we examine the properties of MCs in search of systematic differences between large-scale environments, which would suggest a direct link between cloud-scale physics and galactic environment. Presently, we do not attempt to pinpoint the exact driver of different characteristics in cloud populations (i.e. driven by shear or stellar feedback), but our spatially resolved cloud catalogue does allow for such an exercise.
In future work, we plan to also analyse cloud properties with azimuth and as a function of distance to the nearest spiral arm, and also in relation to various SF tracers. The various cloud properties analysed in this paper are listed in Table <ref> (and further detailed in Appendix <ref>), with some of them also illustrated in Fig. <ref>. We find that the central region of M51 shows systematic differences from the characteristics of the MCs in the disc, with most of the analysed properties presenting higher median values in the centre. This suggests that M51's centre has a substantial impact on the formation and evolution of all of its MCs <cit.>. In particular, MCs located in the molecular ring tend to be denser, whilst in the nuclear bar they are more elongated (but equally massive). On the other hand, the spiral arm and inter-arm MC populations do not show significant differences in their statistics, with the exception of the average cloud surface densities and mass, where the SA median is slightly higher. In their simulation of an M51 analogue, <cit.> find similar trends in cloud properties; i.e. the central clouds show significantly different characteristics from the disc, whilst the SA and IA cloud populations seem very similar in their properties.

§.§.§ Mass and surface density

As can be seen in Table <ref>, our molecular clouds have a median mass of roughly 4×10^3 M_⊙ and a radius of about 9 pc. These are smaller clouds (in both size and mass) than those from the numerical work of <cit.>, with a median mass and radius of 2×10^4 M_⊙ and 16 pc, respectively. This could be due to the model's resolution limitations in the lower column density regime <cit.>, coupled with the specific prescription for the supernova feedback, which naturally leads to a lower number of MCs with smaller masses and radii in low-column regions such as the inter-arms. Nevertheless, we see similar trends in cloud properties between environments and the same range in cloud mass values (extending up to about 10^6.5 M_⊙) as <cit.>. Our MCs are also much smaller on average than the PAWS clouds <cit.>, where the median mass and radius are 7.6×10^5 M_⊙ and 48 pc, respectively. It is clear that comparing cloud masses between different studies is not the most informative exercise, as masses (and sizes) are heavily dependent on each study's definition of a molecular cloud and its boundaries, as well as on resolution limits, which might lead to beam smearing in crowded regions as well as to non-detections. In fact, given its 1" resolution (∼40 pc), the completeness limit of the PAWS catalogue, 3.6×10^5 M_⊙, is already much higher than our median cloud mass. Similarly, the resolution limitations imply a minimum effective radius of 20 pc for the PAWS GMCs, which is already over a factor 2 larger than our typical cloud radius (∼9 pc), meaning the average MC in our catalogue might go undetected in PAWS or appear unresolved within a beam area. Therefore, care is needed when comparing cloud catalogues, especially when comparing absolute values. In Fig. <ref> we present a comparison of average cloud surface densities for the cross-matches between the PAWS GMCs and our extinction-derived HST MCs. By definition, average cloud surface densities already account for cloud size (i.e. Σ_MC=M/A), which reduces the effect of different resolutions between studies, although it is not entirely removed, as we will address later.
We perform this cross-matching to ensure that we are only comparing clouds that roughly exist in the same space, so that the comparison is as fair as possible. The footprint masks of the PAWS GMCs <cit.> were deprojected into 2D masks and regridded to the HST native grid (0.049"/pix). We find that 1296 of the total 1507 PAWS GMCs have a spatial match with HST clouds, meaning our catalogue successfully matches with 86% of the PAWS catalogue. The remaining 211 unmatched PAWS GMCs are likely associated with clusters, which prevent any measurement of extinction in the relevant region (as detailed in Paper I). Out of the 4843 HST MCs in the PAWS FoV (highlighted in Fig. <ref>), only 35% (1700) match with at least one GMC in PAWS. This significant fraction of unmatched HST clouds is again a reflection of the differences in column sensitivity and resolution of the two catalogues: the unmatched HST clouds typically have lower average surface density (∼15.1 M_⊙ pc^-2) and thus are likely associated with CO-dark molecular gas, and are also too small (∼8.5 pc) to be resolved by PAWS. In addition to cross-matching, we also recalculate the average surface densities of PAWS clouds with a scaled CO-to-H_2 conversion factor. <cit.> employ the standard Galactic CO-to-H_2 conversion factor, X_CO=2×10^20 cm^-2(K km s^-1)^-1, when deriving their cloud masses (and surface densities) from CO luminosity. In Paper I, we found that the determination of X_CO is heavily influenced by the assumed dust model, and that assuming the Galactic X_CO overestimates the PAWS surface densities by roughly a factor 7 relative to our estimates. Adopting the scaled value of X_CO=3.1 (±0.3) ×10^19 cm^-2(K km s^-1)^-1 removes this discrepancy and makes the two studies comparable (see Paper I for more details). Once all these steps are taken to ensure the comparison of cloud properties between our catalogue and the one from PAWS is as fair as possible, we find that the median cloud surface densities (as well as observed trends between environments) are virtually identical for both catalogues (shown in Fig. <ref>). Still, from the figure we can see that the observed range of cloud surface density values for the PAWS GMCs is consistently larger than the HST range. Again, this is likely linked to resolution: in crowded areas, a larger beam might blend multiple clouds in the same line of sight, resulting in larger surface densities, whilst small-sized and more isolated clouds might get smeared within the beam, resulting in a "dilution" of the observed flux in a larger area (i.e. lower surface density).

§.§.§ Elongation

In order to evaluate the elongation of clouds, we employ two different methods of measuring aspect ratio. The first is a moments-based aspect ratio, AR_a/b, which is the ratio between a cloud's surface density-weighted semi-major axis (a) and semi-minor axis (b). The second metric is purely geometrical, based on the medial axis of the cloud, which is the longest continuous line connecting the points within a cloud furthest away from its external edges. The medial axis aspect ratio, AR_MA, is defined as the ratio between the length of the medial axis (L_MA) and twice the average distance from the medial axis to the cloud's edge (W_MA). For further details please refer to Appendix <ref>. Both aspect ratio metrics suggest the same trends between the spiral arms and inter-arms in that, although the values are higher overall for AR_MA, both environments seem to have equally elongated clouds.
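For illustration, both metrics can be estimated from a cloud's footprint mask as sketched below; this is our own sketch using scikit-image's medial-axis transform, and the moments shortcut, names, and pixel-scale handling are assumptions rather than the exact procedure detailed in the Appendix.

    import numpy as np
    from skimage.morphology import medial_axis

    def aspect_ratios(mask, sigma, pix_pc):
        """Return (AR_a/b, AR_MA) for one cloud footprint.

        mask   : boolean 2D footprint of the cloud
        sigma  : surface density map (used as weights for the moments)
        pix_pc : pixel size in parsec
        """
        # moments-based axis ratio from the weighted covariance matrix
        y, x = np.nonzero(mask)
        w = sigma[y, x]
        cov = np.cov(np.vstack([x, y]), aweights=w)
        eigvals = np.sort(np.linalg.eigvalsh(cov))[::-1]
        ar_ab = np.sqrt(eigvals[0] / eigvals[1])

        # medial-axis aspect ratio: skeleton length over twice the mean
        # distance from the skeleton to the cloud edge
        skel, dist = medial_axis(mask, return_distance=True)
        l_ma = skel.sum() * pix_pc                # crude skeleton length
        w_ma = 2.0 * dist[skel].mean() * pix_pc
        return ar_ab, l_ma / w_ma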
It is possible that any disparities between these two populations are only seen in the most extreme clouds (i.e. the tails of the distributions) rather than in the bulk of the population, which will be further examined in §<ref>. Both AR_MA and AR_a/b indicate that MCs in the nuclear bar are more elongated than anywhere else in the galaxy. Due to their non-axisymmetric nature, bars are known to drive gas inflows and produce intense shear <cit.>, likely stretching clouds apart (i.e. producing higher aspect ratios). The picture is more unclear when considering the molecular ring population; the overall trend with respect to the remaining large-scale environments is different for the two metrics (for AR_a/b, the MR clouds have the second highest median, but for AR_MA, the same clouds have the lowest median). The difference in trends between the two aspect ratio metrics employed here is not wholly unexpected. In a recent morphology study of the SEDIGISM clouds <cit.>, <cit.> note that measures of aspect ratio vary quite significantly depending on the methodology adopted. Using an aspect ratio as a proxy for cloud elongation is not straightforward, as it depends on the specific morphology of the clouds, and it should therefore be used with care (for an example see Appendix <ref>). It is beyond the scope of this paper to address this discrepancy between aspect ratios, but a more detailed cloud morphology study using RJ-plots <cit.> is within our future plans.

§.§ Cloud cumulative mass distributions

As highlighted in the previous section (§<ref>), although there are clear differences in the masses of MCs between the centre and the disc of M51, the medians of the distributions alone are not the most informative, particularly when analysing any potential differences between the IA and SA clouds. Therefore, we also analyse how cloud masses are distributed within each large-scale environment, by building a cumulative mass distribution. To obtain the mass spectra for our sample, we opt to exclude the clouds that include saturated pixels, since these clouds have more uncertain masses (see Paper I). Figure <ref> shows the mass spectra for the MCs in our science sample for the different M51 environments, normalised by the number of clouds in each environment. From the figure, it is possible to see that the central regions of M51 (NB and MR) have the highest concentration of high-mass clouds (M ≳ 10^5.5 M_⊙), followed by the spiral arms. There is also a sharp decline in high-mass objects for the IA - the MCs in this region seem to have predominantly low to intermediate masses. These findings agree with what was found, albeit at a lower resolution, by <cit.> in their GMC study of M51 using PAWS CO data, and more recently by <cit.> in their study of GMCs across PHANGS spiral galaxies. The trends seen in the MC mass spectra follow the same behaviour as the pixel-by-pixel surface density distributions (Fig. <ref>), and thus the cloud segmentation process used in this study is unlikely to be the cause of the cloud mass distributions seen here. The cumulative mass distribution can be fit with a simple power-law of the shape: N = (M / M_0)^(γ + 1), where N is the number of clouds with mass greater than M, M_0 is a reference mass, and γ is the index of the power-law.
However, given the steepening of the mass distributions seen at higher masses, we opt for a truncated power-law of the form: N = N_0 [ (M/M_0)^(γ + 1) - 1 ], with M_0 being the maximum mass of the distribution, and N_0 the number of clouds corresponding to the truncation mass, M_t = 2^(1/(γ+1)) M_0 (i.e. the point at which the mass distribution stops following a simple power-law; setting (M_t/M_0)^(γ+1) = 2 gives N(M_t) = N_0). The index of the truncated power-law informs us on how the mass is distributed: in massive cloud structures for γ > -2, and in smaller clouds for γ < -2. For the spiral arms and inter-arms, we fit the mass spectra with Eq. (<ref>) for masses greater than 10^5.5 M_⊙, which is the point from which the distributions seem to have a shape similar to a truncated power-law. We adopt a lower mass threshold of 10^5 M_⊙ for the nuclear bar and molecular ring due to the reduced number of clouds with masses higher than 10^5.5 M_⊙. The resulting parameters from the fits are listed in Table <ref>, and the fits themselves (both simple and truncated) are shown in Fig. <ref>. The global cumulative mass distribution of all the MCs in our sub-sample is very steep, with a fitted index γ < -2, which indicates that our M51 clouds are preferentially low-mass objects. We can again see in Fig. <ref> that the cloud population in the centre of M51 (NB + MR) has different characteristics from the disc (SA + IA), with very different slopes of the truncated fits. Both the nuclear bar and molecular ring present γ > -2, whilst the spiral arm and inter-arm fits have γ < -2, suggesting that clouds in the disc are typically low-mass, whilst MCs in the centre have larger masses, in line with our results from §<ref>. Although the nuclear bar truncated fit has the shallowest slope (indicative of a preference towards high-mass objects), the distribution itself does not extend to high masses (highest mass ∼9×10^5 M_⊙), suggesting that cloud growth is being hindered and/or that massive clouds are being destroyed in this region. This is likely a result of the complex dynamics and intense shear caused by the bar, although <cit.> also argue that the enhanced interstellar radiation field in M51's bulge could have an effect. On the other hand, the molecular ring also has a shallow spectral index, but its distribution reaches higher mass values (∼2×10^6 M_⊙), consistent with an environment that promotes cloud agglomeration. The IA cumulative mass distribution presents the steepest slope out of all the considered environments, indicating that the inter-arms host dominantly lower-mass MCs. Furthermore, the IA distribution extends up to a smaller mass relative to the spiral arm cumulative mass distribution, even though the two distributions are very similar in the low-to-intermediate mass range (<10^5.5 M_⊙, see Fig. <ref>). It seems that high-mass objects in the inter-arms either have difficulty forming or are destroyed quickly after formation. On the other hand, the spiral arm cumulative mass distribution reaches the highest mass among all considered environments (∼2.6×10^6 M_⊙), even though its slope is relatively steep. The SA therefore have favourable conditions for clouds to grow more massive, even though most of their population seems to be low-mass objects. From their simulations of an interacting galaxy, <cit.> also observe a steeper slope in their IA cumulative cloud mass distribution relative to the SA slope, with SA clouds reaching higher masses.
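For completeness, a minimal sketch of how such a truncated fit can be performed with scipy is given below; this is our own illustration rather than the fitting code used in this work, and the initial guesses are arbitrary.

    import numpy as np
    from scipy.optimize import curve_fit

    def truncated_cmf(m, n0, m0, gamma):
        """Truncated power-law cumulative mass function N(>M)."""
        return n0 * ((m / m0) ** (gamma + 1.0) - 1.0)

    def fit_mass_spectrum(masses, m_min):
        """Fit N(>M) for cloud masses above m_min (e.g. 10**5.5 Msun)."""
        m = np.sort(masses[masses > m_min])
        n_gt = np.arange(m.size, 0, -1)          # clouds above each mass
        p0 = (10.0, m.max(), -2.0)               # arbitrary initial guess
        (n0, m0, gamma), _ = curve_fit(truncated_cmf, m, n_gt, p0=p0,
                                       maxfev=10000)
        m_t = 2.0 ** (1.0 / (gamma + 1.0)) * m0  # truncation mass
        return n0, m0, gamma, m_t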
Additionally, the fitted index of their whole cloud population, γ = -2.39, is very close to the value we find (γ=-2.43).Interestingly, the fitted parameters (γ, M_0, and N_0) change quite significantly when fitting the mass cumulative distributions of only the MCs inside the PAWS FoV (i.e. clouds at smaller galactocentric radii, R_gal≲ 5 kpc), as can be seen from the bottom section of Table <ref>. Overall, MCs seem to be more massive inside the PAWS FoV than when considering the full galaxy, hinting at a radial trend in cloud mass (which will be analysed in more depth in §<ref>). Notably, the slope of the truncated fit for the inner spiral arms is much shallower than for the full arms, with γ > -2, whilst the index for the inner inter-arms is still γ < -2. §.§ Extreme cloudsAs evidenced by §<ref> and §<ref>, although the bulk properties of a galaxy's different cloud populations may be fairly similar, differences arise when analysing the tails of the distributions <cit.>. If "extreme" clouds (i.e. the clouds at the tail of the relevant distribution) are enhanced in certain large-scale galactic environments, then this points at physical processes that directly facilitate the formation of specific types of clouds in specific regions of the galaxy, which could then have a direct impact on SF. Figure <ref> showcases the spatial distributions of the top 100 most extreme clouds within the context of M51 for the different cloud properties considered: mass, average surface density, aspect ratio, and signatures of high-mass star formation. Table <ref> holds the expected cloud fractions according to the global distribution of MCs across M51 (i.e. number of clouds in an environment divided by the total number of clouds), as well as the fractions reported for the tails of some of the analysed distributions (i.e. number of extreme clouds in an environment divided by size of extreme sub-sample, N=100). If the environment has no direct role in dictating the existence of such extreme clouds, we would expect the fractions for the extreme clouds to reflect the global cloud fractions. In the sections below we analyse each set of extreme clouds in more detail, and put those in context with the expected trends as per other literature results.To determine if the distribution of our extreme clouds is significant, we conduct a Pearson χ^2 statistical analysis, which compares the observed distribution of a sample against a theoretical distribution and searches for similarities in frequencies. The χ^2 value is given by the below expression:χ^2 = ∑^n_i=1( O_i - E_i )^2/E_i,where n is the number of environments considered (i.e. NB, MR, SA and IA), O_i is the number of observed counts in environment i (i.e. number of clouds), and E_i is the number of expected counts within environment i for a sample of size N, such that E_i=f_i N with f_i representing the probability of a cloud belonging to environment i (i.e. the fraction of our science sample situated in each environment, listed in Table <ref>). Here, we use our molecular sub-sample as our theoretical distribution, and calculate the χ^2 statistics between our top 100 extreme clouds and the theoretical distribution. To test if the derived χ^2 values are statistically significant, we determine the likelihood (p_rnd) of obtaining our calculated χ^2 values if we randomly draw N = 100 clouds from our science sample (without replacement). To do so, we performed 100 000 random draws of N = 100 clouds, and determined the χ^2 value for each draw (Eq. 
<ref>) against the expected or theoretical distribution (i.e. our science sample). We build a cumulative distribution of the 100 000 derived χ^2 values to illustrate the likelihood of obtaining a certain χ^2 value from pure random sampling, as shown in Fig. <ref>. By comparing the χ^2 values of our extreme sub-samples to the values resulting from random sampling, we are able to determine how likely we are to retrieve the observed extreme cloud distribution from a random sampling of the global population, and therefore judge whether any observed differences are statistically significant. In cases where the likelihood is low, the large-scale environment may have a direct role in promoting those specific types of extreme clouds. The exact values resulting from the χ^2 statistics for our extreme clouds (listed in Table <ref>) should not necessarily be taken at face value, and should serve instead as a means to compare between the different sub-sets of extreme clouds. From this analysis, we can see that properties like the footprint area (A), the medial axis length (L_MA), and aspect ratio have the highest p_rnd values, suggesting that these properties mimic the general distribution more, while others like surface density have much lower p_rnd. In the following sections, we explore these trends in more detail, looking at extreme clouds in terms of their mass/surface density in §<ref>, elongation in §<ref>, and high-mass star formation in §<ref>.§.§.§ Most massive/highest surface densitySome observational studies of M51 have suggested that the spiral arms are the preferred location of the most massive MCs <cit.> - a natural consequence of spiral arms hosting more material, which increases the frequency of cloud-cloud collisions leading to the formation of high-mass objects <cit.>. In the previous section (§<ref>) it was already highlighted that the spiral arms seem able to form higher mass MCs than the inter-arms, despite having similar distributions in the low-to-intermediate mass range. It follows that when isolating the most massive MCs in our molecular sub-sample, the spiral arms boast a much higher number of these high-mass MCs than the inter-arms - a trend that does not follow the overall distribution of MCs across M51 and is therefore likely to be significant. Furthermore, a significant percentage of these extremely massive clouds reside in the molecular ring (a factor 4 more than what would be expected from statistics), a region also known to harbour an accumulation of material. The lack of high-mass MCs in the nuclear bar and inter-arms due to complex dynamics and shear was already seen and addressed in §<ref>. When looking at the bottom left panel of Fig. <ref>, it is clear that the densest MCs in our science sample prefer the spiral arms (an increase of roughly 63% relative to the cloud fraction expected from the overall statistics). Additionally, these extremely dense clouds are heavily concentrated towards the inner regions of M51, again hinting at some strong radial trends (further analysed in §<ref>). Moreover, there is an increase of extremely dense clouds in the molecular ring relative to the expected statistics. From the figure, these dense MR clouds mostly correspond to the beginning of the spiral arms of M51 within the ring. The densest clouds seem to mostly be located in crowded areas where intense shear is absent, which hints at a dependence of the dense gas mass fraction as a function of a large-scale dynamical environment. 
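As a sketch of our reading of this χ^2-against-random-draws procedure (with placeholder names: env holds the environment label of every cloud in the science sample, and extreme_env those of an extreme sub-sample of N = 100 clouds):

    import numpy as np

    def chi2_stat(observed, expected):
        return np.sum((observed - expected) ** 2 / expected)

    def p_random(env, extreme_env, labels, n_draws=100000, seed=0):
        """Likelihood of reaching the extreme sub-sample's chi^2 by chance."""
        rng = np.random.default_rng(seed)
        n = len(extreme_env)
        expected = np.array([np.mean(env == l) for l in labels]) * n
        observed = np.array([np.sum(extreme_env == l) for l in labels])
        chi2_obs = chi2_stat(observed, expected)

        chi2_draws = np.empty(n_draws)
        for i in range(n_draws):
            draw = rng.choice(env, size=n, replace=False)
            counts = np.array([np.sum(draw == l) for l in labels])
            chi2_draws[i] = chi2_stat(counts, expected)
        return np.mean(chi2_draws >= chi2_obs)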
For the most massive and highest surface density clouds we obtain χ^2 values of 25.6 and 230, respectively, with corresponding likelihoods p_rnd of 4×10^-4 and <10^-5. Both extreme sub-samples are therefore unlikely to be randomly drawn from our science sample, especially the highest surface density clouds. It is important to note that, although a small number of these extreme clouds have masses/surface densities that we do not necessarily trust due to saturation effects or observational limits (see Appendix <ref>), the trends we report remain the same when removing these more uncertain clouds from our analysis.

§.§.§ Most elongated

In their numerical study of GMCs in a two-armed spiral galaxy, <cit.> found that although the median properties of the inter-arm and spiral arm populations are similar in terms of aspect ratio, the most elongated MCs in their sample belong almost exclusively to the inter-arms. This could be suggestive of intense shear stretching massive MCs as they exit the spiral arms into the inter-arms <cit.>, or of disruption caused by stellar feedback <cit.>. We have seen from §<ref> that there are no significant differences in cloud elongation between the IA and SA populations when looking at the medians of either metric of aspect ratio. When looking at the top 100 most elongated MCs according to their AR_MA instead, the majority of highly elongated clouds are located in the inter-arms. However, using AR_a/b instead gives no significant increase of highly filamentary structures in the inter-arms. This discrepancy between the two metrics might be due to filamentary clouds that have a "curved" nature (e.g. ring-like), which would have a large AR_MA but a low AR_a/b (further discussed in Appendix <ref>). Clouds such as these might be potential "bubbles" driven by stellar feedback <cit.>. Both metrics report higher fractions of extremely elongated clouds in the nuclear bar than expected from statistics alone (a factor 2.5 increase for AR_MA and 5 for AR_a/b), reflective of the complex dynamical processes and intense shear seen towards that region. The molecular ring presents the most drastic difference between the two metrics (as was already pointed out in §<ref>), with the moment aspect ratio metric reporting a significant increase of extremely elongated clouds whilst the medial axis aspect ratio sees no increase at all. We derive a χ^2 value of 15.7 and p_rnd=0.004 for the extreme AR_MA sub-sample, and χ^2=61.7 and p_rnd<10^-5 for AR_a/b. The statistics suggest that these extreme sub-samples deviate from the theoretical distribution; however, the deviations seem to be driven predominantly by the nuclear bar of M51, where both metrics agree on a surplus of extremely elongated clouds. The discrepancies between AR_a/b and AR_MA, and in particular their different behaviour with different cloud morphologies (further discussed in Appendix <ref>), make it hard to draw any definite conclusions. In an attempt to isolate the truly elongated clouds, we instead retrieve the most elongated MCs from both metrics combined. To do so, we first standardise both distributions to make them comparable. We scale the AR_MA and AR_a/b distributions to both have a standard deviation of 1 and a mean of 0. Looking at the clouds with aspect ratio above 3σ in both rescaled distributions returns just 23 MCs - 9% in NB, 9% in MR, 30% in SA and 52% in IA.
If we relax the threshold down to 2σ, 88 MCs are considered and the percentages become 3% in NB, 8% in MR, 41% in SA, and 48% in IA, as shown in Table <ref>. In either case, there is no significant increase of highly elongated MCs towards the inter-arms, but the amount of extremely elongated clouds in the nuclear bar remains statistically significant. The molecular ring population still hosts a significant fraction of these extreme clouds relative to the expected distribution. The MR is a region known to have low shear <cit.>, so it could be that stellar feedback is the mechanism responsible for disrupting the MCs in this environment, although a more detailed cloud classification is needed to draw any definite conclusions. The distribution of our AR_scaled sub-sample has a χ^2 value of 15.3 and a likelihood p_rnd of 0.005. We also do not observe any trend of cloud size (either through equivalent radius or medial axis length) across the large-scale environment.

§.§.§ High-mass star forming

The highest mass and densest MCs in our sample are preferentially located in the spiral arms (and also the molecular ring), as shown in §<ref>. However, whether this enhancement of massive/dense clouds is then reflected in a different type of SF happening in those clouds is still unclear. For instance, if high-mass star formation (HMSF) requires a cloud reaching higher masses or densities, then environments with a surplus of massive/dense MCs will also have a higher frequency of clouds hosting HMSF compared to the statistical distribution of clouds in general. In particular, if HMSF sites are enhanced in spiral arms, then it may mean that SF is directly enhanced by the passage of the spiral density wave <cit.>, rather than just a byproduct of orbit crowding in spiral arms <cit.>. We thus investigate the HMSF potential for our sample of clouds, by using the empirical relation derived by <cit.> to define a surface density threshold above which clouds are potential hosts for HMSF. The original HMSF threshold in <cit.>, M[M_⊙]= 870 (R[pc])^1.33, was determined with the opacity law κ_λ = 12.1 (λ/250 μm)^1.75 cm^2 g^-1. In turn, our adopted opacity law is κ_λ = 21.6 (λ/250 μm)^2 cm^2 g^-1 from <cit.>, and thus we scale the HMSF threshold down to M[M_⊙]= 487 (R[pc])^1.33. The difference in dust mass from using either our specific opacity with a dust emissivity index of β=2 or the opacity employed by <cit.> with β=1.75 is only around 20%, a small difference given the uncertainties on the masses themselves. Figure <ref> displays the mass-size distribution of the clouds in our sample, with the solid red line representing the aforementioned HMSF threshold scaled to our adopted absorption coefficient. Around 15% of our science sample sits above the HMSF threshold (2022 out of 13258 MCs). Of these 2022 MCs, 3% belong to the nuclear bar, 6% to the molecular ring, 53% to the spiral arms, and the remaining 38% to the inter-arms. The molecular ring and spiral arm fractions resulting from adopting this single threshold for HMSF are significantly higher than what would be expected from the overall distribution (2% and 45%, respectively, see f in Table <ref>). This indeed suggests that MCs in the molecular ring and spiral arms could be more prone to hosting HMSF. In Fig. <ref> we also highlight known 8 μm sources in M51 from <cit.>, which are thought to trace the highly embedded and young stellar population of the galaxy.
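The rescaling of the threshold normalisation quoted above simply carries the ratio of the two opacities at the 250 μm reference wavelength (masses derived with a higher opacity are correspondingly lower); a quick numerical check, with the rescaled threshold then used as a cloud flag (Python; variable names are illustrative):

kappa_ref_old, kappa_ref_new = 12.1, 21.6      # cm^2 g^-1 at 250 micron
norm_old = 870.0                               # original threshold normalisation
norm_new = norm_old * kappa_ref_old / kappa_ref_new
print(norm_new)                                # ~487

def hosts_hmsf(mass_msun, radius_pc, norm=norm_new, index=1.33):
    # True if a cloud sits above the (rescaled) empirical HMSF threshold.
    return mass_msun > norm * radius_pc ** index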
These 8 μm cores have a typical diameter of 3" (barely above the 2.4" FWHM resolution of the data), which corresponds to a physical size of about 110 pc for M51. Given the distances involved as well as the physical sizes of these sources (much larger than our typical cloud), it is likely that these are tracing unresolved sites of clustered HMSF. As such, we use these 8 μm sources as HMSF signposts to determine the validity of an empirical surface density threshold. Using the catalogued central position of each 8 μm source from <cit.>, we create circular masks for each individual source with a 3" diameter. Cross-matching the source masks with the footprint masks of our MCs gives 509 matches out of 670. Over 100 sources are dismissed in this step: some fall outside the bounds of our map (the original catalogue includes NGC 5195), others are encompassed in diffuse clouds that are not considered in our molecular sub-sample, and others are not embedded anymore (i.e. young clusters also showing in the visible), leaving us unable to measure any visual extinction in that region. Out of these 509 sources, 169 match with only 1 MC, whilst the remaining 340 match with multiple of our MCs. In order to perform an environmental analysis, we choose to only keep the match with the closest cloud (i.e. shortest distance between centroid of source and centroid of cloud). The top panel of Fig. <ref> illustrates the exact source-cloud matches, whilst the bottom panel depicts the closest matches in the multiple-cloud cases. Our cross-matching results in 509 8 μm sources from <cit.> matching with 460 of our MCs (49 MCs have multiple associated sources, whilst the rest have unique, one-to-one matches), which are shown in Fig. <ref>. Out of these 460 MCs with an associated HMSF signpost, 279 are above the empirical HMSF line, whilst 181 are below. Adopting such a surface density threshold would cause us to miss roughly 65% of true positives (i.e. MCs with an associated 8 μm core that are nonetheless below the HMSF line). It is important to note that due to our source-cloud matching by proximity, some clouds may not be the true hosts of the 8 μm source, which will affect this fraction of missed true positives. Furthermore, while 8 μm emission can trace young clusters, it is not quite able to trace the younger and much more embedded young stellar objects present in the densest parts of MCs (i.e. tracers of "on-going" SF), and therefore our sample of HMSF signposts is by no means complete.

Although there is a concentration of potential HMSF signposts towards the upper right corner of both panels of Fig. <ref> (i.e. towards higher-mass objects), there is still a significant number of low-density and low-mass MCs that are HMSF candidates. In fact, of the highest surface density and highest mass clouds analysed in §<ref>, only 11 and 43, respectively, have an associated 8 μm source. Additionally, there seems to be an increase of clouds with an associated 8 μm source towards the molecular ring and the spiral arms, as shown in the bottom right panel of Fig. <ref> and also by the χ^2 and p_rnd values we obtain (23.1 and 10^-4, respectively), which we also noted from applying the <cit.> HMSF threshold. Despite this increase in HMSF signposts towards particular environments, from this analysis alone we are not able to distinguish between a higher star formation rate in more crowded regions (MR and SA) and an actual increase of star formation efficiency (i.e.
the environment itself has a direct impact on the star formation process, rather than just gathering star-forming material). Even though our HMSF signpost sample is not complete, it does seem that there is a complex interplay of effects leading towards HMSF rather than a simple density/mass threshold from which all clouds can start forming massive stars. It is worth noting that the HMSF threshold proposed by <cit.> was originally derived for infrared dark clouds, which are very high column density objects. Our data are much more sensitive to the lower end of column density, and therefore applying this threshold may not be particularly relevant or useful. This analysis will benefit from higher resolution mid-IR observations (e.g. from JWST) that are able to probe a younger stellar population that is too embedded to show at 8 μm with previous data for nearby galaxies.

§ TRENDS WITH GALACTOCENTRIC RADIUS

In the previous sections we have looked at whether galactic environments have a direct impact on the characteristics of their cloud population and consequently SF, and found that although the large-scale dynamics do shape cloud characteristics, there is no strong sign that SF efficiency is enhanced towards any environment in particular <cit.>. Non-axisymmetries in the gravitational potential (i.e. spiral arms, nuclear bars) cause the gas in a galaxy to continuously flow not just between large-scale environments, but also radially. Naturally, we would expect the distribution of the ISM to be heavily influenced by these flows. <cit.>, for example, find a factor 20 decrease of molecular mass surface densities from the centre to the outskirts of M51 (R_gal∼ 12 kpc). More recently, <cit.> also identify a trend of decreasing cloud masses towards larger galactocentric radii in their simulated MC population of an M51-like galaxy. We thus make use of our high-resolution dataset to analyse the distribution of several properties of our MC sub-sample as a function of galactocentric radius.

§.§ Radial profiles

Figure <ref> shows the radial profiles of the MC properties analysed in this paper, where M51 has been divided into 39 concentric bins of width 225 pc, with the exception of the first and last bin, which span 400 and ∼440 pc, respectively, given the lack of clouds seen at those radii. From the middle panels of the figure, we confirm that there is a general declining trend with galactocentric distance for both cloud mass and cloud average surface density, although the decline is less pronounced past R_gal=4 kpc. The sudden spike in cloud masses at around R_gal=8 kpc seems to be mostly due to a large group of MCs concentrated towards the end of the spiral arm leading up to NGC 5195. There is no obvious radial trend of cloud size either through equivalent radius or medial axis length (leftmost panels of Fig. <ref>), except in the first few bins corresponding to the nuclear bar, where clouds seem to be longer. Both metrics of aspect ratio (rightmost panels of Fig. <ref>) remain fairly constant at all radii, apart from a slight increase for the first radial bins again corresponding to the nuclear bar.

§.§ Radial profiles per large-scale environment

Simple 1D radial profiles average different environments together; looking instead at the same radial bins but within each environment separately will highlight any interesting signatures that might otherwise get washed out by the mixing with other environments.
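The radial profiles described here amount to binning the cloud catalogue in galactocentric distance and taking a representative statistic per bin; a simplified sketch (Python; uniform 225 pc bins for illustration, whereas the first and last bins of the actual profiles are wider, and r_gal, sigma_mc and env are hypothetical catalogue columns):

import numpy as np

def radial_profile(r_gal_kpc, values, r_min=0.4, r_max=9.2, width_pc=225.0):
    # Median of a cloud property in concentric galactocentric bins.
    edges = np.arange(r_min, r_max, width_pc / 1000.0)
    idx = np.digitize(r_gal_kpc, edges)
    centres = 0.5 * (edges[:-1] + edges[1:])
    med = np.array([np.median(values[idx == i]) if np.any(idx == i) else np.nan
                    for i in range(1, len(edges))])
    return centres, med

# Per-environment profiles:
# for e in ("NB", "MR", "SA", "IA"):
#     r_c, prof = radial_profile(r_gal[env == e], sigma_mc[env == e])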
Figure <ref> illustrates the average cloud surface density and medial axis aspect ratio for the separate galactic environments of M51 with galactocentric distance. The remaining properties from Fig. <ref> do not show significant changes, apart from cloud mass which has a similar trend to Σ_MC. The Σ_MC radial profiles of the separate large-scale environments have very distinct features (shown in the top panel of Fig. <ref>). As can be seen from the figure, towards the inner galaxy there is a sudden drop of Σ_MC at ∼1.7 kpc in the spiral arms (also present when building a radial profile of each pixel's surface density within our mask of the spiral arms), and it coincides with a known region of little to no SF <cit.>. In their kinematic study of M51 using PAWS data, <cit.> find inflowing non-circular motions driven by the start of the spiral arms between 1.3 < R_gal < 2 kpc, coinciding with our dip in Σ_MC for the spiral arms. Additionally, <cit.> find a deviation from a pure spiral pattern caused by two dominant arms (m = 2 mode) for 1 < R_gal < 2.2 kpc <cit.>, which could increase the streaming motions of the gas, depleting the available reservoir at those radii and lowering the observed densities. Once reaching the molecular ring, cloud surface densities in the spiral arms seem to rise again, likely from gas being stalled against the MR dynamical barrier. There is also little to no SF detected for the inner ∼ 750 pc of M51, where peculiar motions driven by the bar are dominant and heavily disrupt and disperse the gas <cit.>.As shown in the top panel of Fig. <ref>, the distribution of Σ_MC for the inter-arms is fairly constant across radial distance, meaning that the declining trend seen for the global profile is indeed driven by the bar and spiral arms of M51. Additionally, the tentative flattening of cloud densities past R_gal∼4 kpc witnessed in Fig. <ref> is much more pronounced when looking at the spiral arms, and the phenomena causing it does not seem to affect the inter-arms. To further investigate this shift in behaviour, Figure <ref> highlights the differences in properties of the cloud populations in the inner (R_gal<4 kpc) and outer (R_gal>4 kpc) galaxy, for both the spiral arms and the inter-arm regions. As was already seen in the radial profiles, it is clear that MCs in the inner spiral arms are much denser than IA clouds at the same radii, whilst the average density of both populations is similar at larger galactocentric radii (top panel of Fig. <ref>). The same trend is seen for cloud mass, although less pronounced. The most elongated clouds in the inner galaxy seem to develop in the inter-arms (since the upper part of the inner IA violin plot is more populated, shown in bottom panel of Fig. <ref>), whilst at larger radii the SA and IA distributions are virtually identical.The clue to this behaviour may lie in the nature of the spiral arms of M51. If M51 was composed of a single quasi-stationary density-wave with a fixed pattern speed, we would expect to see enhanced surface densities/masses throughout the entire spiral arms (relative to the inter-arm regions), since the gas would be harboured and compressed in the strong spiral gravitational potential well generated by the density wave <cit.>. This behaviour is indeed similar to what we see in the top panel of Fig. <ref> for R_gal<4 kpc, but not so much for the outskirts of the galaxy. 
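The contrast between the inner and outer populations discussed above can be quantified by splitting the catalogue at R_gal = 4 kpc and comparing the quartiles of each environment's distribution (the basis of the violin plots); a sketch (Python; hypothetical column names):

import numpy as np

def inner_outer_quartiles(r_gal_kpc, values, env, split_kpc=4.0):
    # 25th/50th/75th percentiles of a property for SA and IA clouds,
    # inside and outside the split radius.
    out = {}
    for e in ("SA", "IA"):
        for tag, sel in (("inner", r_gal_kpc < split_kpc),
                         ("outer", r_gal_kpc >= split_kpc)):
            m = sel & (env == e)
            out[(e, tag)] = np.percentile(values[m], [25, 50, 75])
    return out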
For a density-wave type of pattern, we would also expect to observe newborn stars within the spiral arms and increasingly older stars as you move along in azimuth (i.e. a stellar age gradient), which again is observed in M51 by some studies <cit.>, but not by others <cit.>. In fact, several studies, both numerical and observational, argue against a fixed pattern speed in M51, and thus a single density-wave type of pattern <cit.>. Instead, the spiral structure of M51 seems to have a more transient nature, which evolves dynamically with time as a function of the tidal interaction with its companion NGC 5195 <cit.>. The top panel of Fig. <ref> suggests that the gas in the spiral arms of M51 has two distinct behaviours. In the inner galaxy (R_gal<4 kpc), the spiral arms boast much higher average cloud surface densities relative to the inter-arm regions, similar to the expected behaviour driven by a density-wave type of pattern which promotes a higher frequency of massive SA MCs. On the other hand, in the outer galaxy (R_gal>4 kpc), cloud surface densities are very similar for both SA and IA. This change in behaviour occurs at around the same radii for which <cit.> and <cit.> find significant changes in torque signs (at 3.8 kpc) and potential-density phase shifts (at 4.1 kpc), respectively, which the authors attribute to a co-rotation of the spiral pattern with the gas. Given that there is substantial evidence that M51 does not have a single pattern speed (as mentioned above), the notion of co-rotation becomes more complex; still it is clear that there is a sharp change in behaviour at this radius. It seems that, even though the spiral pattern is not rotating at a fixed speed in the inner galaxy, the gas is still rotating faster than the spiral arms, meaning that the gas feels the compression due to the passage through the spiral arm as it would on a density-wave type of pattern. As mentioned above, at large galactocentric radii (R_gal>4 kpc) the SA cloud surface densities become more comparable to the inter-arms, suggesting that past R_gal=4 kpc the spiral pattern and the gas are nearly co-moving. In other words, the outer spiral arms seem to be generated by local gravitational instabilities and behave more like material arms rather than a density-wave <cit.>, which is likely due to the influence of the tidal interaction. The outer spiral arms in M51 are therefore unable to drive the same density enhancement seen in the inner arms <cit.>, since it seems that at large R_gal the gas does not have enough time to cross the bottom of the spiral potential well given both the larger gas crossing times between the arms in the outskirts of the galaxy and the fact that the outer spiral arms seem to evolve at a much quicker rate relative to the inner spiral arms. Additionally, due to the weaker gravitational potential, the outer arms are less protected against shear thus resulting in their "fractured" appearance (as can be seen from the environmental mask in Fig. <ref>). In the shear-dominated inter-arm regions, we would not expect the gas to be much affected by the tidal interaction. We thus hypothesise that the sharp change in behaviour for the spiral arms at R_gal=4 kpc is due to the dynamics of the interaction of M51 with its companion. Additionally, in M51 SF occurs mostly on the convex side of the spiral arms at 2 < R_gal < 3 kpc <cit.>, where we also find a peak in the average cloud surface density. 
The surrounding areas, namely within the inter-arm region, are likely to be affected by the feedback from these SF events, potentially leading to cloud disruption which could result in higher aspect ratios. For this region, there is a clear difference in the SA and IA medial aspect ratio profiles shown in the bottom panel of Fig. <ref>, where MCs in the inter-arms have higher aspect ratios than their counterparts in the SA. This could be consistent with stellar feedback disrupting the IA MCs, but could also be attributed to the strong shearing motions at these radii splitting clouds apart <cit.>. Furthermore, the shaded areas in the bottom panel of Fig. <ref>, which represent the interquartile range of cloud aspect ratios, seem to have different peaks depending on galactocentric radius. In the outer galaxy, clouds with high aspect ratios appear to be evenly distributed between SA and IA, but this is not the case at smaller galactocentric radii. For R_gal<4 kpc, the majority of the highly elongated clouds seem to reside in the inter-arms, meaning that at these radii the inter-arms are more prone to develop the most elongated structures within our sample. This finding agrees well with the previously presented framework <cit.>: in the inner galaxy, where the pattern resembles a density-wave, the stronger spiral potential will protect clouds from intense shear within the arms but not in the inter-arm regions, leading to a higher frequency of fragmented/stretched clouds in the IA. This also explains why we do not find a surplus of extremely elongated IA clouds in §<ref>, since we take the top 100 elongated clouds over the entire sample, effectively losing any effect the different spiral patterns may have on the clouds at different galactocentric radii. To draw any firm conclusions, a more rigorous quantification of the shear and feedback in these regions is needed, as well as a more robust classification of truly filamentary clouds (as previously discussed in §<ref>). This will be the focus of future work.

§ SUMMARY AND CONCLUSIONS

In <cit.> we presented a new high-resolution extinction mapping technique, with which we mapped the gas content of M51 (NGC 5194) at a spatial resolution of 0.14" (∼ 5 pc). Here, we extract clouds from our gas surface density map using <cit.>. We compile a catalogue for all the identified clouds in M51 with measurements of several physical properties, which we release with this paper alongside all the footprint masks for each structure. With that catalogue we then analyse the sub-sample of molecular clouds across the galaxy, in search of any evidence of how their properties might be affected by the large-scale galactic environment as well as by galactocentric radius (and the combination of the two). Our findings can be summarised as follows:

* We find that molecular clouds residing in the centre of M51 show distinct differences from the disc population. Average cloud sizes, masses, surface densities, and aspect ratios (mostly within the nuclear bar) are higher in the inner few kiloparsecs of M51 than for the disc.

* We fit truncated power laws to the cumulative cloud mass distribution within each large-scale dynamical environment of M51. We find that the gas in M51 is preferentially organised into low-mass clouds in the disc and high-mass clouds in the centre. Additionally, the spiral arms and molecular ring host the highest concentration of high-mass clouds, whilst the inter-arms and nuclear bar distributions show a sharper decline towards higher masses.

* We isolate the most extreme clouds in our science sample with the purpose of ascertaining if a given cloud property is particularly enhanced towards a specific environment within the galaxy. We find no obvious enhancement of extremely large clouds (in both area and length) in any large-scale environment. On the other hand, there is a surplus of extremely elongated clouds in the nuclear bar region of M51. Additionally, the most massive and highest surface density clouds in our science sample show a clear preference for the molecular ring and spiral arms, suggesting that these environments host beneficial conditions for cloud growth.

* Although we detect an increase of high-mass star formation (as traced by 8 μm sources from <cit.>) towards the spiral arms and molecular ring of M51, we are not able to determine if the higher star formation rate is simply due to crowding or an actual increase of star formation efficiency. We also find that assuming a surface density-mass threshold as an indicator of the ability of a given cloud to form stars appears to be an oversimplified approach that does not capture the complicated interplay of effects at play. Although the SF analysis performed in this paper is very simplified, it nonetheless seems to agree with more in-depth star formation rate/efficiency studies, which find little evidence for enhanced star formation efficiencies in spiral arms <cit.>.

* There is no apparent trend between the galactocentric radius and cloud elongation or size for the disc of M51 when considering the entire population of clouds (without splitting into environments). There is, however, a declining trend towards the outskirts for both cloud mass and average cloud surface density.

* When using the 2D positional information to analyse the properties of clouds as a function of galactocentric distance for each environment separately, we find that although the average surface densities of the inter-arm molecular cloud population remain constant with galactocentric radius, the spiral arm clouds show a different behaviour at small and large radii. In fact, for R_gal<4 kpc, there is a clear contrast between cloud surface densities of the inter-arms and spiral arms, whilst at larger radii they have similar radial profiles. Additionally, at small R_gal, the most elongated (i.e. highest aspect ratio) clouds seem to mostly belong to the inter-arms.

* We find a sudden dip in surface densities at roughly 1.7 kpc in the spiral arms, where <cit.> detect an increase of non-circular motions driven by the start of the spiral arms and a potential perturbation in the spiral pattern <cit.>. For this radial region, we also observe higher cloud aspect ratios in the inter-arms than in the spiral arms.

Non-axisymmetric features (i.e. stellar bar, spiral arms) in M51 exert a substantial influence on how the gas is organised across the galaxy. There is a clear difference in characteristics between the cloud populations of the centre and the disc of M51. Peculiar motions driven by the nuclear bar heavily disrupt the clouds in that region, preventing and/or destroying higher mass objects and stretching out clouds, which is reflected in their high aspect ratios.
Shearing motions (driven by the differential rotation of the gas) seem to have a similar effect in the inter-arms, although the observed characteristics of the inter-arm clouds could also be caused by stellar feedback. A more reliable quantification of cloud morphology is needed in order to distinguish the linearly elongated clouds driven by shear from the more distorted/ring-like clouds potentially associated with feedback regions. Nonetheless, in environments where shear is low (i.e. molecular ring and spiral arms), gas is allowed to accumulate, resulting in the development of higher mass/density clouds. Additionally, we find that the tidal interaction between M51 and its companion has a strong influence on the cloud population of the spiral arms, but a minimal effect (if any) on the inter-arm clouds. At small radii, the spiral pattern resembles a density-wave type of pattern, where the strong spiral potential piles material up, and increased cloud-cloud collisions drive cloud masses up in the arms. Consequently, MCs in the inner spiral arms show enhanced surface densities/masses relative to their counterparts in the inter-arm regions. At large radii, where the tidal interaction seems to have a stronger influence, the spiral arms are evolving on a much shorter time-frame and appear to be driven by local gravitational instabilities, which affects both the gas and the stars similarly. Consequently, the outer spiral arms are less able to promote cloud growth, resulting in the similarities seen between the inter-arm and spiral arm molecular cloud populations at those radii.

This study demonstrates the power of large-number statistics on resolved cloud populations, as well as wider coverage across entire galaxies, in unravelling the potential effects of the environment on the formation and evolution of clouds. The spatially resolved information we obtain from our extinction-derived gas surface densities <cit.> allows for cloud-scale studies to be conducted across not only various galactic environments, but also across different galaxy types. Such exercises are fruitful in developing our understanding of SF as a galactic-driven process, and in learning which mechanisms hinder or enhance the formation of stars (and where this occurs), which naturally has repercussions for the evolution of galaxies.

§ ACKNOWLEDGEMENTS

We thank the anonymous referee for their comments and suggestions, which have helped improve the manuscript. HFV and ADC acknowledge the support from the Royal Society University Research Fellowship URF/R1/191609. TAD, NP, MWLS and MA acknowledge support from the UK Science and Technology Facilities Council through grants ST/S00033X/1 and ST/W000830/1. The calculations performed here made use of the computing resources provided by the Royal Society Research Grant RG150741. MQ acknowledges support from the Spanish grant PID2019-106027GA-C44, funded by MCIN/AEI/10.13039/501100011033. HFV acknowledges Sharon Meidt for the use of the PAWS environmental mask. Based on observations made with the NASA/ESA Hubble Space Telescope, which is operated by the Association of Universities for Research in Astronomy, Inc. (Program #10452). DustPedia is a collaborative focused research project supported by the European Union under the Seventh Framework Programme (2007-2013) call (proposal no. 606847).
The participating institutions are: Cardiff University, UK; National Observatory of Athens, Greece; Ghent University, Belgium; Université Paris Sud, France; National Institute for Astrophysics, Italy and CEA, France.

§ DATA AVAILABILITY

With this paper, we release the full catalogue of all clouds extracted with , and the respective cloud masks, at <https://dx.doi.org/10.11570/23.0030>, as well as on the FFOGG (Following the Flow of Gas in Galaxies) project website (<https://ffogg.github.io/>).

§ CLUSTER PROPERTIES AND CATALOGUE

Alongside this paper, we make available the complete catalogue[<https://ffogg.github.io/ffogg.html>] of all the clouds extracted from our high-resolution extinction map of M51 using  and  (a description of the cluster extraction is given in §<ref>). Table <ref> specifies all the cluster properties contained in our catalogue.

§.§ Coordinates

The right ascension and declination of each cloud's centroid (RA_deg and Dec_deg, respectively) were estimated by  when building the full dendrogram of our map. The galactocentric distance, R_gal, is estimated between the centroid position of each cloud and the centre of the galaxy. The galaxy's centre position is determined from the PAWS environmental mask. R_gal already takes into account M51's position angle and inclination <cit.>.

§.§ Geometrical properties

From our full dendrogram,  also computes the area of the ellipse encompassing each cloud (Area_ellipse), the exact footprint area of a cloud (Area_exact), the semi-major and semi-minor axis of a cloud (Major_axis_a and Minor_axis_b, respectively), and the cloud's position angle (PA, measured counter-clockwise in degrees from the +x axis in pixel coordinates). Using the exact footprint area of each cloud we compute its equivalent radius, R_eq, which is calculated assuming that the cloud is a circle such that R_eq = √(A / π), where A is the exact area of the cloud. Determining the aspect ratio of a cloud gives a basic estimate of the cloud's morphology: MCs with aspect ratio close to unity are circular, and MCs with high aspect ratio are elongated. The first aspect ratio we consider is the intensity-weighted moment aspect ratio, AR_a/b, defined as the ratio between a cloud's semi-major axis (Major_axis_a) and semi-minor axis (Minor_axis_b). The other aspect ratio metric we use is the medial axis aspect ratio, AR_MA. The medial axis is the longest running spine of a cloud's mask that is also the furthest away from the external edges of the cloud (all holes within a cloud are filled before the calculation). It is not weighted by intensity, and is a purely geometrical approach. The medial axis is found by extracting the "skeleton" of the cloud (i.e. reducing the cloud to its filamentary structure). AR_MA is then set as the ratio between the medial axis length, L_MA, and the medial axis width, W_MA, such that: AR_MA = L_MA / W_MA. L_MA is simply the length of the determined medial axis, and W_MA is twice the average distance from the medial axis to the cloud's external edge. The process of retrieving the medial axis fails when a cloud is too small (not enough pixels to erode away until only the skeleton remains); we set AR_MA to 1 for these cases. We do not attempt to retrieve filamentary structures for "fluffy", diffuse clouds - i.e. clouds that do not pass our robust background cut (further explained in the following paragraphs) - in order to economise computational time; AR_MA is set to 0 for these.

§.§ Masses and surface densities

The total "flux" of a cloud (i.e.
the sum of each pixel's gas mass surface density within a cloud, Sigma_tot) is computed byusing the bijection paradigm (see ). The average surface density of each cloud, Sigma_avg, is then estimated by taking the total sum of surface densities within the cloud (Sigma_tot) and dividing it by the cloud's footprint area (Area_exact). Similarly, the peak surface density for each entry in the catalogue, Sigma_peak, is simply the highest surface density observed within a cloud. The mass of the cloud, Mass, is then estimated as M = Σ_avgA (i.e. average surface density of cloud multiplied by its area).In Paper I, we quantified the uncertainty of our opacity estimates through 10^4 Monte Carlo realizations for each pixel in our gas surface density map. Our science sub-sample, which holds only clouds with average surface density above 10, has a maximum relative uncertainty of 45%. Above 14 (the median cloud surface density across our molecular sub-sample, see Table <ref>), the maximum relative error drops below 30%. It is also possible to obtain the relative uncertainty of masses and surface densities for each cloud in our catalogue. We compute the ratio between the total absolute error of the cloud (i.e. sum of the Monte Carlo mass/surface density uncertainties of each pixel inside the cloud in quadrature) and the total mass/surface density of the cloud. Each cloud's relative error on the mass and surface density is listed in the catalogue under Rel_err. In Paper I, we also determined the maximum surface density we are able to measure reliably given photometric noise, which has little impact in our cloud catalogue. In fact, out of the 13258 clouds that constitute our molecular sub-sample, only 27 MCs have more than 30% of their area containing pixels where the surface density exceeds the maximum measurable surface density.§.§ Additional Tags In our analysis we only consider a subset of our full sample where we are more certain the clouds are real and dominantly molecular. As described in §<ref>, we consider only clouds that have a footprint area bigger than 3 resolution elements (flagged with Size_cut=1), are above the molecular surface density threshold, Σ> 10 (Molecular_cut=1), and are against a robust stellar background (Robust_bg=1). The last flag is necessary because our technique retrieves extinction features through comparison with a modelled stellar distribution. Consequently, in regions where the stellar distribution is faint, the structures seen in extinction might not be real and are instead artefacts of our choice of background. The cloud shown in Fig. <ref> (ID: 3) is an example of such a structure. Although its average surface density is above our molecular threshold (Σ_avg∼ 10.9) and its size is above 3 resolution elements (A ∼ 6.4×10^3 pc^2), it is not likely to be a real molecular cloud. In fact, almost 41% of the pixels within this object have a measured surface density above the maximum surface density we can reliably measure (as explained in §<ref>). This cloud borders the edge of M51 where there are not many stars that allow us to retrieve a reliable estimate of the stellar distribution (I_0), which is instrumental for our extinction technique (see Paper I for details). We therefore apply a robust I_0 cut to rule out these diffuse structures. Figure <ref> shows the original HST V-band image with a choice of I_0 contours overlaid. Taking too large of a cut (e.g. 
I_0 = 0.1 e^-/s, shown in green) rules out faint regions within the galaxy itself (which may be real), not just in the outskirts. Taking too small a cut (e.g. I_0 = 0.08 e^-/s, shown in blue) will not sufficiently exclude all faint locations. Our adopted I_0 threshold (I_0 = 0.09 e^-/s, shown in red) seems like an adequate choice of cut, where most of the galaxy is still considered and the regions without much stellar light are dismissed.

§ CAVEATS IN ASPECT RATIO METRICS

It is useful to systematically study cloud morphologies, as the shape of a cloud may be linked to or dictated by the dynamics of the surrounding medium. The simplest technique often employed is determining a cloud's aspect ratio. In a simplistic view, an aspect ratio would allow us to distinguish between "spherical" clouds and "filamentary" clouds. One way to estimate the aspect ratio of a cloud is through its moments, where the structure is approximated by an intensity-weighted ellipse and the semi-major and semi-minor axes are then determined (a and b, respectively), with the aspect ratio then being defined as AR_a/b = a / b. Another way to estimate the aspect ratio of a cloud is through the medial axis, AR_MA. This is a more geometrical approach by nature; it does not impose an elliptical structure and it is not weighted by intensity, instead it only takes the cloud's footprint mask into account to find the longest running spine which sits furthest away from the cloud edges. However, both of these metrics have issues when the morphology of a cloud becomes more complex, and they also behave differently with different morphologies. For example, for the cloud depicted in Fig. <ref>, with the moments approach we retrieve an AR_a/b of 1.3, suggesting that we are dealing with a fairly circular cloud, even though it is clear from the figure that this is not the case. On the other hand, the medial axis aspect ratio AR_MA has a value of 15.1, suggesting that this cloud is highly filamentary in nature. However, upon visual inspection, this MC is perhaps somewhere in between, and better classified as a ring-like cloud rather than a true filamentary structure. Thus, while the aspect ratio can be used as a first glance at overall trends, any conclusions need to be carefully considered, as a more robust classification is needed in order to differentiate between real elongated structures and ring-like (or other complex morphologies) MCs.
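For reference, the medial-axis measurement described above can be approximated with standard image-morphology tools; a sketch (Python/scikit-image), where the spine length is approximated by the number of skeleton pixels and the catalogue's actual implementation may differ in detail:

import numpy as np
from scipy.ndimage import binary_fill_holes
from skimage.morphology import medial_axis

def medial_axis_aspect_ratio(footprint_mask, pixel_scale_pc=5.0):
    # AR_MA = L_MA / W_MA from a 2D boolean cloud footprint mask.
    # W_MA is twice the mean distance from the skeleton to the cloud edge.
    filled = binary_fill_holes(footprint_mask)        # fill holes first
    skel, dist = medial_axis(filled, return_distance=True)
    n_spine = int(skel.sum())
    if n_spine < 2:                                   # too small to skeletonise
        return 1.0
    l_ma = n_spine * pixel_scale_pc
    w_ma = 2.0 * dist[skel].mean() * pixel_scale_pc
    return l_ma / w_ma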
Contour-integral-based rational filters lead to interior eigensolvers for non-Hermitian generalized eigenvalue problems. Based on Zolotarev's problems, this paper proves the asymptotic optimality of the trapezoidal quadrature of the contour integral in terms of the rational function separation. A composite rule of the trapezoidal quadrature is derived. Two interior eigensolvers are proposed based on the composite rule. Both eigensolvers adopt direct factorization and the multi-shift generalized minimal residual method for the inner and outer rational functions, respectively. The first eigensolver fixes the order of the outer rational function and applies the subspace iteration to achieve convergence, whereas the second eigensolver doubles the order of the outer rational function every iteration to achieve convergence without subspace iteration. The efficiency and stability of the proposed eigensolvers are demonstrated on synthetic and practical sparse matrix pencils.

Generalized eigenvalue problem; non-Hermitian matrix; contour integral; trapezoidal quadrature; optimal rational approximation; Zolotarev problem.

§ INTRODUCTION

We aim to solve the large-scale interior eigenvalue problem for non-Hermitian matrices. Such problems arise in many fields, including electronic structure calculations, dynamic system simulations, and control theory. Most of these applications require only a part of the spectrum, and many of the desired eigenvalues are interior ones. The interior non-Hermitian generalized eigenvalue problem we consider is A x_i = λ_i B x_i, λ_i ∈ , where  is the region of interest, the matrix pencil (A, B) is regular, and either or both of A and B is non-Hermitian. The goal is to find all eigenpairs (λ_i, x_i) in the region . Once the problem in a region can be solved, the entire spectrum could be partitioned into a union of many regions. The interior eigensolver could be applied to all regions in parallel to obtain the full eigendecomposition.

Methods for non-Hermitian generalized eigenvalue problems have been developed for decades. The QZ method <cit.> is a popular one in practice for dense and small-to-medium scale matrices. When a sparse and large-scale matrix is considered, iterative methods <cit.> are preferred. Among iterative methods, many adopt the combination of a contour-based filter and the subspace iteration, e.g., the Sakurai-Sugiura (SS) method <cit.> and variants of the FEAST method <cit.>. The original SS method suffers from numerical instability due to the ill-conditioned Hankel matrix. Sakurai and Sugiura then proposed CIRR <cit.>, which uses a Rayleigh-Ritz projection to avoid the explicit usage of the moments, as well as a block version of the SS method <cit.>. The number of linear systems therein is reduced, and so is the order of the Hankel matrix. The FEAST method, originally proposed for Hermitian matrices, has been extended to non-Hermitian matrices, resulting in many variants: dual FEAST <cit.>, BFEAST <cit.>, HFEAST <cit.>, etc. For all the contour-based filters or rational filters in the methods above, the convergence and convergence rate highly depend on the locations and weights of the poles. Although the trapezoidal quadrature leads to a good convergence behavior <cit.>, its optimality remains unknown for non-Hermitian matrices.
In this paper, we discuss the optimality of the trapezoidal quadrature and its composite rule property. On top of this property, we propose interior eigensolvers for non-Hermitian generalized eigenvalue problems.

Our contributions in this paper can be summarized in two parts: theoretical analysis and algorithm design. Theoretically, with the tool of Zolotarev's problems, we prove that when the contour is a circle, the inverse power method leads to an optimal rational separation for a non-Hermitian generalized eigenvalue problem. The trapezoidal quadrature of a contour integral achieves asymptotic optimality in separation. A composite formula for the trapezoidal quadrature is proposed to facilitate the later algorithm design. More specifically, given a rational function R_k(z) from the trapezoidal quadrature, we derive the composite formula as R_k(z) = R_k_2(T(R_k_1(z))) for k = k_1 k_2 and T(·) being a Möbius transform. In the algorithm design part, we propose two novel algorithms based on the composite formula of the trapezoidal quadrature. The first algorithm adopts k_1 and k_2 as hyperparameters and applies the subspace iteration with the fixed filter R_k_2(T(R_k_1(z))) to matrix pencils. The inner rational function R_k_1 is implemented with direct matrix factorization, whereas the outer rational function R_k_2 is implemented via the multi-shift generalized minimal residual method (GMRES). The second algorithm adopts k_1 as a hyperparameter and removes the subspace iteration. The convergence of the second algorithm is guaranteed by doubling k_2 every iteration until the rational approximation is accurate enough. Similar to the first algorithm, the inner and outer rational functions are implemented via direct factorization and multi-shift GMRES, respectively. Thanks to the nature of multi-shift GMRES, doubling k_2 does not significantly increase the computational cost. Numerical results on both synthetic and practical matrix pencils demonstrate the efficiency of the two proposed algorithms. Both theoretically and numerically, the second algorithm is suggested for practical usage.

The rest of this paper is organized as follows. In <ref>, we introduce the basic idea and practical considerations of the contour integral based filter. Later, we introduce the Zolotarev third and fourth problems with related theorems and the optimality of the rational function separation in <ref>. Two algorithms are proposed in <ref>. Then, numerical experiments demonstrate the efficiency of both proposed algorithms in <ref>. Finally, <ref> concludes the paper.

§ SUBSPACE ITERATION WITH RATIONAL FILTER

Subspace iteration with a rational filter is a class of eigensolvers for the interior non-Hermitian generalized eigenvalue problem <ref>. All eigensolvers in this class use the subspace iteration framework and adopt various filters, i.e., rational functions with different choices of weights and poles. These rational filters include various discretizations of the contour enclosing , which is the desired region of eigenvalues. In this section, we will first review the subspace iteration and then discuss contour-based rational filters with various discretization strategies. Some practical considerations, i.e., the number of vectors and the number of poles, are discussed at the end.
§.§ Subspace iteration

The general framework of the subspace iteration with a filter alternates between two phases: 1) refining the subspace via the filter; 2) solving a reduced eigenvalue problem in the subspace.

In the first phase, the filter is applied to an approximate basis of the subspace, and a refined representation of the subspace is obtained. For Hermitian eigenvalue problems, left and right eigen-subspaces are identical. Hence, only the basis of the right eigen-subspace is usually refined, and its complex conjugate is used as that of the left eigen-subspace. However, for non-Hermitian eigenvalue problems, left and right eigen-subspaces are different. After the right eigen-subspace is refined, an extra step is needed to obtain an approximation of the left eigen-subspace. In the second phase, the original large-scale eigenvalue problem is projected to the left and right eigen-subspaces and reduced to an eigenvalue problem of a much smaller scale. Then the small-scale eigenvalue problem is solved by classical dense eigensolvers, which results in the approximated eigenvalues of the original problem. The approximated eigenvectors could be calculated as well. Some filters depend on the approximated eigenvalues, whereas others do not. For filters that do not use the approximated eigenvalues, the second phase serves as a calculation of the stopping criteria.

Due to the potentially ill-conditioned eigenbasis of non-Hermitian matrices, the generalized Schur vectors could be extracted to represent the eigen-subspaces and lead to a more stable scheme. Such a subspace iteration idea has been combined with FEAST for non-Hermitian matrices, resulting in HFEAST <cit.>. Let U be the vectors approximating the right eigen-subspace, i.e., U is the result of applying the filter. The orthonormal basis of U is denoted as V = orth(U). As in HFEAST <cit.>, the orthonormal basis of the left eigen-subspace could be constructed as W = orth(AV - σ BV), where σ is a shift different from the eigenvalues of (A, B). After obtaining the approximated orthonormal bases of the left and right eigen-subspaces, the reduced generalized eigenvalue problem (W^* A V, W^* B V) is addressed by the QZ algorithm, which yields the generalized Schur form P_L^* (W^*AV) P_R = H_A and P_L^* (W^*BV) P_R = H_B, where P_L and P_R are unitary matrices, and H_A and H_B are upper triangular matrices. The approximated eigenvalues are λ̃_i = (H_A)_i,i / (H_B)_i,i, for i = 1, 2, …, s. To obtain the eigenvectors, we further calculate the left and right eigenvectors of (H_A, H_B) and denote them as V_L and V_R, respectively. The approximated left and right eigenvectors of (A, B) are, respectively, W P_L V_L and V P_R V_R. The overall framework of the subspace iteration in HFEAST <cit.> with filter ρ(·) is summarized in <ref>. In the rest of the paper, we adopt the subspace iteration as in <ref> and focus on the construction of ρ(·).

§.§ Contour based filter and discretization

The basic idea behind the filter is to construct a matrix function whose value is close to zero outside the region  and different from zero inside . One good choice of matrix functions is the indicator function of , which could be constructed via a contour integral enclosing the region . The indicator function of  via the contour integral admits f(z) = 1/(2πi) ∮_Γ 1/(ζ-z) dζ, which equals 1 for z ∈  and 0 for z ∉ , where Γ is a positively oriented Jordan curve enclosing the region .
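A quick numerical check of <ref> (Python; the circle is discretized with equispaced midpoint nodes, anticipating the quadrature rules discussed next):

import numpy as np

k = 64
theta = 2.0 * np.pi * (np.arange(k) + 0.5) / k
zeta = np.exp(1j * theta)                 # quadrature nodes on the unit circle
dzeta = 1j * zeta * (2.0 * np.pi / k)     # d(zeta) for equispaced angles

def indicator(z):
    # Discretized (1 / (2*pi*i)) * contour integral of 1 / (zeta - z).
    return np.sum(dzeta / (zeta - z)) / (2.0j * np.pi)

print(abs(indicator(0.3 + 0.2j)))         # ~1 for z inside the unit circle
print(abs(indicator(1.7 - 0.5j)))         # ~0 for z outside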
[In (<ref>), we implicitly assume that the eigenvalues of (A, B) do not lie on the boundary of .] For a diagonalizable matrix pencil (A, B), i.e., A X = B X Λ, with X being the eigenvectors and Λ being a diagonal matrix with eigenvalues on its diagonal, the indicator function f(z) applied to matrices becomes f(B^-1A) = 1/(2πi) ∮_Γ (ζ I - B^-1 A)^-1 dζ = X [ 1/(2πi) ∮_Γ (ζ I - Λ)^-1 dζ ] X^-1 = X _(Λ) X^-1, where _(·) denotes the indicator function for the region . In <cit.>, a result similar to <ref> is proved, which contributes to the theoretical foundation that the contour integral works even if the non-Hermitian system is defective.

Various numerical discretizations of the contour integral <ref> lead to various filters. In many applications, especially non-Hermitian eigenvalue problems, the contour Γ is circular. In many other applications, the contour could be conformally mapped to a circle. Hence, in this paper, we will discuss the discretization of contour integrals for Γ being a circle. When the contour is a unit circle, we could reparameterize the circle by e^iθ for θ∈ [0, 2π). The integral <ref>, then, is a one-dimensional integral and could be numerically evaluated by various quadrature rules. Generally, the discretized integral with k points could be written as R_k(z) = ∑_i=1^k w_i/(p_i - z), where {w_i}_i=1^k are weights and {p_i}_i=1^k are poles. For example, when the trapezoidal quadrature is applied, the integral <ref> is numerically approximated by R_k^(T)(z) = (1/k) ∑_i=1^k e^iθ_i/(e^iθ_i - z), where θ_i = 2(i - 1/2) π/k. From the form of <ref>, we notice that R_k(z) is a rational function. Let _n,m = {P(z)/Q(z) : deg(P(z)) ≤ n, deg(Q(z)) ≤ m} be the set of rational functions, where P(z) and Q(z) are polynomials and deg(·) denotes the degree of the polynomial. The discretized contour integral <ref> is in _k-1, k. When the discretized contour integral is applied to matrices, the rational matrix function becomes R_k(B^-1A) = ∑_i=1^k w_i (p_i I - B^-1 A)^-1 = ∑_i=1^k w_i (p_i B - A)^-1 B. The matrix function R_k(B^-1A) in <ref> is used as the filter in <ref>.

Among various quadrature rules, the optimal quadrature needs to be decided based on a criterion. As we will see later, the convergence rate of the subspace iteration mainly depends on the ratio <ref>. Since we do not know the eigenvalues a priori, we could assume that there is an annulus around the boundary of  as a generalized eigengap. The inner part and the outer part are I = {z : |z| ≤ a} and O = {z : |z| ≥ b}, where a and b are the radii of the inner and outer parts of the annulus, and I contains all the eigenvalues inside . Then the criterion is defined as  = sup_z ∈ O |R_k(z)| / inf_z ∈ I |R_k(z)|. When the ratio is small, the convergence of the subspace iteration is fast. Hence, we would like to address the following optimization problem to obtain the optimal weights and poles for a given k: inf_{w_i}_i=1^k, {p_i}_i=1^k sup_z ∈ O |R_k(z)| / inf_z ∈ I |R_k(z)|. From <ref>, what we want is to separate the values inside and outside by enlarging the values in I and reducing the values in O at the same time. Following the above argument, the contour is not necessarily the boundary of  and we can choose any contour in the annulus annu(a,b) = {z : a ≤ |z| ≤ b}. Moreover, we could discard the concept of contour discretization and view it as a rational function separation problem. One could imagine that as b - a becomes larger, it is easier to separate the values inside and outside the annulus with rational functions.
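To make the preceding formulas concrete, the following sketch (Python/SciPy; the annulus radii and the pole count are illustrative) builds the trapezoidal poles and weights, estimates the separation criterion <ref> by sampling the two boundary circles, and applies the matrix filter <ref> with one sparse LU factorization per pole:

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def trapezoidal_poles_weights(k, center=0.0, radius=1.0):
    theta = 2.0 * (np.arange(1, k + 1) - 0.5) * np.pi / k
    return center + radius * np.exp(1j * theta), radius * np.exp(1j * theta) / k

def separation_ratio(poles, weights, a, b, n=720):
    # sup_{|z|=b} |R_k(z)| / inf_{|z|=a} |R_k(z)| for R_k(z) = sum_i w_i / (p_i - z).
    phis = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, n, endpoint=False))
    Rk = lambda z: np.abs(np.sum(weights[:, None] / (poles[:, None] - z[None, :]), axis=0))
    return Rk(b * phis).max() / Rk(a * phis).min()

def apply_filter(A, B, Y, poles, weights):
    # R_k(B^{-1}A) Y = sum_i w_i (p_i B - A)^{-1} B Y, one sparse LU per pole.
    BY = np.asarray(B @ Y, dtype=complex)
    Z = np.zeros(Y.shape, dtype=complex)
    for p, w in zip(poles, weights):
        lu = spla.splu(sp.csc_matrix(p * B - A))   # offline factorization
        Z += w * lu.solve(BY)                      # online backward substitution
    return Z

poles, weights = trapezoidal_poles_weights(k=16)
print(separation_ratio(poles, weights, a=0.8, b=1.25))

In practice the factorizations are computed once per pole and reused across all columns and all subspace iterations, which is the origin of the cost split discussed next.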
The drawback of using a larger b is that more eigenvalues may fall into annu(a,b) and we do not explicitly know the impact of these eigenvalues on the convergence of the subspace iteration. <ref> illustrates the approximation error to the step function and the criterion ratio  for three numerical discretizations of R_k(z) with 16 poles, namely the trapezoidal quadrature, the Zolotarev fourth function <cit.> on the real axis, and the Gauss quadrature. As shown in <ref>, the Zolotarev fourth function on the real axis is neither optimal for non-Hermitian eigenvalue problems in the L^∞ sense nor optimal in the sense of <ref> when the inner and outer parts are no longer defined on the real axis. The trapezoidal quadrature outperforms the other two. As one of the contributions in this paper, we prove in <ref> that the trapezoidal quadrature provides the asymptotically optimal weights and poles for <ref>.

§.§ Practical consideration

Given a discretization rule, the major computational cost in applying the filter R_k(B^-1A) Y as in <ref> lies in solving the shifted linear systems, (p_i B - A)^-1 for i = 1, …, k. Such a computational cost is often determined by the condition number of (p_i B - A). Since the positions of the poles are on the contour, the condition number is inversely proportional to the eigengap around the contour, and is in general large in practice. Hence, in most contour based filters, the shifted linear systems are solved by direct methods, e.g., LU factorization. The overall computational cost is then divided into two parts: the offline factorization part and the online solving part (backward substitution). The cost could be rewritten as C_factor × k + C_apply × k × n_col × n_iter + o(C_apply), where C_factor is the cost of a factorization, C_apply is the cost of a backward substitution, k is the number of poles, n_col is the number of columns in Y, n_iter is the number of subspace iterations, and o(C_apply) = o(C_apply(N)) is the rest of the cost of a lower order than C_apply(N). Throughout the subspace iterations, the tuneable hyperparameters are k and n_col, and n_iter is determined by k, n_col, and the stopping criteria in the algorithm. The dependence of n_iter on k and n_col could be reflected by the function value gap of R_k(λ_i), since we are essentially applying the power method with R_k(B^-1A). Let σ be a permutation of 1, 2, …, N, such that |R_k(λ_σ_1)| ≥ |R_k(λ_σ_2)| ≥ ⋯ ≥ |R_k(λ_σ_N)|. Then, the number of subspace iterations n_iter mainly depends on the ratio max_i > n_col |R_k(λ_σ_i)| / min_λ_σ_i ∈  |R_k(λ_σ_i)|. When the ratio is greater than or equal to one, the subspace iteration would suffer from a divergence issue. When the ratio is smaller than one, the subspace iteration would converge, and the convergence rate depends on the distance between the ratio and one. The further the distance, the faster the convergence. In the following, we discuss the practical considerations for the number of vectors n_col and the number of poles k.

Number of vectors n_col. To extract the entire eigenspace we are interested in, it is necessary that n_col ≥ s. However, the number of eigenvalues in the region  is not known a priori. Usually, a rough estimation of s, denoted as s̃, is calculated and the number of vectors is set as n_col = ⌊ρ s̃ ⌋ for ρ being a hyperparameter greater than one. Since the estimation of s is not in the scope of this paper, we set n_col = ⌊ρ s ⌋ in all numerical experiments. Even when we have n_col ≥ s, the subspace iteration may still fail to converge to all the desired eigenpairs.
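The ratio <ref> can also be monitored directly once rough eigenvalue locations are available; a small helper of this kind (Python; unit-circle trapezoidal filter, hypothetical inputs) is convenient for reasoning about the example discussed next:

import numpy as np

def eigenvalue_ratio(eigvals, inside, n_col, k=16):
    # max_{i > n_col} |R_k(lambda_sigma_i)| / min_{lambda inside} |R_k(lambda)|,
    # with eigenvalues sorted by decreasing filtered magnitude.
    lam = np.asarray(eigvals, dtype=complex)
    theta = 2.0 * (np.arange(1, k + 1) - 0.5) * np.pi / k
    poles = np.exp(1j * theta)
    r = np.abs(np.sum(poles[:, None] / (poles[:, None] - lam[None, :]), axis=0)) / k
    order = np.argsort(-r)
    return r[order[n_col:]].max() / r[np.asarray(inside, dtype=bool)].min()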
We provide an example of such cases in <ref>. There are 2 eigenvalues λ_1 = 0 and λ_2 = 0.75 inside , and an eigenvalue λ_3 = √(2) e^iπ/4 outside. As shown in <ref> (left), when 4 poles are adopted, the function values obey 1 = |R_4^(T)(λ_3)| > |R_4^(T)(λ_2)| ≈ 0.7596. The ratio <ref> is greater than one, and the iteration with two columns would converge to (λ_1, x_1) and (λ_3, x_3) rather than the desired eigenpairs. The overall subspace iteration fails to capture all the desired eigenpairs inside . One way to deal with the issue is to increase n_col until it covers all eigenvalues whose function values are greater than R_4^(T)(λ_2), making the ratio <ref> smaller than one. Even when convergence is guaranteed, we may still increase n_col for faster convergence. However, when there are many unwanted eigenvalues close to the contour, we need to set n_col extremely large for the subspace iteration to converge. In this case, it would be more efficient to increase the number of poles.

Number of poles k. In many applications, for stable convergence, adding poles is an inevitable choice. When more poles are added, i.e., f(z) has been discretized with more points, the numerical approximation of R_k(z) to f(z) is improved. The ratio <ref> is guaranteed to be smaller than one even when n_col = s. For example, as in <ref> (right), the number of poles is increased from 4 to 16. Then we have function values |R_16^(T)(λ_1)| = 1, |R_16^(T)(λ_2)| ≈ 0.9901, and |R_16^(T)(λ_3)| ≈ 0.0667, and the ratio <ref> becomes |R_16^(T)(λ_3)| / |R_16^(T)(λ_2)| ≈ 0.0594, away from one. The subspace iteration would converge efficiently even when n_col = 2. Increasing the number of poles leads to a more accurate approximation to f(z) and, hence, a smaller n_col and n_iter. The drawback of increasing the number of poles is the increasing number of matrix factorizations, which is computationally more expensive than the solving (the backward substitution). When a massive amount of computational resources is available, all k poles could be calculated independently and in parallel. Hence, in practice, we would increase k to benefit most from the computational resources and then increase n_col to have an efficient and robust subspace iteration algorithm.

§ ASYMPTOTICALLY OPTIMAL CONTOUR DISCRETIZATION

This section shows that the trapezoidal quadrature is an asymptotically optimal discretization for a disk region , i.e., an asymptotically optimal solution to the min-max problem <ref>. In <ref>, the Zolotarev third and fourth problems are reviewed. The former serves as the theoretical foundation of the asymptotic optimality of the trapezoidal quadrature. Then <ref> derives that R_k^(T)(z) = R_1^(T)(z^k), which serves as a compact form for R_k^(T)(z). For the sake of notation, we abuse notation and write R_k(z) = R_k^(T)(z) in the rest of the paper, which represents the trapezoidal quadrature of the unit circle contour whose center is located at the origin. When the radius of the contour is r and the center is c, we denote the discretization as R_c,r,k(z). Finally, we prove in <ref> that the trapezoidal quadrature is an asymptotically optimal contour discretization for a disk region .

§.§ Zolotarev problems

We introduce the Zolotarev third and fourth problems with their related theoretical results <cit.>. The Zolotarev third problem seeks optimal rational functions for the separation of two regions, whereas the Zolotarev fourth problem seeks the optimal uniform approximation to the sign function on two symmetric intervals.
Since contour discretization admits the form of a rational function, it is natural to bridge the contour discretization and the Zolotarev problems. The Zolotarev third and fourth problems are given in <ref> and <ref>, respectively. Let E and G be two disjoint regions of ℂ, i.e., E ∩ G = ∅. The Zolotarev third problem is Z_k(E, G) = inf_r ∈ℛ_k,k sup_z ∈ E |r(z)| / inf_z ∈ G |r(z)|. Let 0 < ℓ < 1. The Zolotarev fourth problem is inf_r ∈ℛ_k,k ‖sign(x) - r(x)‖_L^∞ ([-1, -ℓ] ∪ [ℓ, 1]). The Zolotarev third problem seeks a rational function that best separates E and G. The Zolotarev fourth problem is a special case of the third problem with the regions E = [-1, -ℓ] and G = [ℓ, 1]. It can be shown that the Zolotarev third problem and the Zolotarev fourth problem are equivalent via a Möbius transform <cit.>. The explicit solution to the Zolotarev fourth problem is adopted in <cit.> to construct an interior eigensolver for Hermitian eigenvalue problems. More results about the Zolotarev fourth problem can be found in <cit.>. When E and G are symmetric disks as in <ref>, the solution to the Zolotarev third problem is explicitly given in <ref>. <Ref> as claimed in this paper takes a different parameterization from that in <cit.>. Let S = {z ∈ℂ: |z - (1 + ℓ)/2| ≤ (1 - ℓ)/2}, 0 < ℓ < 1. Then the rational function r_k^(Z)(z) = ((z - √(ℓ))/(z + √(ℓ)))^k attains the infimum of the Zolotarev third problem Z_k(S,-S), and the infimum equals ((1 + √(ℓ))/(1 - √(ℓ)))^-2k. The explicit solution to the Zolotarev third problem as in <ref> is the key to proving the asymptotic optimality of the trapezoidal quadrature for the contour integral. The rational function in <ref> is referred to as the Zolotarev function in the rest of the paper.

§.§ Compact form for R_k(z)

In order to connect the Zolotarev function and the trapezoidal quadrature of the contour integral, and to derive the composite formula in <ref>, we establish an equality relation between R_km(z) and R_k(z^m). The relation heavily relies on the symmetry of the trapezoidal quadrature on the circle. Let us start with the toy cases k = 2, 4. The trapezoidal quadrature of the unit circular contour with two poles, R_2, can be rewritten as R_2(z) = 1/2( e^{iπ/2}/(e^{iπ/2} - z) + e^{i3π/2}/(e^{i3π/2} - z)) = 1/2 · 2 e^{iπ}/(e^{iπ} - z^2) = 1/(1 + z^2) = R_1(z^2). Here we use the symmetry of the poles and weights with respect to the origin to derive the compact form of R_2(z) and find that R_2(z) is equivalent to R_1(z^2). Let us further derive the compact form of R_4(z), R_4(z) = 1/4( e^{iπ/4}/(e^{iπ/4} - z) + e^{i5π/4}/(e^{i5π/4} - z) + e^{i3π/4}/(e^{i3π/4} - z) + e^{i7π/4}/(e^{i7π/4} - z)) = 1/2( e^{iπ/2}/(e^{iπ/2} - z^2) + e^{i3π/2}/(e^{i3π/2} - z^2)) = R_2(z^2) = R_1(z^4), where, in the second equality, we combine the first two and the last two terms, and in the last equality, we adopt the compact form of R_2(z). From the derivation of the compact forms of R_2(z) and R_4(z), we could directly extend the derivation to obtain the compact form R_k(z) = R_1(z^k) for k = 2^m, m ∈ℕ_+. Fortunately, the compact form holds for any k ∈ℕ_+. The result is summarized in <ref>. For all k ∈ℕ_+, let the k roots of z^k = -1 be σ^(k)_i for i = 1, …, k. Then the compact form of R_k(z) admits R_k(z) = 1/k∑_i = 1^k σ^(k)_i/(σ^(k)_i - z) = 1/(1 + z^k) = R_1(z^k). We first prove two equalities, <ref> and <ref>, and then derive the compact form of R_k(z). The k roots of the k-th degree polynomial z^k + 1 are, with a slight abuse of notation, denoted as σ_i for i = 1, 2, …, k. A k-th order polynomial with k roots takes the form a_k ∏_i = 1^k (z - σ_i), where a_k is the leading-order coefficient.
Comparing the leading-order coefficients of z^k + 1, we know a_k = 1 and obtain z^k + 1 = ∏_i = 1^k (z - σ_i). Then we prove the second equality, -1/k∑_i=1^k σ_i ∏_j = 1, j ≠ i^k (z - σ_j) = 1. The left-hand side of <ref> is a (k - 1)-th degree polynomial. For the equality <ref> to hold, we only need to check the equality at k different points. Specifically, we check it at σ_i for i = 1, …, k and obtain -σ_i/k∏_j = 1, j ≠ i^k(σ_i - σ_j) = - σ_i/k lim_z →σ_i (z^k + 1)/(z - σ_i) = - σ_i/k · kσ_i^k - 1/1 = - σ_i^k = 1, where the first equality is due to <ref> and the continuity of (z^k + 1) / (z - σ_i), the second equality comes from the L'Hopital rule for complex functions, and the last equality holds since σ_i is a root of z^k + 1. Finally, we derive the compact form of R_k(z) as in <ref>: R_k(z) = 1/k∑_i = 1^k σ_i/(σ_i - z) = -1/k∑_i = 1^k σ_i ∏_j = 1, j ≠ i^k(z - σ_j)/∏_i = 1^k(z - σ_i) = 1/∏_i = 1^k (z - σ_i) = 1/(z^k + 1) = R_1(z^k), where the second equality adopts <ref> and the fourth equality adopts <ref>. A related compact form without detailed derivation can be found in <cit.>. The compact form <ref> can be further generalized to R_c,r,k(z) and yields the compact form R_c,r,k(z) = 1/(1 + ((z - c)/r)^k).
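As a quick numerical sanity check of the compact form and its shifted, scaled variant, the following sketch compares the pole-weight sum with 1/(1 + ((z-c)/r)^k) at random complex test points; the test points and the particular center and radius are arbitrary choices.

```python
import numpy as np

def trap_sum(z, k, c=0.0, r=1.0):
    """Trapezoidal quadrature written as a pole-weight sum, (1/k) sum_i sigma_i / (sigma_i - (z-c)/r)."""
    sigma = np.exp(1j * (2 * np.arange(k) + 1) * np.pi / k)   # the k roots of x^k = -1
    w = (z - c) / r
    return np.mean(sigma / (sigma - w[..., None]), axis=-1)

def compact_form(z, k, c=0.0, r=1.0):
    """Compact form R_{c,r,k}(z) = 1 / (1 + ((z-c)/r)^k)."""
    return 1.0 / (1.0 + ((z - c) / r) ** k)

rng = np.random.default_rng(1)
z = rng.standard_normal(100) + 1j * rng.standard_normal(100)
for k in (2, 4, 7, 16):
    err = np.max(np.abs(trap_sum(z, k, c=0.3, r=2.0) - compact_form(z, k, c=0.3, r=2.0)))
    print(k, err)   # agreement up to rounding error
```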
In the following, we argue that, although the trapezoidal quadrature of contour integral is not the optimal rational function, it achieves asymptotic optimality in the sense that the ratioof R_k(z) decays at the same rate as that of the optimal rational function up to a constant pre-factor of 2. We now consider that the contour is the boundary of I and the trapezoidal quadrature with k points is adopted. By <ref>, the discretization can be written as R_0,a,k(z) = 1/1 + (z/a)^k, where I is a disk centered at the origin with radius a. By maximum modulus principle, the infimum of I and the supremum of O are taken when |z|=a and |z|=b. In region I, |z/a|^k≤ 1. The absolute value of denominator can be viewed as the distance between -1 and (z/a)^k. By simple computation, the infimum is achieved when z=a. In a similar way, the supremum of O is achieved when z=√(-1)b from the fact that |z/a|^k>1 in O. The ratio <ref> is = 2/(b/a)^k-1∼ 2 ( a/b)^k,which asymptotically decays at the same rate as that in <ref>. The above discussion is summarized in the following corollary. The trapezoidal quadrature discretization of the contour integral on the boundary of G = I results in the rational function R_k(z) = 1/1 + (z/a)^k. The rational function R_k(z) achieves the ratio = 2/(b/a)^k-1, which is asymptotically equal to the infimum of the min-max problem <ref> for E=O and G=I. Although the trapezoidal quadrature of the contour integral is not the optimal rational function for <ref>, the ratio asymptotically achieves the optimal one up to a constant prefactor 2. Hence, we call the rational function from the trapezoidal quadrature of the contour the nearly optimal rational function for <ref>. The advantage of the trapezoidal quadrature over the optimal rational function is that <ref> could be efficiently parallelized in solving the shifted linear systems. Another advantage is as we will propose next that <ref> admits a composite rule and benefits from the flexible trade-off between the number of matrix factorizations and the iterative linear system solves. § COMPOSITE RULE OF TRAPEZOIDAL QUADRATUREIn this section, we will derive the composite rule of the trapezoidal quadrature discretization of the contour integral and propose eigensolvers based on the composite rule. In the eigensolvers, the composite rule is combined with the multi-shift GMRES to reduce the cost of outer iteration. The proposed eigensolver can reduce cost while preserving theasymptotically optimal ratio . §.§ Composite rule Given a positive integer k and its integer factorization k = k_1 k_2 for k_1 > 1 and k_2 > 1, we aim to rewrite the k-th order rational function R_k(z) as a composition of two k_1-th and k_2-th rational functions, R_k_1(z) and R̂_k_2(z) = R_k_2(T(z)), where T(·) is a simple transform function. Precisely, the composite function admits, R_k_1 k_2(z) = R̂_k_2(R_k_1(z)) = R_k_2(T(R_k_1(z)).We restrict T(·) to be a Möbius transform and require it satisfying T(R_k_1(z)) = z^k_1 such that the composition with R_k_2(z) becomes obvious. Luckily, we find that the resulting T(·) is easy to be incorporated into the eigensolver design part.For a Möbius transform function T(z) admitsT(z) = a z - b/c z - d,for a, b, c, d being constant coefficients. According to <ref>, we have a natural composite expression as,R_k_1k_2(z) = R_1(z^k_1 k_2) = R_k_2(z^k_1).If T(R_k_1(z)) = z^k_1, then directly we haveR_k_1k_2(z) = R_k_2(T(R_k_1(z))),which is the desired composite rule.Now we determine the coefficients such that T(R_k_1(z)) = z^k_1. 
Substituting R_k_1(z) = 1/1 + z^k_1 into the expression of T(z), we obtain,T(R_k_1(z)) = a - b(1 + z^k_1)/c - d(1 + z^k_1) = z^k_1 ⟺d z^2k_1 + (d-c-b) z^k_1 + (a - b) = 0.The above equality holds for all z. Hence we have solutions of coefficients satisfying d = 0 and a = b = -c. These solutions of coefficients lead to the unique Möbius transform function,T(z) = 1-z/z.The only concern for the above derivation is the case z = 0. For rational function R_k(z), zero is achieved R_k(z) = 0 if and only if |z| = ∞, which is not part of the spectrum of matrices. Hence z = 0 for T(z) would not cause any trouble in practice and our composite expression holds for all values of z. In <ref>, the mapping of R_k_1(z) and T(R_k_1(z)) are illustrated.Throughout the above derivation, we conclude that R_k_1k_2(z) = R_k_2(T(R_k_1(z))). A generalized composite rule is given in <ref> for domains with various center c and radius r. In <ref>, we compose R_k_2(·) and T(·) together and rewrite it as the sum of first-order rational functions. Such a summation form could later be used directly in the algorithm design. Given a positive integer k and its integer factorization k = k_1 k_2, the rational function R_c,r,k(z) admits the following composite rule,R_c,r,k(z) = R_0,1,k_2(T ( R_c,r,k_1(z)) ),where T(·) is the Möbius transform <ref>. When k_2 is even, the rational function R_c,r,k(z) further admits the summation form,R_c,r,k(z) = ∑_i=1^k_2 c_i(R_c,r,k_1(z) - s_i)^-1R_c,r,k_1(z),where c_i^(k_2) = -1/k_2σ_i^(k_2)/1+σ_i^(k_2), s_i^(k_2) = 1/1+σ^(k_2)_i, and {σ_i^(k_2)}_i=1^k_2 are roots of x^k_2=-1. When k_2 is odd, R_c,r,k(z) = ∑_i=1^k_2-1 c_i(R_c,r,k_1(z) - s_i)^-1 R_c,r,k_1(z) + 1/k_2R_c,r,k_1(z),where σ^(k_2)_k_2=-1. <Ref> could be proved through direct calculation. The detailed proof can be found in <ref>. Besides the composite rule, there is a connection between the poles of R_k(z) and the poles of the composite rule. The poles of the original rational function are transferred into the poles of the inner operator. The connection is detailed in <ref>, whose proof is in <ref>.For any p_i^(k) being a pole of R_c,r,k(z), there exist a s_j^(k_2) for 1≤ j≤ k_2, such thatR_c,r,k_1(p_i^(k))=s_j^(k_2),where s_k_2^(k_2) could be infinite when k_2 is odd.§.§ Interior eigensolver with subspace iterationUsing R_c, r, k(z) as the filter in subspace iteration for a matrix pencil (A, B) requires the evaluation of R_c, r, k(B^-1 A) Y for Y being a matrix of size N × n_col. By the composite rule for R_c, r, k(z) in <ref>, the evaluation of R_c, r, k(B^-1A)Y could be rewritten as,R_c, r, k(B^-1A)Y = ( ∑_i=1^k_2 c_i(R_c,r,k_1(B^-1 A) - s_i I)^-1) ( R_c,r,k_1(B^-1 A) Y ).where the operation R_c,r,k_1(B^-1 A) Y admits,R_c, r, k_1(B^-1A)Y = ∑_i=1^k_1w_i(p_iB-A)^-1BYfor {w_i} and {p_i} being the weights and poles of R_c, r, k_1(·).In <ref>, there are inner and outer parts of rational function evaluations. For the inner part, as in <ref>, the poles are on the contour and the width of the annulus is determined by the eigengap, which is small in many practical applications. Hence we conclude that linear systems p_i B - A are in general of bad condition numbers. Iterative linear system solvers would often take too many iterations before convergence. Therefore, a direct solver is adopted for all these linear systems. We pre-factorize all linear systems and denote them as K_i = p_i B - A for i = 1, …, k_1. [Throughout the numerical section of this paper, dense LU factorization is used by default for dense matrices A and B. 
If A and B are sparse matrices, we adopt the default sparse LU factorization methods in MATLAB.] Once the factorizations K_i are available, the inner part can be addressed efficiently. The inner part <ref> essentially applies a rational filter of the matrix pencil (A, B) to a matrix Y. Without loss of generality, we treat the inner part as a matrix or an operator G acting on Y. Since <ref> can be evaluated efficiently after the pre-factorization, we know that G can be applied to any Y efficiently. For the outer part, we first rewrite <ref> using the operator G, R_c, r, k(B^-1A) Y = ∑_i=1^k_2 c_i(G - s_i I)^-1Ỹ for Ỹ = G(Y). Then it is obvious that <ref> is of the same form as <ref> with the matrix pencil replaced by (G, I). Hence, if we had an explicit matrix representation of G, we could also apply a direct solver to address <ref>. On the other hand, we notice that the condition numbers of the linear systems in <ref> are much smaller than those in <ref>. For the linear systems in <ref>, a rational filter of order k_1 has already been applied and the eigengap is enlarged. As shown in <ref> and also in later numerical experiments, the relative eigengap for (G, I) is much enlarged compared to that of (A, B). Hence, iterative linear system solvers are expected to converge fast in this case. Throughout this paper, we adopt GMRES <cit.> as the default iterative linear system solver for <ref>, with G applied as an operator. Recall that GMRES is a Krylov subspace method. By the shift-invariance property of the Krylov subspace, all k_2 shifts in <ref> can be addressed simultaneously in the same Krylov subspace, i.e., 𝒦_n(G - s_iI, y) = 𝒦_n(G, y), (G - s_iI)V_n = V_n+1(H_n+1,n - s_i I_n+1,n), for i = 1, …, k_2, where the columns of V_n form a basis of 𝒦_n(G, y). The multi-shift GMRES <cit.> applies the operator G once per iteration. In all of our numerical experiments, the multi-shift GMRES converges in less than one hundred iterations, and no restarting is needed. Using a direct solver for the inner part and an iterative solver for the outer part of <ref>, we obtain an effective algorithm for the rational matrix function filter. Combining this filter with subspace iteration leads to our eigensolver based on the composite rule of the contour-integral-based rational function, where the contour integral is discretized via the trapezoidal quadrature. <Ref> gives the overall pseudocode. We now estimate the computational cost of <ref>. Let C_factor and C_apply be the computational complexities of the factorization and backward substitution (solving) of an N × N matrix. For almost all dense and sparse linear system solvers, the solving complexity is the same as the memory cost. Hence, C_apply is also used as the memory cost of storing a factorization. In the preparation phase before the subspace iteration, the weights and poles are computed independently of the matrix, so the computational cost is O(1). For the pre-factorization of the k_1 linear systems, the computational complexity is k_1 C_factor and the memory required is k_1 C_apply. In the subspace iteration phase, the per-iteration computational cost is dominated by the multi-shift GMRES. In the multi-shift GMRES, there are two major computational costs: the construction of orthonormal bases for the Krylov subspace and the solution of the reduced problems. Since the iteration number for GMRES is bounded by a small constant, the cost of solving the reduced problems is of lower order compared to that of the basis construction part.
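As an aside, a minimal sketch of the inner operator G described above is given below: it pre-factorizes the k_1 shifted systems with sparse LU and exposes the filter G(Y) = ∑_i w_i (p_i B - A)^{-1} B Y as a callable. The particular pole and weight construction and the use of SciPy's splu are illustrative assumptions; a production code would pair this operator with a genuine multi-shift GMRES for the outer part rather than independent per-shift solves.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def make_inner_operator(A, B, center, radius, k1):
    """Return G(Y) = sum_i w_i (p_i B - A)^{-1} B Y with the k1 shifted systems pre-factorized."""
    sigma = np.exp(1j * (2 * np.arange(k1) + 1) * np.pi / k1)   # roots of x^k1 = -1
    poles = center + radius * sigma                              # quadrature poles on the contour
    weights = radius * sigma / k1                                # trapezoidal weights
    lus = [spla.splu((p * B - A).tocsc()) for p in poles]        # offline factorizations K_i

    def G(Y):
        BY = B @ Y
        out = np.zeros(Y.shape, dtype=complex)
        for w, lu in zip(weights, lus):
            out += w * lu.solve(BY)                              # online backward substitutions
        return out

    return G

# Example on a small random sparse pencil (purely illustrative).
N = 500
A = (sp.random(N, N, density=5.0 / N, format="csc") + sp.eye(N, format="csc")).astype(complex)
B = sp.eye(N, format="csc").astype(complex)
G = make_inner_operator(A, B, center=0.0, radius=1.0, k1=8)
Y = np.random.default_rng(0).standard_normal((N, 4)) + 0j
Z = G(Y)   # one application of the inner rational filter to a block of vectors
```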
Hence, we only count the cost for the basis construction part for the multi-shift GMRES. If we denote n_iter^(j,t) as the GMRES iteration number for j-th column in the t-th subspace iteration, the dominant computational cost in the GMRES is∑_t = 1^T ∑_j = 1^n_col n_iter^(j,t)· k_1 C_apply,where T is the subspace iteration number, k_1 C_apply is the cost in applying G(·) to a vector. The leading memory cost ismax_t=1^T ∑_j=1^n_col n_iter^(j,t)· N. The overall dominant computational and memory costs for <ref> are summarized in <ref>. In the same table, we also list the computational and memory costs for subspace iteration with k_1k_2-th order rational filter without using the composite rule. Another row of ratio is added to indicate the acceleration from <ref>. Clearly, both the computation and memory costs in the pre-factorization phase are reduced by a factor of k_2. While the comparison for the subspace iteration part is less clear. The ratio depends on the iteration numbers of both the subspace iteration and the multi-shifted GMRES. We emphasize that as the subspace iteration goes, the columns of Y become closer and closer aligned with the eigenvectors. The Krylov spaces will converge faster and faster and so is the GMRES, i.e., ∑_j=1^n_coln_iter^(j,t) will decrease fast as t increase. §.§ Composite rule eigensolver without subspace iterationThe above comparison assumes that we use the same order trapezoidal quadrature with different implements, where the composite rule substitutes the cost of pre-factorization into the solving phase. We can also fix the number of pre-factorizations which in most cases is limited by the memory. <Ref> shows the comparison of approximation ratio and the numbers of applying the filter operator G. The simple rule with subspace iteration achieves the asymptotically optimal ratio only if both k_1 and T are large enough. While in practice the k_1 is often limited due to the expensive memory cost in storing the factorizations. In the worst case, k_1 is limited and not large enough to make the ratio smaller than 1. On the other hand, for the composite rule, k_2 is not limited. Increasing k_2, the GMRES iteration number n_iter would increase as well. Such an increase in the iteration number is due to the new shifts. However, as we will show in <ref>, n_iter is not sensitive to k_2 and increases mildly. With k_2 large enough, the composite rule will achieve the asymptotically optimal ratio in approximation. Importantly, T_c = 1 is sufficient for the composite rule to achieve the target precision when k_2 is sufficiently large. That is a numerically attractive feature of the composite rule. With this feature, we may discard the subspace iteration and only increase k_2 when needed. Furthermore, the shifts s_j^(k_2) are parts of the shifts s_j^(2k_2) and their weights satisfy c_j^(k_2)/2=c_j^(2k_2). Therefore, we propose an algorithm parallel to <ref> to double k_2 sequentially as in <ref>. The key difference between composite rule eigensolvers with and without subspace iteration is the 9th line in <ref>. We do not need to regenerate the Krylov subspace from scratch. Instead, we could reuse the Krylov subspace and further expand it if the new shifts require a larger Krylov subspace to converge. When k_2 is doubled, the only cost for shifts in the previous iteration is the recalculation of the weights, which is negligible. 
Hence, if k_2 = K_2 is sufficient to achieve the desired accuracy for a given problem, the cost of <ref> starting with k_2 = 1 is almost the same as that starting with k_2 = K_2 for K_2 being a power of two. Comparing <ref> with the simple rule, T_c is always one. Although <ref> could be combined with subspace iteration as well, numerically we found it not necessary. <Ref> turns the convergence cost of the subspace iteration into the convergence cost of the multi-shift GMRES. We find that the idea of reusing the Krylov subspace for algorithm design is similar to that in <cit.>, where a single Cayley transform is used for preconditioning. Instead, we use the trapezoidal quadrature with k_1 poles for preconditioning.

§ NUMERICAL EXPERIMENTS

In this section, we demonstrate the efficiency and stability of the proposed algorithms through three experiments. The first experiment shows the advantage of the trapezoidal-quadrature-discretized contour integral over another contour integral discretization, the Gauss quadrature. The latter two experiments show the computational benefit of applying <ref> and <ref>. We test some medium-to-large-scale matrices for illustration purposes. This paper focuses on the design of an efficient filter rather than proposing a novel projection technique. Hence, the projection techniques used in <ref>, <ref> and HFEAST remain identical. Throughout the numerical experiments, the relative error of an eigenpair is defined as e(λ̃_i, x̃_i) = ‖ A x̃_i - B x̃_i λ̃_i ‖_2 / ((|c| + r)‖ B x̃_i‖_2), where c and r are the center and radius of the region 𝒟. For the non-Hermitian interior eigenvalue problem, a phenomenon called ghost eigenvalues often appears. A ghost eigenvalue is one that appears as a computed eigenvalue but is not an eigenvalue of the original matrix pencil (A, B). Ghost eigenvalues make it difficult for the subspace iteration to converge. There are many practical strategies to address this issue. One of them, as in <cit.>, is to set a tolerance τ_g, which is much larger than the target relative error τ. As the iteration proceeds, the true eigenvalues converge to a small relative error, while the ghost eigenvalues do not converge to the same precision. After a few steps, there is a gap in the relative errors between true eigenvalues and ghost eigenvalues. When the relative error of an approximate eigenpair (λ̃_i, x̃_i) inside 𝒟 is smaller than τ_g, we view it as a filtered eigenpair and denote the number of filtered eigenpairs by p. When p no longer changes and all relative errors of the filtered eigenpairs are smaller than τ, we terminate the algorithm. In our experiments, we set τ_g = 10^-2 and τ = 10^-8. The direct solver is the lu function in MATLAB with four outputs under the default setting, which leads to a sparse LU factorization for sparse input matrices. All programs are implemented and executed with MATLAB R2022b. All of the experiments are performed on a server with an Intel(R) Xeon(R) Gold 6226R CPU at 2.90 GHz and 1 TB memory. In performance experiments, we report the single-thread wall time.

§.§ Asymptotically optimal rational filter

First we show the ratio <ref> for the trapezoidal quadrature, the Gauss quadrature, and the optimal ratio in <ref>. The numerical results are illustrated in <ref>. Here, we set a = 1 and b = 1.1. The infimum over I and the supremum over O for the Gauss quadrature are not known in closed form, so we use a discretization of 1000 points in both the real and imaginary directions on [-1.5, 1.5] + [-1.5, 1.5]i to estimate <ref> for the Gauss quadrature.
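A grid-based estimate of this kind can be sketched as follows; for brevity the trapezoidal filter stands in for the Gauss-quadrature filter, whose nodes and weights would simply be substituted into rational_filter, and the grid resolution matches the 1000-point discretization mentioned above.

```python
import numpy as np

def rational_filter(z, poles, weights):
    """Generic rational filter r(z) = sum_i w_i / (p_i - z)."""
    z = np.asarray(z, dtype=complex)
    return sum(w / (p - z) for w, p in zip(weights, poles))

def estimate_ratio(poles, weights, a=1.0, b=1.1, grid=1000, box=1.5):
    """Estimate sup_{z in O} |r(z)| / inf_{z in I} |r(z)| on a uniform grid over [-box, box]^2."""
    x = np.linspace(-box, box, grid)
    Z = x[None, :] + 1j * x[:, None]
    vals = np.abs(rational_filter(Z, poles, weights))
    sup_O = vals[np.abs(Z) >= b].max()
    inf_I = vals[np.abs(Z) <= a].min()
    return sup_O / inf_I

k = 16
sigma = np.exp(1j * (2 * np.arange(k) + 1) * np.pi / k)   # trapezoidal nodes on the unit circle
print(estimate_ratio(sigma, sigma / k))                    # close to 2/((b/a)^k - 1), the trapezoidal closed form
```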
Only even k is adopted, as we perform the Gauss quadrature on the upper semicircle and the lower semicircle separately rather than on the full circle directly. Such a Gauss quadrature discretization preserves the symmetry and performs better than one that breaks the symmetry. From <ref>, we find that the trapezoidal quadrature always performs better than the Gauss quadrature and decays at the same rate as the optimal ratio obtained from the Zolotarev result, whereas the Gauss quadrature decays at a rate different from the optimal one. Now we verify via a toy example that the convergence rate mainly depends on <ref>. On the same region as in <ref>, the spectra of the test matrices always contain the four points at which the infima over I and the suprema over O for the trapezoidal and Gauss quadratures are attained. We place 16 random eigenvalues inside the circle |z| = 1/b and 80 random eigenvalues on the circle |z| = b · 1.01. The eigenvector matrix is X = X_1 + iX_2, where X_1 and X_2 are standard Gaussian random matrices. We adopt n_col = 20, which means the ratio <ref> is exactly <ref>. We take k = 16, 32 and limit the number of subspace iterations t to 50. When the number of poles is 16, the ratios of the trapezoidal quadrature and the Gauss quadrature are 0.5563 and 1.0685, respectively. In this case, the Gauss quadrature fails to filter out all the eigenvalues in O, while the trapezoidal quadrature works. The residual shown is the maximum relative error of the filtered eigenpairs, i.e., the approximate eigenpairs whose relative errors are smaller than τ_g. We remark that the convergence behavior depends on the distribution of eigenvalues. Our analysis in <ref> views the desired spectrum and the undesired spectrum as a disk and the complement of a disk, while the eigenvalues of a matrix are discrete points in these regions. It could be the case that the discrete eigenvalues avoid all bad areas in both the numerator and the denominator of <ref> with the Gauss quadrature and have a small ratio. In such a case, the rational filter with Gauss quadrature could outperform the rational filter with trapezoidal quadrature for some matrices. Without prior knowledge of the distribution of eigenvalues, however, the trapezoidal-quadrature-based filter is a near-optimal choice.

§.§ Composite rule with subspace iteration

The numerical experiment in this section is for <ref>, where we set k_1 = k_2 = 8 and compare the performance against HFEAST (the subspace iteration with a filter with 64 points). The composite rule can achieve the same accuracy and convergence rate in the subspace iteration as the simple rule. The composite rule outperforms the simple rule when factorizations are much more expensive than solves. The class of non-Hermitian generalized eigenvalue problems comes from model order reduction tasks <cit.> in circuit simulation <cit.>. Matrices are constructed based on quasi-two-dimensional square power grids of size n_x × n_x × 10. The non-Hermitian matrix pencil is (G, C), taking the block form G = [ G_11 G_12; G_21 0 ], C = [ C_c 0; 0 L ]. In particular, G_11 represents the conductance matrix as G_11 = L_n_x⊗ I_n_x⊗ I_10 + I_n_x⊗ L_n_x⊗ I_10 + 1/10 I_n_x⊗ I_n_x⊗ L_10, where L_n is a weighted one-dimensional Laplacian matrix of size n × n, L_n = n/100 [ 1 -1; -1 2 -1; ⋱ ⋱ ⋱; -1 2 -1; -1 1 ]_n × n, and I_n is the identity matrix of size n × n. The off-diagonal blocks of G admit G_12 = -G_21^⊤∈ℝ^10 n_x^2 × (20 + 2n_x^2) with entries being ± 1 or zero.
The first 20 columns of G_12 correspond to 20 input ports at the two edges (·, 1, 1) and (·, n_x, 10), where the corresponding rows have a one. The remaining 2n_x^2 columns of G_12 correspond to inductors. We uniformly randomly pick 2n_x^2 interior nodes from the grid nodes and connect each to a neighboring node on the same layer with an inductor. The corresponding part of G_12 is the incidence matrix of the inductor graph. The matrix L is a diagonal matrix of size 20 + 2n_x^2. The first 20 × 20 block of L is zero. The remaining 2n_x^2 × 2n_x^2 block has diagonal entries uniformly randomly sampled from [0.5, 1.5] · n_x · 10^-4, representing the inductances of the inductors. The submatrix C_c represents the capacitors in the circuit. We add a grounded capacitor with capacitance 10^-3 for each node, which means C_c is a diagonal matrix whose elements are all 10^-3. The sparsity patterns of G and C are shown in <ref> for n_x = 10. <Ref> lists detailed information about the matrices used in our numerical experiments as well as their target regions. The last column of <ref> includes the runtime ratio of the matrix factorization to the solving, where the solving runtime is the averaged cost of backward substitutions on a single vector. In all cases, there are 20 eigenvalues in the target regions and we adopt n_col = 24. Reference eigenvalues are calculated with the built-in eigensolver in MATLAB. The stopping criterion for GMRES is 10^-9. Numerical results are reported in <ref>. The italic values therein are estimated numbers, since the simple rule runs out of memory for those settings. The convergence behaviors are illustrated in <ref>. The composite rule establishes a trade-off between the number of matrix factorizations and the number of solves in GMRES. When setting k_2 = 1, the composite rule falls back to the simple rule. <Ref> shows a comparison of the simple rule and the composite rule in two respects: runtime and memory. As shown in the last column of <ref>, the runtime ratio between the factorization and the solving grows as the matrix size increases, which is due to the fact that the matrix factorization is of higher complexity than the solving. Hence, reducing the number of factorizations, as in the composite rule, is beneficial. However, as shown in <ref>, the simple rule outperforms the composite rule in runtime since the solving dominates the total runtime. When the matrix size further increases, we would see the composite rule outperform the simple rule in runtime. Regarding the memory cost, the simple rule costs about k_2 times more than the composite rule. In this example, we find that the simple rule with n_x = 400 already exceeds the memory limit of our computing platform, whereas the composite rule can solve eigenvalue problems with n_x = 400 or even larger. <Ref> shows that the composite rule converges identically to the simple rule. This indicates that both the GMRES and the direct solver achieve sufficiently good accuracy. In most cases we have tested, the subspace iteration converges effectively, i.e., usually in a few iterations.

§.§ Composite rule without subspace iteration

This experiment aims to show that with a large k_2, the composite rule converges without subspace iteration, and the GMRES iteration number does not increase dramatically as k_2 increases. Such an observation means that the strategy of doubling k_2 each time in <ref> is affordable compared to the case with the optimal k_2. Throughout this section, we reuse the matrix pencils in <ref>.
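For reference, a simplified construction of such a power-grid pencil is sketched below. The 1-D Laplacian, the Kronecker structure of G_11, the capacitor and inductance values, and the block layout follow the description above, while the port and inductor wiring in the coupling block is randomized with a single nonzero per column and is not a faithful reproduction of the incidence structure used in the paper.

```python
import numpy as np
import scipy.sparse as sp

def laplacian_1d(n):
    """Weighted 1-D Laplacian L_n = (n/100) * tridiag(-1, 2, -1) with 1 at the two corner entries."""
    main = 2.0 * np.ones(n); main[0] = main[-1] = 1.0
    off = -np.ones(n - 1)
    return (n / 100.0) * sp.diags([off, main, off], [-1, 0, 1], format="csr")

def power_grid_pencil(nx, nz=10, seed=0):
    """Simplified quasi-2-D power-grid pencil (G, C) following the block structure in the text."""
    rng = np.random.default_rng(seed)
    I = lambda n: sp.identity(n, format="csr")
    G11 = sp.kron(sp.kron(laplacian_1d(nx), I(nx)), I(nz)) \
        + sp.kron(sp.kron(I(nx), laplacian_1d(nx)), I(nz)) \
        + 0.1 * sp.kron(sp.kron(I(nx), I(nx)), laplacian_1d(nz))
    n_nodes = nx * nx * nz
    m = 20 + 2 * nx * nx                        # input ports plus inductor branches
    # Coupling block: one nonzero per column as a stand-in for the true +-1 incidence matrix.
    rows = rng.integers(0, n_nodes, size=m)
    G12 = sp.csr_matrix((np.ones(m), (rows, np.arange(m))), shape=(n_nodes, m))
    G = sp.bmat([[G11, G12], [-G12.T, None]], format="csr")
    Cc = 1e-3 * I(n_nodes)                      # grounded capacitors
    L_diag = np.concatenate([np.zeros(20), rng.uniform(0.5, 1.5, 2 * nx * nx) * nx * 1e-4])
    C = sp.block_diag([Cc, sp.diags(L_diag)], format="csr")
    return G, C

G, C = power_grid_pencil(10)
```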
We run three algorithms in this section: the simple rule with k = 8, the composite rule with k_1 = 8 and various choices of fixed k_2 (<ref>), and <ref> with k_1 = 8. Also, various choices of n_col are explored. <Ref> reports the runtime of the simple rule and <ref>, and <ref> illustrates the relative runtime of <ref> for various k_2. The relative runtime of <ref> can be read from <ref> from the first triangle marks at k_2 being a power of two. In <ref>, all three choices of n_col overestimate the actual number of eigenvalues in the region. The simple rule with a fixed k = 8 fails to converge when n_col is not sufficiently large. In contrast, <ref> converges in all scenarios. Based on this experiment and other experiments we tried but did not list in the current paper, the convergence of the simple rule is sensitive to the choice of two hyperparameters, k and n_col, while the convergence of <ref> is not sensitive to the choice of k_1 and n_col. [The requirement for n_col is that n_col is an overestimation of the number of eigenvalues in the region.] In the worst-case scenario, when the given region is surrounded by many unwanted eigenvalues, mildly increasing n_col would not resolve the convergence issue of the simple rule. However, <ref> can still converge robustly. When both the simple rule and <ref> converge, we notice that <ref> outperforms the simple rule for small n_col. When n_col increases, the two methods become comparable in runtime. <Ref> explores the optimal choice of k_2 without subspace iteration, i.e., the first triangle marks on each curve. We find that the optimal k_2 is not necessarily 2^p k_1 as in <ref>. Besides the factorization cost, the dominating computational cost of the composite rule is the multi-shift GMRES iterations, i.e., the number of applications of G in <ref>. Increasing k_2 adds more shifts to the multi-shift GMRES but does not necessarily increase the iteration number, and the extra cost of orthogonalization is negligible compared to that of applying G. For all curves in <ref>, we observe that, after the first triangle marks, the relative runtime mostly stays flat and increases extremely slowly. Hence, even if <ref> does not use the optimal k_2, its runtime is almost the same as that with the optimal k_2. We conclude that <ref> is an efficient and robust eigensolver and is preferable to <ref>. We remark on the hyperparameter choices in <ref>. Given a matrix pencil and a region, an overestimate n_col of the number of eigenvalues is required. If we perform factorizations and solves sequentially, we may need to choose a proper k_1 depending on whether factorizations are more expensive than solves. From the viewpoint of parallel computing, the k_1 factorizations and solves are ideally parallelizable. Hence, we would set k_1 as large as possible to fully use the computational resources and reduce the GMRES iterations.

§ CONCLUSION

This paper finds the optimal separation rational function via the Zolotarev function. The optimal rational function leads to the traditional inverse power method in numerical linear algebra. Discretizing the contour integral with the standard trapezoidal quadrature results in an asymptotically optimal separation rational function. The numerical algorithm based on the trapezoidal quadrature (the simple rule) admits a natural parallelism, while the inverse power method is sequential. Hence, the simple rule benefits more from modern multi-core computer architectures.
Further, we show the composite rule of the trapezoidal quadrature, i.e., R_k_1k_2(z) = R_k_2(T(R_k_1(z))) for R_k(·) being the simple rule of order k and T(·) being a simple Möbius transform.Based on the composite rule, we propose two eigensolvers for the generalized non-Hermitian eigenvalue problems, <ref> and <ref>. Both algorithms adopt direct matrix factorization for the inner rational function evaluation and multi-shift GMRES for the outer rational function. Compared to the simple rule with the same number of poles, both composite-rule-based algorithms reduce the number of factorizations and reduce the memory requirement in solving eigenvalue problems. This is of fundamental importance when matrices are of large scale. The difference between the two composite algorithms is the subspace iteration. In <ref>, both k_1 and k_2 are hyperparameters, and the algorithm adopts the subspace iteration to converge to desired eigenpairs. In contrast, <ref> is designed without subspace iteration. <ref> adopts k_1 as a hyperparameter and gradually increases k_2 until the rational function approximation is accurate enough and the algorithm converges to desired eigenpairs without subspace iteration. As k_2 increases in <ref>, by the property of multi-shift GMRES, the number of GMRES iterations, i.e., the number of applying G, increases very mildly. Hence, compared to the simple rule and <ref>, <ref> is a robust and efficient eigensolver.We demonstrate the efficiency of proposed algorithms via both small-scale and large-scale, synthetic and practical generalized non-Hermitian eigenvalue problems. Numerical results show that <ref> outperforms the simple rule only if the matrix factorization is much more expensive than the solving. The convergence of <Ref> is not sensitive to hyperparameter n_col and k_1. In terms of the runtime, <ref> either outperforms or is comparable to the simple rule. A suggestion for the hyperparameter choices of <ref> is also provided based on both the analysis and numerical results.§ ACKNOWLEDGEMENT This work is supported in part by the National Natural Science Foundation of China (12271109) and Shanghai Pilot Program for Basic Research - Fudan University 21TQ1400100 (22TQ017). § PROOF OF <REF>We can use the equation z = ry + c to transfer the contour discretization on an arbitrary circle into the case of the unit circle around origin. The rational function then admits,R_c,r,k(z)=R_0,1,k(y).Combining with the composite rule <ref>, we haveR_c,r,k(z) = R_0,1,k_1k_2(y) = R_0,1,k_2(T(R_0,1,k_1(y))) = R_0,1,k_2(T(R_c,r,k_1(z))).Now we turn to prove the summation form. We use the convention R_k(z) = R_0,1,k(z) that corresponds to the illustration in the body. When k_2 is even, there is σ_i^(k_2)≠ -1. With <ref>, the summation form is, R_c,r,k_1k_2(z)=R_k_2(T(R_c,r,k_1(y))) = 1/k_2∑_i=1^k_2σ_i^(k_2)/σ_i^(k_2) - 1 -R_c,r,k_1(y)/R_c,r,k_1(y)=1/k_2∑_i = 1^k_2σ_i^(k_2) R_c,r,k_1(y)/(1+σ_i^(k_2))R_c,r,k_1(y)-1=1/k_2∑_i = 1^k_2σ_i^(k_2)/1+σ_i^(k_2) (R_c, r, k_1(z) - 1/1 + σ_i^(k_2))^-1 R_c, r, k_1(x)=∑_i = 1^k_2 c_i(s_i^(k_2) - R_c, r, k_1(z))^-1 R_c, r, k_1(z),wherec_i^(k_2) = -1/k_2σ_i^(k_2)/1+σ_i^(k_2),s_i^(k_2) = 1/1+σ_i^(k_2).When k_2 is odd, the term associated with σ_k_2^(k_2)=-1 in summation form is equal to 1/k_1R_k_1. § PROOF OF <REF> By <ref>, we knowR_c,r,k_1(p_i^(k)) = R_0,1,k_1(σ_i^(k)) = 1/1+(σ_i^(k))^k_1 = 1/1+σ_j^(k_2) = s_j^(k_2). § GMRES ITERATION NUMBER As we mentioned in <ref>, the multi-shift GMRES will converge faster as the subspace iteration converges. 
<Ref> reports the number of solves in both the simple and the composite rules together with the GMRES iteration numbers. The normalized last column of <ref> is visualized in <ref>. <Ref> shows that the number of solves in each subspace iteration stays constant for the simple rule, whereas it decreases for the composite rule. Notice that n_iter decays much more slowly than the number of solves in the composite rule. We make two remarks. First, different shifts converge in different numbers of iterations, and n_iter is the maximum number of GMRES iterations among all shifts. Second, different columns converge to eigenvectors at different rates, and the n_iter shown here is the maximum number of iterations among all columns. The difference in the decays is mainly due to the second point.
http://arxiv.org/abs/2310.18043v1
{ "authors": [ "Yuer Chen", "Yingzhou Li" ], "categories": [ "math.NA", "cs.NA", "65F15" ], "primary_category": "math.NA", "published": "20231027104107", "title": "Interior Eigensolver Based on Rational Filter with Composite rule" }
From Transcripts to Insights:Uncovering Corporate Risks Using Generative AI Alex G. KimThe University of Chicago, Booth School of Business, [email protected] Maximilian MuhnThe University of Chicago, Booth School of Business, [email protected] Valeri V. NikolaevThe University of Chicago, Booth School of Business, [email protected] First Draft: October 5, 2023 ==========================================================================================================================================================================================================================================================================================================We thank Saketh Aleti, Laurence van Lent, Yin Luo, Yinan Su, and workshop participants at the 7th Annual Global Quantitative and Macro Investment Conference by Wolfe Research for helpful comments. Irene Tan and Yijing Zhang provided excellent research assistance. This research is funded by the Fama-Miller Center for Research in Finance and the Stevens Doctoral Program at the University of Chicago Booth School of Business. Financial Support from the University of Chicago Booth School of Business is gratefully acknowledged.We explore the value of generative AI tools, such as ChatGPT, in helping investors uncover dimensions of corporate risk. We develop and validate firm-level measures of risk exposure to political, climate, and AI-related risks. Using the GPT 3.5 model to generate risk summaries and assessments from the context provided by earnings call transcripts, we show that GPT-based measures possess significant information content and outperform the existing risk measures in predicting (abnormal) firm-level volatility and firms' choices such as investment and innovation. Importantly, information in risk assessments dominates that in risk summaries, establishing the value of general AI knowledge. We also find that generative AI is effective at detecting emerging risks, such as AI risk, which has soared in recent quarters. Our measures perform well both within and outside the GPT's training window and are priced in equity markets. Taken together, an AI-based approach to risk measurement provides useful insights to users of corporate disclosures at a low cost.Keywords: GPT, ChatGPT, large language models, generative AI, risk information, firm-level risk exposure, conference call, political risk, AI risk, climate change riskJEL Codes: C45, D81, G12, G30, G32, M41empty fancy Uncovering Corporate Risks Using Generative AI § INTRODUCTION In the era of global political instability, climate uncertainty, and rapid technological change, corporations face multifaceted risks that extend far beyond traditional financial metrics. Among these are the emergent and swiftly evolving spheres of regulatory, environmental, and AI-related risks, each of which carries substantial implications for long-term growth and stakeholder value. This study aims to bridge the gap between generative AI technology and risk assessment methodologies by examining the potential of large language models (LLMs) to detect and analyze these critical aspects of corporate risk. By leveraging recent advances in language modeling, we seek to understand the capabilities of AI in navigating the complex corporate risk landscape and, ultimately, helping stakeholders to make more informed decisions in the face of growing uncertainties.The evaluation of firm risks through textual analysis of corporate disclosures received substantial attention in recent literature <cit.>. 
A distinctive feature of these studies is the utilization of dictionary-based bigram (n-gram) frequencies to quantify various risk types.[This approach counts the presence of risk-related words mentioned in the vicinity of risk-topic-specific bigrams from a pre-constructed dictionary (e.g., the algorithm searches for instances where the bigram “economic policy" is used along with the word “risk").] This literature has laid important groundwork that improves our understanding of corporate risks. However, the recent developments in AI technology provide a useful opportunity to delve deeper into textual data and extract a richer and more nuanced understanding of corporate risks. The new generation of language models is capable of understanding complex relationships within a text, incorporating the context within which relevant topics are discussed, and even making inferences. These aspects are critical for a comprehensive analysis of complex corporate risks. Furthermore, rapidly evolving changes in the political, environmental, and technological landscape in recent years quickly render existing dictionaries outdated or incomplete.Two additional features make generative language models particularly attractive in the analysis of corporate risks. First, their general nature allows them to go beyond the context of a given text. Unlike traditional methods that analyze risks based on a single document, such as a conference call transcript, generative language models are trained on vast corpora that enable them to leverage general knowledge acquired from similar documents or documents featuring related topics. This “general AI" feature holds promise in improving the measurement of corporate risks because companies need not explicitly discuss risks in their disclosures.[For example, political risk is implicitly implied if an executive states that a firm is subject to a new regulation.] Second, an important advantage of the new generation of language models is that they synthesize the extracted information into coherent, understandable narratives, thus providing not only a quantitative assessment but also an explanation to support it.To illustrate the usefulness of the AI-based approach, consider the example of SK Telecom Inc., a Korean cell phone service provider cross-listed and operating in the US. Figure [fig1]1 illustrates a significant wedge between the bigram-based measure of political risk, which implies no risk in 2018, and our GPT-based measure, which places the company towards the top of risk distribution. What is causing this difference? Around 2018, an active discussion took place worldwide regarding regulating the bundling of phone handset contracts and telecom services.[Indeed, Japan banned the bundling in 2019. See these two articles: https://www.mobileworldlive.com/asia/asia-news/japan-steps-up-efforts-to-ban-device-subsidies/(link 1) and https://www.mobileworldlive.com/asia/asia-news/japan-bans-bundled-mobile-offerings/(link 2). Such bundling is considered detrimental for consumers since it makes pricing opaque <cit.>.] During SK Telecom's 2018 earnings call, analysts actively asked questions concerning the bundled sales. The transcript, however, does not contain an explicit discussion of political or regulatory risks, and hence, these risks are not captured by bigrams. In contrast, GPT identifies "the separation of handsets and telecom services" as a potential source of regulatory uncertainties and provides a rich explanation of this issue (see Appendix A). 
The example illustrates that LLMs can not only connect diverse pieces of information in a given text but also make judgments by interacting the context with the model's general knowledge. Despite this potential, LLMs' ability to evaluate firm-level risks is yet to be understood. In this study, we address this question by developing firm-level risk exposure measures using OpenAI's GPT3.5-Turbo LLM. In particular, we extract pertinent risk-related information from earnings conference call transcripts spanning January 2018 to March 2023. We then evaluate how GPT-based risk exposure measures compare to the existing measures in predicting stock market volatility and related economic outcomes.We focus on the sources of corporate risks that are of the highest significance to firms' stakeholders: political risk <cit.>, climate-related risk <cit.>, and AI-related risk. Because AI risk is a new phenomenon, it enables us to probe the ability of language models to assess emerging risks. For each of the three measures, we induce additional variation by generating two types of output: (1) risk summaries and (2) risk assessments, as discussed next. For risk summaries, we specifically instruct GPT to focus solely on the document contents and avoid making judgments. These instructions minimize GPT's reliance on its general knowledge. The risk summaries are thus human-readable reorganizations of risk-related discussions in conference call transcripts. In contrast, risk assessments utilize the unique ability of LLMs to integrate the documents' context with their general knowledge and make judgments. In this task, we develop a prompt that instructs GPT to generate an assessment of a given risk, which is not limited to the information included in the given transcript. There are a number of possible approaches to converting readable summaries into quantitative risk exposures. One could, for example, rank the summaries based on their contents or train a model to construct risk scores predictive of future uncertainty. While such approaches are likely to add value, we follow a simple approach that computes the ratio of the length of risk summaries (assessments) to the length of the entire transcript. Higher ratios are interpreted as higher risk exposure.Our measures show intuitive variation. The tobacco industry exhibits the highest political risk, coal mining exhibits the highest climate risk, and business services exhibit the highest AI risk exposures. The aggregate time series of our political and climate risks covary positively with the corresponding bigram-based exposure measures, culminating in the first quarter of 2020.[This spike is likely to be attributable to the presidential election of the US and the outbreak of COVID-19. As the Biden administration is known to put more emphasis on global environmental issues than the previous administration, we see an elevated level of climate change risk after the first quarter of 2020. Furthermore, in line with the Russia-Ukraine war, we also observe an increase in political risk (and also climate change risk) in the first quarter of 2022.]^,[For the AI-related risk exposure measures, there is no corresponding bigram-based measure.] AI-related risks exhibit a different pattern. Instead of an uptick around 2020, we observe a sharp increase in the most recent years, consistent with the emergence of AI technologies. 
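A minimal sketch of this length-ratio construction is given below, assuming the GPT-generated risk summaries and assessments are already in hand; measuring length by whitespace-delimited word counts is an illustrative choice rather than a detail taken from the passage above.

```python
def risk_exposure(risk_text: str, transcript: str) -> float:
    """Share of the call devoted to a given risk: words in the GPT risk text over words in the transcript."""
    n_total = len(transcript.split())
    return len(risk_text.split()) / n_total if n_total else 0.0

# Toy illustration with stand-in strings; in practice these are the concatenated GPT outputs
# and the full earnings-call transcript for one firm-quarter.
transcript = "Operator: Good morning ... (full call text) ..."
political_summary = "The company discussed pending regulation that may affect pricing ..."
political_assessment = "Beyond the issues raised on the call, the firm likely faces regulatory uncertainty ..."

exposure_summary = risk_exposure(political_summary, transcript)        # summary-based measure
exposure_assessment = risk_exposure(political_assessment, transcript)  # assessment-based measure
```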
Variance decomposition analysis shows that time- and industry-time-invariant characteristics can only explain 10-15% of the total variation across the three risk measures. In line with <cit.>, about 90% of variance in political risk is firm-specific. This number goes down somewhat (is similar) for climate risk (AI risk) proxies. We further decompose firm-level residual variation and find that firm-invariant factors explain only 20-30%. Thus, time-varying firm-specific components account for the bulk of the variation in firm-specific exposures to political, climate, and AI risks.Our main analysis uses a market-based approach to evaluate whether GPT-based proxies are effective at measuring firm-level risks. Specifically, we examine whether risk exposure measures explain future stock price volatility <cit.>. We use two forward-looking firm-level volatility metrics: implied volatility derived from option prices <cit.>, and abnormal realized volatility that builds on <cit.>. We begin by examining political and climate risk exposures as they received the most attention in prior literature. Focusing on the main sample period from 2018 to 2021, we show a strong and robust positive relation between the GPT-based risk exposure measures and stock price volatility. Across different fixed effects structures, GPT-based political and climate risk measures exhibit positive associations with both of our volatility proxies. We also consistently find evidence that GPT-based proxies are more informative compared to the bigram-based proxies in explaining stock price volatility for these two types of risks, indicating the significant value derived from the new technology. In particular, GPT-based measures subsume information in bigram-based measures. More importantly, when comparing our GPT-based measures against each other, risk assessments perform better than risk summaries both in the case of political and climate risks. This result implies that AI-generated insights are useful in uncovering corporate risks. We also examine our risk proxies using the time period unseen by the language model during its training phase (pure out-of-sample period). Since GPT3.5 is trained on texts that precede September 2021, we focus on a sample of transcripts from January 2022 until March 2023 for additional tests. Even within this limited sample, we find strong and robust evidence that GPT-based risk exposure measures are positively associated with volatility variables, suggesting that our results are unaffected by GPT's possible ex-post knowledge. Turning to the analysis of AI-related risks, based on our main sample, we find limited evidence that AI risk is predictive of stock market volatility. It shows up only in one of the two models featuring risk assessment-based proxies. This result is not unexpected, given the recency of AI disruptions <cit.>. Indeed, we show that the AI-related risk proxy becomes significant in explaining volatility in the most recent two years.Having established the validity of the GPT-based risk proxies, we turn our attention to firms' actions predicted to change as a result of each risk exposure. First, we investigate whether our risk measures can explain capital investments.[We use recursive capital expenditure following <cit.> and find that climate change risk exposure and political risk exposure are associated with lower investments.] In theory, riskier companies experience higher financing costs and value the option of waiting <cit.>. 
Thus, ceteris paribus, they are less likely to make investments. This force is expected to be pronounced for political and environmental risks. However, for technology-related risks, the effect is less clear because addressing AI challenges requires significant investments in new technology. Indeed, the political and (less so) climate-risk exposures exhibit negative associations with investment, whereas AI risk exhibits a positive albeit insignificant relation. Furthermore, the positive effect of AI risk becomes significant during the 2022-2023 period.We find that firms further adjust their behavior in response to specific risks they are facing. They increase (1) lobbying activity in response to political risk, (2) green patent filings in response to climate risk, and (3) AI-related patent filings in response to AI risk. These findings continue to hold in the period outside of GPT's training window.Our additional analysis indicates that while political and climate risks vary up and down in their importance over our sample period, which includes the Covid pandemic, the importance of AI-related risk has been steadily growing. Finally, we show that environmental and AI risks command significant equity risk premia when assessed based on the traditional asset pricing methodology. We make the following contributions to the literature. First, we probe the economic usefulness of AI-powered large language models in risk assessment. Although generative LLMs have much potential for assisting investors in analyzing complex, unstructured information, their economic usefulness in risk assessment and risk management is yet to be understood. We contribute to a nascent and actively developing body of work on the value of LLMs <cit.>, by showing that AI tools are effective at distilling disclosures to extract information about diverse risk categories.[Generative AI tools are effective at generating informative summaries of corporate disclosures <cit.>. They can also assess corporate policies <cit.>, innovation success <cit.>, and job substitutability <cit.>.]Second, we contribute to the recent literature that uses corporate disclosures to construct firm-level measures of risk exposure: political risk <cit.>, country risk <cit.>, climate risk <cit.>, inflation risk <cit.>, and pandemic risk <cit.>. We complement and build on this influential work by adopting AI-based technology to analyze risks. In contrast to existing studies that rely on topic-based bigram dictionaries, LLMs' are trained to understand the deeper context in which bigrams are encountered.[<cit.> employ a deep-learning approach to classify the topic of each sentence and measure firm-level inflation exposure.] Indeed, we document that GPT-based measures are more informative than bigram-based measures and generally subsume their information content. Last but not least, we contribute by establishing the value of general AI for understanding complex topics like risk. We show that LLMs successfully leverage their general knowledge to derive insights about corporate risks from a given context. These insights go beyond the information discussed in the processed document. The AI-knowledge-based evaluations of risks, unrestricted to the document context, are incrementally informative and generally outperform summaries that are based on localized knowledge. § LLMS' THEORETICAL USEFULNESS IN MEASURING RISK EXPOSUREThe recent emergence of Large Language Models (LLMs) has fundamentally transformed our ability to understand and generate text. 
AI tools that rely on these models, such as ChatGPT, have demonstrated exceptional abilities across various domains, from natural language processing to content creation. In this section, we discuss the foundations that make LLMs particularly well-suited for the analysis of multifaceted corporate risks. Leveraging the vast knowledge embedded within them, these models can be used to uncover valuable insights into the intricate corporate risk landscape. Indeed, corporate risk exposures are often subtly implied in conference call discussions rather than explicitly stated. Evaluating these exposures requires bridging the information found in call transcripts with users' prior knowledge, and LLMs have the potential to be a transformative force in this task. We delve into why LLMs outshine traditional linguistic approaches and offer unique advantages for extracting corporate risk insights from unstructured text data.

§.§ How does an LLM encode textual information?
Generative AI models, such as GPT by OpenAI and LLaMa by Meta, are deep neural networks trained on a large corpus of text data with the purpose of predicting the next word in a sentence within a larger text context. These models make use of word embeddings (high-dimensional word vectors that encode word meaning) and rely on the revolutionary Transformer architecture <cit.>, trained to transform the meaning of words depending on their context (e.g., position in the sentence). The Transformer employs a so-called attention mechanism, which directs the model's focus to the most relevant words. For example, sentences that appear early in the call may be relevant when encoding the meaning of words later on. This ability to recognize related words is instrumental in analyzing sparsely distributed (risk-related) information in a long text sequence.[The Transformer consists of multiple layers stacked on top of each other. As information progresses through these layers, the model refines its understanding of the content, allowing for more complex and nuanced interpretations.] When a user submits a query requesting a summary or an assessment of risk exposure, LLMs choose which words, and consequently which sentences, are most relevant to the query without confining the analysis to the vicinity of specific words; the entire text is encompassed. In contrast, dictionary-based algorithms focus on specific bigrams and typically employ a relatively narrow "attention" window to determine their relevance. For example, bigrams that encode political or regulatory topics (e.g., environmental regulation) are counted only as long as they are within a ten-word window of the word "risk." Furthermore, many companies' executives may avoid the explicit mention of "risk" or related words while, at the same time, a mention of, for example, environmental regulation or similar topics implies an important form of uncertainty for the company.

§.§ The Value of General Knowledge
Besides a context-based interpretation of the (risk-related) text, a key feature that distinguishes LLMs is their extensive general knowledge, which facilitates the model's logical reasoning. Because the models are pre-trained on vast amounts of textual data and feature billions of parameters, in addition to learning language structure <cit.>, they also acquire massive general knowledge <cit.> and reasoning abilities <cit.>.
This general knowledge forms a "prior" that the model uses when presented with a new text (context), and it affects how the model interprets the new data. Fundamentally, the presence of extensive general knowledge allows the model not only to effectively summarize risk-related content but also to make its own assessment or judgment of risks based on the context provided by the conference call transcript. Such an assessment is thus a blend of the model's knowledge and the context provided by the document. The Transformer architecture enables this integration of knowledge at every step (layer), allowing the model to generate coherent and contextually relevant insights.

§ METHODOLOGY AND IMPLEMENTATION
Earnings calls are an attractive source of new information about corporate risks because they contain an informational exchange between the demand side (analysts representing stakeholders) and the supply side (executives). This choice follows a number of prior studies measuring firm-level risk exposures based on earnings call transcripts <cit.>. In contrast to these studies, we employ AI techniques to uncover risk exposures. Specifically, we use OpenAI's GPT3.5-Turbo LLM to analyze the transcripts. A visual representation of our GPT processing pipeline is provided in Figure 2.

GPT limits combined (input and output) tokens to 4,000, whereas call transcripts are 7,000 tokens on average. Thus, we chunk transcripts into several parts, which is a common practice in the literature <cit.>. Chunking can improve the quality of GPT's output because the model struggles to generate detailed summaries when processing long documents, whereas it produces summaries on par with humans when processing shorter essays <cit.>. Chunking also significantly reduces the computational burden of calculating self-attention scores, which grows quadratically with document length.[For example, consider two documents A and B of similar complexity, where A has 10 tokens and B has 100 tokens. When calculating positional encodings and self-attention, the model has to consider 100 (=10^2) pairwise relations for document A but 10,000 (=100^2) pairwise relations for document B.] To ensure the model's flexibility, we allocate 2,000 tokens for the input text and the remainder for the output text. To maximize chunk quality, we divide the transcript into presentation and Q&A sections. In the presentation part, we avoid splitting a speech by the same executive into different chunks. Similarly, we do not split the Q&A in the middle of an answer to an analyst's question.[When a question from an analyst prompts responses from multiple executives, we consolidate these answers into a single chunk to maintain coherence. Very occasionally, chunks may exceed the assigned input token limit. In such cases, we further divide the chunk into smaller chunks. Further chunking accounts for less than 3% of the total GPT processing.] We concatenate the model's output at the call level (pooling across chunks).[We do not apply a summarization layer when concatenating the output texts, although this is an option, since our purpose is to measure the degree of risk exposure. Even though some content is repeated in different chunks, we interpret this as the topic (risk) being important for the firm. Similarly, bigram-based measures count the total number of occurrences rather than isolating the count of unique mentions.]
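To make the chunking step concrete, the following sketch shows one way speaker turns could be packed into chunks under a fixed token budget. It is a minimal sketch, assuming a greedy packing rule and a whitespace token counter as a stand-in for a real tokenizer; the function names and example turns are illustrative and do not represent our production code.

```python
# Minimal sketch of transcript chunking under a token budget (illustrative only).

def count_tokens(text: str) -> int:
    # Whitespace word count as a rough stand-in for a model tokenizer.
    return len(text.split())

def chunk_turns(turns, budget=2000):
    """Greedily pack consecutive speaker turns into chunks of at most
    `budget` tokens, never splitting a single turn across chunks."""
    chunks, current, current_len = [], [], 0
    for turn in turns:
        n = count_tokens(turn)
        if current and current_len + n > budget:
            chunks.append("\n".join(current))
            current, current_len = [], 0
        current.append(turn)
        current_len += n
    if current:
        chunks.append("\n".join(current))
    return chunks

# Presentation and Q&A sections are chunked separately.
presentation_chunks = chunk_turns([
    "CEO: Thank you for joining us today...",
    "CFO: Turning to the quarterly numbers...",
])
qa_chunks = chunk_turns([
    "Analyst: Could you comment on regulatory headwinds?",
    "CEO: Sure. We expect the new rules to...",
])
```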
In the absence of risk-related information, we instruct GPT to print "NA" and subsequently purge NAs from the output.[Doing so ensures zero values for the exposure measures when calls have no risk-related information.] We design separate prompts for risk assessments and risk summaries. For summaries, we instruct the model to ignore external information sources. For assessments, in contrast, the model is instructed to make judgments accompanied by narrative reasoning. In both cases, we provide the model with context specifying that the input text is excerpted from an earnings call transcript. The prompt also provides an explanation of each risk and a list of sample questions that are relevant for understanding risk exposures.

The risk exposures corresponding to summaries, RiskSum, and assessments, RiskAssess, are constructed as follows:

RiskSum_it = ∑_{l=1}^{K_it} len(S(c_it^l)) / len(c_it),
RiskAssess_it = ∑_{l=1}^{K_it} len(A(c_it^l)) / len(c_it),

where c_it is the earnings call transcript of company i in quarter t, divided into K_it chunks c_it^1, c_it^2, ⋯, c_it^K_it. S(·) is a GPT-based function that generates risk summaries, and A(·) is the corresponding risk assessment function. len(·) measures the number of words in a given text.[Note that len(c_it) might not equal ∑_{l=1}^{K_it} len(c_it^l) since we drop several chunks, such as operator instructions.] We set the temperature parameter for the text generator to zero and do not restrict the maximum output length.[A high temperature might generate creative yet less replicable answers. While low temperature values are generally appropriate, we set the temperature to zero (its minimum value) to keep the summaries as close as possible to the actual content of the transcript.]

Our prompts are designed to capture three different types of RiskSum and RiskAssess. Specifically, we measure (i) a political risk summary, PRiskSum, and assessment, PRiskAssess, (ii) a climate risk summary, CRiskSum, and assessment, CRiskAssess, and (iii) an AI risk summary, AIRiskSum, and assessment, AIRiskAssess. For political risk, GPT's output generally includes explanations of political or regulatory uncertainties, e.g., whether the company is likely to be affected by a new regulation. For climate risk, GPT's answers typically encompass how the company's operations might be impacted by extreme weather or environmental policy changes. Finally, for AI-related risk, GPT often discusses how a firm's primary operations might be replaced or assisted by AI, as well as whether the company's business is dependent on AI technologies.
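As a worked illustration of the two formulas above, the sketch below computes an exposure score from per-chunk GPT output. The GPT call itself is abstracted away, the toy chunk outputs are invented, and all names are assumptions; only the pooling and scaling logic follows the definitions of RiskSum and RiskAssess.

```python
# Illustrative computation of RiskSum / RiskAssess from per-chunk GPT output.

def word_count(text: str) -> int:
    return len(text.split())

def risk_exposure(chunk_outputs, transcript_word_count):
    """Sum of output lengths across chunks, after purging "NA" chunks,
    scaled by the length of the underlying transcript."""
    kept = [out for out in chunk_outputs if out.strip() != "NA"]
    return sum(word_count(out) for out in kept) / transcript_word_count

# Toy example: two chunks, one with risk content and one flagged "NA".
chunk_summaries = [
    "The firm notes uncertainty around upcoming environmental regulation.",
    "NA",
]
risk_sum = risk_exposure(chunk_summaries, transcript_word_count=5000)
# The same function applied to assessment outputs yields RiskAssess.
```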
Our data identify portions of the transcript attributable to a given speaker, allowing us to distinguish between presentation and discussion sessions and between questions by different analysts.

As our analysis focuses on capital market outcomes, we restrict our sample to publicly traded companies. We exclude very short calls and calls without a discussion session <cit.>.[Such calls account for 1.4% of the total transcript sample. We also drop operator instructions and chunks shorter than 50 tokens, as they are more likely to be greetings or irrelevant conversations.] We exclude calls that are conducted in languages other than English, including those that are machine-translated into English. After applying these filters, our final sample consists of 69,969 transcripts from 4,983 distinct firms. We further split the sample into two periods: our baseline tests use calls from January 2018 to December 2021, and our post-GPT-training sample includes calls from January 2022 to March 2023.[Our benchmarks are bigram-based risk exposure measures from <cit.> and <cit.>. Their sample of released risk exposure measures ends in early 2022, and we match our sample period to theirs.]

§.§ Capital Market Variables
Our primary validity test for the firm-level risk measures is based on the association between these risk measures and stock price volatility <cit.>. Specifically, we use two forward-looking volatility proxies: implied volatility and abnormal volatility, discussed next. Stock return data are sourced from the Center for Research in Security Prices (CRSP), and market returns and risk-free rates are from Ken French's website.

Implied Volatility. We obtain firm-quarter implied volatility from OptionMetrics, following <cit.> and <cit.>. OptionMetrics calculates implied volatility based on the Black-Scholes model for European options and the Cox-Ross-Rubinstein model for American options. We use the implied volatility derived from 90-day at-the-money options measured as of the end of each fiscal quarter.

Abnormal Volatility. Since implied volatility hinges on option models' assumptions, may not reflect the most recent risk-related information, and is limited to stocks with actively traded options, we also use realized abnormal volatility. Following <cit.>, we measure volatility as the root mean squared error (RMSE) of the market model residuals. Abnormal volatility is the ratio of post-conference call RMSE to pre-call RMSE, calculated as follows. Post-call RMSE is from a market model estimated over the period starting six and ending 28 trading days after the conference call. Specifically, the market model is r_it = β_0 + β_1 r_mt + ε_it, where r_it is the daily stock return for company i on day t and r_mt is the market return on day t. Similarly, using returns data from -257 to -6 days relative to the conference call date, we estimate the pre-conference call RMSE.[We require at least ten valid observations to estimate the post-conference RMSE and at least 60 valid observations for the pre-conference RMSE. Additionally, in untabulated analyses, we use alternative estimation windows for the post-conference call RMSE estimation and obtain quantitatively and qualitatively similar results.] We then take the ratio of the two RMSE values to calculate abnormal volatility:

Abnormal Volatility = Post-Conference RMSE / Pre-Conference RMSE − 1.

§.§ Economic Variables
Besides focusing on market-based outcomes, we also study firms' responses to risks by examining their real activities, discussed next.
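Before turning to these real-activity variables, the sketch below illustrates the abnormal volatility construction from Section 4.2. The event windows follow the text; the synthetic returns, the OLS market-model fit, and all names are illustrative assumptions rather than our exact code.

```python
# Illustrative abnormal volatility: ratio of post- to pre-call market-model RMSE, minus one.
import numpy as np

def market_model_rmse(r_firm, r_mkt):
    """RMSE of residuals from r_it = b0 + b1 * r_mt + e_it, fit by OLS."""
    X = np.column_stack([np.ones_like(r_mkt), r_mkt])
    beta, *_ = np.linalg.lstsq(X, r_firm, rcond=None)
    resid = r_firm - X @ beta
    return float(np.sqrt(np.mean(resid ** 2)))

rng = np.random.default_rng(0)
r_mkt_pre = rng.normal(0.0, 0.01, 252)    # trading days -257 to -6
r_mkt_post = rng.normal(0.0, 0.01, 23)    # trading days +6 to +28
r_firm_pre = 0.8 * r_mkt_pre + rng.normal(0.0, 0.02, 252)
r_firm_post = 0.8 * r_mkt_post + rng.normal(0.0, 0.03, 23)

abnormal_volatility = (market_model_rmse(r_firm_post, r_mkt_post)
                       / market_model_rmse(r_firm_pre, r_mkt_pre) - 1)
```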
Investments. We follow <cit.> to construct a capital investment measure as the ratio of current quarter capital expenditures (CapEx_t) to the recursively updated cumulative capital stock as of the previous quarter end (K_t-1). For the initial period in our sample, K_1 is set to the quarter-end value of property, plant, and equipment from the Compustat Quarterly database. Subsequently, capital is recursively updated as follows: K_t = K_t-1 × (1 - δ) × (1 + ρ_t) + CapEx_t, where δ is the depreciation rate, assumed to be 10%, and ρ_t is the inflation rate measured by the change in the monthly Producer Price Index available from Federal Reserve Economic Data (FRED). Accordingly, capital investment in period t is measured as CapEx_t/K_t-1.

Lobbying Activity. The Lobbying Disclosure Act of 1995 mandates that each lobbying firm report its lobbying expenditures and their recipients. We obtain lobbying data from the Center for Responsive Politics, which maintains an archive of lobbying records (as long as lobbying amounts exceed the mandated disclosure threshold). We fuzzy match the names disclosed in lobbying reports to firm names on Compustat. Because lobbying amounts are highly skewed, we use a lobbying indicator, 1($ Lobby Amount>0), which takes the value of one when a firm engages in any lobbying, and zero otherwise.

Green Patents and AI-Related Patents. We obtain patent data from the United States Patent and Trademark Office (USPTO). We then match the assignee's firm name to Compustat following the approach in <cit.>. We use patent filings to measure a firm's patenting activity during a given quarter. Following <cit.>, we use International Patent Classification (IPC) codes to classify patents related to green technology. Similarly, following <cit.>, we identify IPC codes related to AI technology.[The IPC codes that we use to classify green and AI-related patents are available upon request.] As with the lobbying amounts, the distribution of patent filings is heavily skewed (the 90th percentile is one filing per quarter). Therefore, we use an indicator variable, 1(# Green Patent>0), that takes the value of one when a firm files at least one green patent in the following quarter and zero otherwise.[For the patent and lobbying variables, unmatched firm-quarter observations are assumed to be zero. Nonetheless, we find that our results are robust to excluding unmatched observations or zero values (untabulated).] The variable 1(# AI Patent>0) is defined similarly.

§ DESCRIPTIVE STATISTICS
To provide preliminary evidence that GPT-based risk exposure measures capture risks in the intended ways, we start by examining descriptive statistics. We explore cross-industry variation in our risk scores and compare the time trends in our measures with those of the bigram-based measures. We also investigate the face validity of our measures.

§.§ Descriptive Statistics and Industry-Level Averages
In Table 1, Panel A, we report descriptive statistics for the risk measures as well as for the variables used in our regression analyses. We observe that risk exposure assessments are more than twice as long as risk exposure summaries. Consistent with <cit.> and <cit.>, for most firms the levels of risk exposure are relatively close to zero. In particular, the median values of PRiskAssess, CRiskAssess, and AIRiskAssess are 0.011, 0.001, and 0.000, respectively. However, there are also observations with large values.
This result implies that in the majority of earnings calls, some risk information is disclosed even though it does not account for a large portion of the call. In addition, political risk is the most commonly mentioned risk, followed by climate risk.

In Panel B, we present Pearson correlations among the risk measures, including the bigram-based political and climate risk exposures. As expected, bigram-based measures and GPT-based measures are positively correlated, although the correlation coefficients are relatively low (ranging from 0.116 to 0.147). Our corresponding assessment and summary measures are highly correlated, with correlation coefficients of 0.742 (political risk), 0.66 (climate risk), and 0.512 (AI risk). The political and climate risk measures are also correlated, with coefficients of 0.349 for summaries and 0.483 for assessments. These moderate correlations are expected because environmental and regulatory risks are inherently intertwined. The positive correlation is potentially further exacerbated by our sample period, during which a number of politically and environmentally significant events happened to coincide.

We verify that the political and environmental risk measures are not capturing the same content.[In Section 7.3, we conduct additional placebo tests and show that both measures capture a distinct risk dimension.] First, we estimate the correlation between the two assessment measures after demeaning them by the quarter average. The correlation decreases considerably, from 0.483 (overall) to 0.351 (within a quarter). Second, we present word clouds in Figure 5, which display very different keyword patterns for each risk type. Third, for documents with non-zero risk exposure, we estimate the pairwise cosine similarity between the two risk assessments. Their average similarity is only 0.421.[As a benchmark, the cosine similarity between two consecutive MD&A reports is 0.8 to 0.9 <cit.>. Also, for firms in the same industry, the pairwise similarity is, on average, 0.55 <cit.>.] Lastly, we manually check the contents of a random sample of generated summaries and assessments and verify that the two risk categories deal with substantially different topics.

We further validate our measures by examining the two-digit SIC industries with the highest risk assessment scores (Figure 3). For political risk, the high-risk industries include, for example, tobacco products and heavy construction. For climate risk, the high-risk sectors include coal mining, electricity and gas, textiles, and paper products. In line with the observed positive correlations between the two risk topics, there is some (expected) overlap in the high-risk industries (e.g., textile mill products, electricity & gas). Lastly, for AI risk, the composition of the highest-risk industries is quite different: business services, engineering services, and electronic equipment are among the highest-ranked industries. We provide a heat map of one-digit SIC industry risk exposure in the Online Appendix, Figure 1. The results are consistent with those in Figure 2 and Figure 3.

§.§ Time Series Variation
We next explore the time series properties of our risk measures, which are depicted in Figure 4. Panel A presents political risk exposure based on GPT summaries and assessments, as well as a bigram-based measure based on <cit.>. GPT-based and bigram-based measures exhibit similar time trends.
Irrespective of the metric, political risk spikes in 2020, the year of the COVID-19 pandemic and the US Presidential Election. We also see another increase in political risk in 2022, when the Russia-Ukraine war broke out. Thus, GPT-based measures appear to reflect notable political events in a timely and intuitive manner and are aligned with bigram-based measures.

Panel B of Figure 4 plots GPT-based and bigram-based climate risk exposure measures. All three climate risk measures co-move over time, although they are not as closely aligned as the political risk measures. We observe spikes in 2020, the first quarter of 2021, and the first quarter of 2022. The 2020 spike follows the UN Climate Action Summit[During which climate activist Greta Thunberg delivered her widely resonating speech accusing world leaders of failing to tackle climate change.] and coincides with the outbreak of COVID-19, which introduced uncertainty regarding climate action. The first quarter of 2021 was impacted by the Texas power crisis, in which a severe winter storm brought to light the vulnerability of the electricity supply to extreme weather events. Again, the observable time trends in our GPT measures line up with notable environmental events.

Lastly, in Panel C, we plot AI-related risk summaries and assessments. Because no public bigram-based AI risk measure is available from prior studies, we do not have a natural benchmark. AI risk exposure exhibits an entirely different time trend than the two preceding risks. We observe a steady increase over time, reflecting the heightened significance of AI technologies. A notable increase took place between the third quarter of 2019 and the second quarter of 2020. This period coincides with the release of the Transformer-based BERT (Google) and GPT-2 (OpenAI) models, which have since been widely adopted by companies. The highest increase in AI risk occurred in the first quarter of 2023, immediately following the release of GPT3.5, which went viral around the world.[See this article published in Fortune for anecdotal evidence: https://fortune.com/2023/03/01/a-i-earnings-calls-mentions-skyrocket-companies-say-search-cybersecurity-medicine-customer-service/]

§.§ Face Validity
To examine the face validity of our measures, we read a number of generated summaries and evaluations for the three types of risk and confirm that they address the relevant topic. Appendix B presents several snippets from GPT-processed risk exposure documents. Each summary and assessment pair is from the same earnings call transcript. Assessments, as expected, contain richer content that often includes GPT's judgments based on the given context.

In example B1 (Appendix B), the political risk summary expresses a cautionary note that the recent government auction of the HS1 project (a high-speed railway line) in the UK creates exposure to political and regulatory uncertainties in Europe. In turn, the corresponding political risk assessment (see example B2) highlights that "deficit reduction in these regions may lead to an increased flow of government disposals and potentially PFI (Private Finance Initiative) opportunities." In the earnings call transcript, there is no mention of government disposals or PFI in conjunction with deficit reduction. Rather, PFI is mentioned in a later part of the executive's presentation.
This example highlights GPT's ability to connect pieces of information into a logical assessment.

The examples of climate risk summaries (B3) and assessments (B4) display a starker difference. The text of the climate risk summary contains only "NA," implying no direct mention of climate-related uncertainties. However, despite the lack of explicit discussion of climate risks, the corresponding climate risk assessment states that the company is subject to risks related to significant energy consumption and electronic waste. It also mentions a potentially high carbon footprint and the need for compliance with environmental regulations. Turning to the examples of AI risk summaries (B5) and assessments (B6), we observe that the summary identifies the firm's active efforts to incorporate AI into its operations and provides more specific details. In contrast, the risk assessment (B6) takes a more holistic view of whether the company's business is dependent on AI technologies.

In sum, the above examples illustrate that the generated risk summaries and assessments are readable and generally contain information relevant to understanding firm-level risks. Further, risk summaries and assessments are distinct from each other in logically expected ways.

§.§ Variance Decomposition
In this subsection, we perform a variance decomposition of our proposed firm-level risk exposure measures into time, industry, and firm-level variation.[We use the two-digit SIC industry classification. In untabulated analyses, we also use three-digit and one-digit industry classifications and find consistent results.] The results are presented in Table 2, Panels A1, B1, and C1. The table shows the incremental R-squared values attributable to each set of fixed effects. While the descriptive evidence above reveals clear time trends and industry variation in our risk exposure measures, these factors jointly explain only a small portion of the variation in risk exposures. These results are closely in line with the findings in <cit.>, suggesting that macroeconomic and sector-level risk variance is only the tip of the iceberg. There are some differences in the explained portion of the variance depending on the specific risk type and the measurement approach, but the general consistency is apparent and noteworthy. The results also imply that the variation attributable to firm-level factors is in the range of 80% to 90%, highlighting the importance of firm-level risk measurement.

We further decompose the firm-level variance into time-varying and time-invariant portions. Specifically, we zoom in on the residual variance by including firm-level fixed effects and report the corresponding R-squared values. Firm fixed effects explain between 21% and 35% of the firm-level (residual) variance, implying that roughly 65-79% of this variation is time-varying.[To further validate our measures, we estimate the measurement errors in our variables following <cit.> (Online Appendix Table 1). The measurement errors for PRiskAssess and CRiskAssess are 2.71% and 8.50%, respectively. AIRiskAssess has a relatively higher measurement error of 27.63%, which is due to the high proportion of zero values in earlier years. Once we exclude zeros from the analysis, the measurement error decreases to 6.59%.]
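The incremental R-squared logic behind this decomposition can be sketched as follows on synthetic data. The variable names, the synthetic panel, and the use of dummy-variable regressions are illustrative assumptions; the sketch mirrors the structure of the decomposition rather than our exact specification.

```python
# Illustrative variance decomposition of a risk exposure measure (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "risk": rng.gamma(1.0, 0.05, n),
    "quarter": rng.choice([f"2019Q{q}" for q in range(1, 5)], n),
    "industry": rng.choice([f"sic{j:02d}" for j in range(10)], n),
    "firm": rng.choice([f"firm{j}" for j in range(200)], n),
})
df["industry_quarter"] = df["industry"] + "_" + df["quarter"]

r2_time = smf.ols("risk ~ C(quarter)", df).fit().rsquared
r2_time_industry = smf.ols("risk ~ C(quarter) + C(industry)", df).fit().rsquared
r2_industry_time = smf.ols("risk ~ C(industry_quarter)", df).fit().rsquared
r2_plus_firm = smf.ols("risk ~ C(industry_quarter) + C(firm)", df).fit().rsquared

firm_level_share = 1 - r2_industry_time                   # residual, firm-level variation
firm_fe_share = (r2_plus_firm - r2_industry_time) / firm_level_share  # time-invariant part
```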
Pairwise comparisons of the different types of risk measures (summary-, assessment-, and bigram-based) reveal that the assessment-based measures tend to be more stable within a firm and across time.

In sum, although the risk measures exhibit notable aggregate time trends (as seen in Figure 4), rich firm-level heterogeneity exists in exposures to these aggregate trends across different types of risks.

§ CAPITAL MARKET CONSEQUENCES
A valid measure of risk must exhibit an association with volatility <cit.>. In this section, we examine associations between our risk proxies and the two stock price volatility variables introduced previously, namely, implied volatility and abnormal volatility (described in Section 4.2). Specifically, we estimate the following OLS regression:

Volatility_it+1 = β Risk_it + γ X_it + δ_x + ε_it,

where Volatility_it+1 is a proxy for the volatility of firm i's stock price in quarter t+1.[In the Online Appendix, we also use realized volatility, measured as the standard deviation of stock returns over a 90-day window, as an additional volatility variable.] Risk_it is one of the firm-level risk exposure measures for firm i in quarter t. Depending on the specification, we include a GPT summary-based risk exposure, RiskSum, a GPT assessment-based risk exposure, RiskAssess, as well as a bigram-based measure, RiskBigram. The prefix P, C, or AI in front of Risk denotes political, climate, or AI-related risk, respectively. X_it is a vector of firm-quarter controls. Following <cit.>, we control for the log of total assets in our main analysis. We also include an extensive set of additional controls in the Online Appendix and verify the robustness of our results. δ_x denotes various fixed effects. Specifically, we estimate two different sets of fixed effects in our main tests: (1) time (δ_q) and industry (δ_s) fixed effects, and (2) the interaction of time and industry fixed effects (δ_q × δ_s). We do not control for firm fixed effects, as the firm-level variation in our measures is of interest. However, in the Online Appendix, we confirm that our results are robust to controlling for firm fixed effects. Robust standard errors are clustered at the firm level. All continuous variables are winsorized at the 1% and 99% levels.

§.§ Political Risks
We start discussing our main results with political and climate risk because they have received considerable attention in prior studies (and because bigram-based benchmarks are available for comparison purposes). Thus, we confine our main sample to the 2018-2021 period, as the bigram-based measures are updated only until early 2022 <cit.>.

The estimates for political risk are reported in Table 3. As shown in columns (1) and (2), both GPT-based measures of political risk, PRiskSum and PRiskAssess, exhibit positive and highly statistically significant associations with both implied volatility and abnormal volatility. This result holds with industry fixed effects (Panel A) and with industry-time fixed effects (Panel B). In columns (3) and (4), we add the bigram-based risk measure as an additional control variable to the regressions.[This measure is statistically significant when used on its own.] We find that the effect of our GPT-based measures does not attenuate and that the GPT-based measures subsume the information in PRiskBigram. Overall, this evidence suggests that LLM-based risk measures are effective at capturing political risk and that they dominate dictionary-based measures.

We next turn to the comparison of GPT-based summaries vs.
assessments in explaining volatility. When adding the risk measures simultaneously to the same regression, as we do in column (5), we find that the assessment-based measure dominates the summary-based measure for both implied and abnormal volatility. This result is important, as it implies that GPT's ability to synthesize general knowledge acquired from its training is valuable when assessing political risk exposure.[For abnormal volatility, we observe a negative and significant coefficient on PRiskSum when including industry-time fixed effects. This phenomenon arises due to the high positive correlation between PRiskAssess and PRiskSum. The effect is expected when both proxies contain correlated measurement errors and one of the proxies is dominant.] Overall, these findings reinforce the notion that LLMs can produce insights by synthesizing the input text and general knowledge. In terms of the economic significance of the results, a one standard deviation increase in PRiskAssess translates into a 0.04 standard deviation increase in implied volatility (using column (5) of Panel A), which is a non-trivial amount. Similarly, a one standard deviation increase in PRiskAssess is associated with a 0.06 standard deviation increase in abnormal volatility (using column (5) of Panel A).

§.§ Climate Risks
Next, we estimate the same model using the climate risk proxies as our independent variables. The results are reported in Table 4. The GPT-based measures bear a positive and statistically significant association with stock price volatility both on a standalone basis and when used jointly with CRiskBigram. The statistical power to detect climate risk is highest in the case of the GPT-based assessments, CRiskAssess, which exhibits t-values several times higher than those of the summary-based measure, CRiskSum. Furthermore, when the three proxies are included in the model simultaneously, CRiskAssess is the only one that consistently remains statistically significant and exhibits the expected sign. These results are consistent regardless of the fixed effects structure and hold for both implied volatility and abnormal volatility. Overall, LLMs perform well at identifying climate-related concerns and subsume the bigram-based measures. Furthermore, the analysis also confirms the value of the LLM's general knowledge (the GPT assessments) in evaluating climate risks, going beyond the information discussed during the conference calls.

§.§ AI-Related Risks
Lastly, Table 5 examines the association between AI-related risk exposure and stock price volatility. As there is no established bigram-based AI risk measure in the literature, we analyze only the GPT-based measures. In our primary sample, we find limited evidence that AI-related risk is associated with stock price volatility. For example, we do not find a statistically significant relation between the AI risk proxies and implied volatility. In the case of abnormal volatility, we find significant associations only for AIRiskSum. This result is not surprising, as AI risk has emerged as an economically important phenomenon only very recently. Indeed, from 2018 to 2021, nearly 60% of the observations have zero AI-related risk exposure. In the next subsection, we explore whether AI risk becomes significant in explaining stock price volatility in the most recent period.
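To illustrate the estimation framework used throughout this section, the following sketch estimates a volatility regression of the form above on synthetic data, with quarter and industry fixed effects entering as dummies and standard errors clustered by firm. The variable names and the synthetic panel are assumptions for illustration; they are not our code or data.

```python
# Illustrative fixed-effects volatility regression with firm-clustered standard errors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 3000
df = pd.DataFrame({
    "implied_vol_lead": rng.normal(0.35, 0.10, n),   # Volatility_{i,t+1}
    "risk_assess": rng.gamma(1.0, 0.05, n),          # Risk_{i,t}
    "log_assets": rng.normal(7.0, 1.5, n),           # control X_{i,t}
    "quarter": rng.choice([f"2019Q{q}" for q in range(1, 5)], n),
    "industry": rng.choice([f"sic{j:02d}" for j in range(10)], n),
    "firm": rng.choice([f"firm{j}" for j in range(300)], n),
})

model = smf.ols("implied_vol_lead ~ risk_assess + log_assets + C(quarter) + C(industry)", df)
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(result.params["risk_assess"], result.bse["risk_assess"])
```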
§.§ Analysis Outside GPT's Training Period
We note that all of the analyses in our study are effectively out-of-sample, as the language model is not trained to predict any of the outcome variables that we examine. Nevertheless, one potential concern is that our main sample overlaps with the GPT model's training period, which extends to September 2021. It is possible (though not known) that GPT had seen the transcripts of earnings calls during its training, which may give the model an edge in generating risk summaries and assessments. To address this concern, we conduct "true" out-of-sample tests. In particular, we use a sample of conference calls from January 2022 to March 2023, which is outside of GPT's training sample.

We present the results of this analysis in Table 6. Panels A and B use the GPT-based political risk and climate risk proxies, respectively, as their main independent variables. Even in this more limited sample, we continue to find positive and highly significant associations between GPT-based risk measures and firms' stock price volatility. The only exception is that PRiskSum and CRiskSum lose statistical significance when examining abnormal volatility (column (3) of both panels). Nevertheless, they remain positive and retain their economic magnitudes. Most importantly, the assessment-based exposure measures are significantly associated with both volatility measures across all model specifications.

For AI risk, reported in Panel C, we in fact find stronger results during this more recent time window. Specifically, we observe that both AIRiskSum and AIRiskAssess become positive and statistically significant determinants of volatility. This finding aligns with our evidence in Figure 4C, which indicates that AI risk has soared in recent quarters. In terms of economic magnitudes, a one standard deviation increase in AIRiskAssess is associated with a 0.03 standard deviation increase in implied volatility and a 0.03 standard deviation increase in abnormal volatility, which is once again a sizable effect.

These findings lead to two important insights. First, GPT's ability to produce valid risk exposure measures cannot be attributed to its in-sample knowledge. Rather, it suggests that large language models are useful for investors' decision-making when used outside of the training window. Second, the fact that AI risk is correlated with volatility only in the period in which it soars constitutes further evidence of the construct validity of our measures.

§.§ Robustness Checks
We also conduct a battery of robustness tests in the Online Appendix. First, motivated by our findings in Section 5.4, we include firm fixed effects in the model (Online Appendix Table 2). Second, we expand the set of control variables to include leverage, cash holdings, tangible assets, profitability, and capital expenditures (Online Appendix Table 3). In both sets of robustness tests, we continue to find strong results for the GPT-based assessment measures, which outperform the dictionary-based proxies. Third, we use realized volatility (instead of implied volatility) as another dependent variable. We show that PRiskAssess and CRiskAssess are positively associated with this volatility measure (Online Appendix Table 4). Fourth, to mitigate the concern that extreme risk exposure values are driving our results, we plot the association between the risk assessment variables and implied volatility in Online Appendix Figure 2.
We randomly sample observations whose risk assessment values lie between the 10th and 90th percentiles and show that the positive association remains.

Overall, our analysis lends support to the notion that GPT-based firm-level political risk exposure and climate change risk exposure are useful in capturing corporate risks. These measures are superior to traditional bigram-based measures, indicating that GPT can produce substantially informative risk assessments.

§ RISK EXPOSURES AND FIRM DECISIONS
An alternative approach to evaluating the economic usefulness of GPT-based risk measures involves examining their ability to explain firms' economic decisions. We focus on capital investment decisions, followed by an examination of corporate actions aimed at diminishing risk exposures, such as lobbying or developing intellectual property.

§.§ Investment Decisions
Theoretical considerations suggest that heightened uncertainty regarding a firm's future commands a risk premium that investors require in exchange for capital <cit.>. Consequently, higher risk exposure should make investments in capital projects harder to finance. Supporting this notion, <cit.> and <cit.> show a decline in capital expenditures in response to escalating political uncertainty.[In theory, investments are also expected to decline in response to increased uncertainty because the option to wait becomes more valuable <cit.>. Although the cost of capital effect on investment applies to all risk types, there is a countervailing effect when capital expenditures become an important part of a firm's strategy to counteract a specific risk. For example, companies might be less inclined to rely on tangible investments to alleviate political risk, favoring methods like lobbying or political donations instead. However, to address risks posed by AI, significant infrastructure investments are likely to be needed to bolster resilience to AI risk.]

To examine the association between risk exposures and investment, we estimate the following OLS regression:

Investment_it = β Risk_it + γ X_it + δ + ε_it,

where Investment_it is capital expenditure intensity (see Section 4.3 for more details). We include quarter and industry fixed effects. The remaining variables are the same as in Equation (3). We winsorize continuous variables at the 1% and 99% levels. Standard errors are clustered at the firm level.

Table 7 presents the results. In Panel A, columns (1) and (2) show that PRiskSum and PRiskAssess have a negative and statistically significant association with capital expenditures. The coefficient on PRiskAssess is not only larger than that on PRiskSum but also more significant. When we jointly include the GPT-based and bigram-based measures in columns (3) and (4), the GPT-based measures remain significantly associated with investments, whereas the bigram-based measures are not. In line with prior evidence, we also find that PRiskAssess is the dominant proxy, as can be seen from column (5). In terms of economic significance, a one standard deviation increase in PRiskAssess translates into a 1.3% decrease in investment, on average (based on the estimates in column (5)).

In Panel B, we present the results for climate risk and observe a pattern similar to that in Panel A. CRiskAssess bears a negative relation with investments, which remains significant with and without the bigram-based measure or CRiskSum in the same regression.
In terms of economic magnitude, a one standard deviation increase in CRiskAssess is associated with a 0.8% decrease in average investment.

In Panel C, we do not find a significant relation between AI risk and investments. In both columns, we find positive yet statistically insignificant coefficients on both risk exposure measures. To shed more light on the relationship between AI-related risk and capital expenditures, we examine the most recent period, starting in January 2022 and ending in March 2023, during which AI risks skyrocketed, and we perform several cross-sectional tests. In particular, we examine whether companies that are financially constrained, i.e., lack internal funds to finance investments, decrease their capital expenditures in response to increasing AI risks (the cost of capital channel). In contrast, financially unconstrained firms are expected to increase their investment to tackle the AI challenges.

We partition the sample based on the levels of cash holdings and leverage measured as of 2021 and report the analysis in the Online Appendix, Table 11. We observe that the least financially constrained firms exhibit the largest increase in capital expenditures in response to AIRiskAssess. In contrast, we find that the most constrained firms reduce their investments. These findings help explain the seemingly counterintuitive impact of AI risk on investments. Overall, our results indicate that GPT-based risk exposure measures are useful in explaining corporate investment decisions and do so in line with theoretical priors. However, the effect on investments is nuanced and depends on the type of risk.

§.§ Firm Responses to Mitigate Risk Exposures
Firms are expected to proactively mitigate risks by implementing risk-specific countermeasures. In this subsection, we turn to the examination of firms' actions to mitigate their risk exposures. To do so, we focus on three outcome variables that correspond to the three different risk types. First, we gauge our political risk measures by probing their ability to explain companies' lobbying activities <cit.>. Second, we investigate firms' reactions to climate threats by examining the issuance of green patents <cit.>. Finally, we measure firms' responses to AI risk by focusing on AI-related innovation (the filing of AI-related patents).

We estimate the following OLS regression:

Response_it+1 = β Risk_it + γ X_it + δ + ε_it,

where Response is one of the three indicator variables discussed in more detail in Section 4.3: 1($ Lobby Amount>0) is an indicator variable that equals one if firm i's lobbying expenditures are greater than zero in quarter t+1 and zero otherwise; 1(# Green Patent>0) is an indicator variable that equals one when firm i files at least one green patent in quarter t+1 and zero otherwise; and 1(# AI Patent>0) is an indicator that equals one when firm i files at least one AI-related patent in quarter t+1 and zero otherwise. We expect a positive association between these variables and their corresponding risk exposure measures. As before, we use quarter and industry fixed effects and the same set of control variables as in Equation (3).

Table 8 presents how firms respond to a change in firm-level risk exposure. Panel A focuses on lobbying activity in response to political risk. We find a positive and significant association between PRiskAssess and lobbying, whereas PRiskSum shows only a limited association.
When PRiskAssess is included jointly with PRiskBigram, both measures remain incrementally informative. A one standard deviation increase in PRiskAssess is associated with a 1.40 percentage point increase in the likelihood of lobbying in the following quarter (based on the estimates in column (5)).[Note that PRiskSum is negative and statistically significant in column (5). To the extent that the same information is repeated across summaries and assessments, the model performs better by filtering it out of the assessment-based measure.]

Panel B studies the association between green patents and climate risk exposure. In column (2), we find a positive and significant association between CRiskAssess and green patent filings. When we include all three risk proxies in one regression, CRiskAssess comes out as the dominant proxy. In terms of economic magnitude, based on our estimates in column (5), a one standard deviation increase in CRiskAssess is associated with a 0.73 percentage point increase in the likelihood of filing a green patent in the following quarter.

Finally, in Panel C, we repeat the analysis for AI risk and AI patent filings. We find that a one standard deviation increase in AIRiskAssess is associated with a 2.39 percentage point increase in the likelihood of filing at least one AI patent in the subsequent quarter.

Overall, we find that companies respond to the risks captured by our measures by taking actions to mitigate them. We view this finding as further evidence of the validity and economic usefulness of the GPT-based risk exposures.

§.§ Further Out-of-Sample Analysis
Similar to Section 6.4, we perform a subsample analysis from January 2022 to March 2023 to ensure that our results are not attributable to GPT seeing the underlying data during its training phase. Table 9 presents the results of this analysis. Overall, our results are consistent with those in Tables 7 and 8, addressing concerns about possible in-sample bias. In fact, for lobbying activity, PRiskSum becomes even more significant in the out-of-sample test.

One notable difference in this analysis is that AI risk, on average, shows a positive and significant association with investments in the most recent year. This finding reconciles with the cross-sectional analysis discussed above and is noteworthy because standard theory would predict that investments should decrease with risk. As discussed previously, the likely explanation for this finding is that companies respond to soaring AI risks by making investments in AI-related technology and infrastructure.

§.§ Robustness Checks
We perform several robustness checks. We gauge the robustness of our results by including firm fixed effects (Online Appendix Table 6), as well as by including an extensive set of controls (Online Appendix Table 7).[In Tables 7 and 8, we report only the results using industry and quarter fixed effects. However, the results are almost identical when we allow for the interaction of quarter and industry fixed effects.] Our results are similar in these alternative specifications. To mitigate the concern that extreme values are driving our results, we visualize the linear association between the risk exposure measures and investments after excluding extreme values (Online Appendix Figure 3).

To bolster our findings that each risk is associated exclusively with its corresponding corporate action (rather than merely proxying for general risk), we perform several placebo tests.
Specifically, we regress the lobbying activity indicator on CRiskAssess and AIRiskAssess, the green patent indicator on PRiskAssess and AIRiskAssess, and the AI patent indicator on PRiskAssess and CRiskAssess. We report the results in Online Appendix Table 9. In general, each economic outcome is most significantly associated with its corresponding risk measure. One exception is AIRiskAssess, which is also positively associated with lobbying and green patent activities. Finally, because the patenting and lobbying activity variables are heavily skewed, we use their natural logs as dependent variables. Additionally, following <cit.>, we also use Poisson regressions for patent counts (Online Appendix Table 10). Our results remain qualitatively similar. Overall, our findings highlight that firms respond to risk by subsequently mitigating their risk exposure.

§ RELATIVE IMPORTANCE OF RISKS OVER TIME
In this section, we explore the relative importance of different types of risk over time. To do so, we use a four-quarter rolling window to estimate the following model:[Our first estimation period starts in the first quarter of 2018 and ends in the fourth quarter of 2018, and so on. The last estimation period ends in the first quarter of 2023.]

Implied_Volatility_it = β_1t PRiskAssess_it + β_2t CRiskAssess_it + β_3t AIRiskAssess_it + γ_t X_it + δ_q + δ_s + ε_it,

where the risk proxies are based on risk assessments and are included either simultaneously or one at a time; δ_q is a quarter fixed effect and δ_s is an industry fixed effect. For each rolling regression, we report the coefficients and the corresponding t-values.

Table 10 presents the estimates. In Panel A, we estimate the importance of each risk type on a stand-alone basis, i.e., we include the risk proxies one at a time. We also visualize the time-series variation in t-values in Figure 6A. In Panel B, in contrast, we compare the relative importance of each risk type after controlling for the effect of the other types, i.e., we include the three risk proxies simultaneously. We also visualize the t-value time trends in Figure 6B.

Overall, the two figures display similar patterns. Consistent with Figure 4, we observe a clear upward trend in the t-values associated with AIRiskAssess. AIRiskAssess is insignificant in both panels at the 10% significance level until the second quarter of 2022. However, in the last four rolling windows, AIRiskAssess becomes increasingly important. We also show that climate change risk and political risk are both highly significant during 2020. However, in both panels, climate change risk exhibits higher statistical significance than political risk in 2020. This trend reverses in 2021, during which political risk becomes relatively more significant than climate change risk. In sum, while AI risk is a clearly emerging force, the other types of risk exhibit high and low periods throughout our sample period.

§ EQUITY MARKET PRICING
In our last set of tests, we probe the asset pricing implications of our GPT-based risk exposure measures. In theory, higher risk exposures should be associated with higher expected equity returns. Establishing this link is challenging in our setting, as the asset pricing methodology requires a relatively long time series. We use our entire sample period, starting in January 2018 and ending in March 2023, for this test. As this period is still relatively short, our results (t-statistics) are likely to understate the significance of the risks picked up by our proxies.
We test the pricing of the GPT-based proxies by running <cit.> regressions and performing portfolio analysis for our risk variables. In this analysis, we focus on the risk assessment-based measures, as our prior tests imply their dominance. Because <cit.> regressions typically use annual characteristics, we annualize our risk proxies by taking their average across the four quarters of each year (PRiskAssess^ann, CRiskAssess^ann, and AIRiskAssess^ann).[We exclude observations with zero annualized risk exposures since they are likely not to feature any discussion of political, climate change, or AI-related risks during earnings calls (or, alternatively, they might be instances where the model fails to generate meaningful output).] We construct our portfolios on March 31 of the subsequent year to allow three or more months for stock prices to incorporate the information disclosed during earnings calls (e.g., we compute annualized risk exposure over 2021 and form the portfolios on March 31, 2022). We regress monthly stock returns on the natural log of each risk exposure, the log of the market value of equity (log(ME)), the log of the book-to-market ratio (log(BE/ME)), operating profitability (Profitability), investment (Investment), the lagged one-month return (r_0,1), and the lagged annual return excluding the most recent month (r_2,12) <cit.>.[Investment is given by growth in total assets; Profitability is (total revenue - cost of goods sold - (selling and administrative expenses - R&D expense)) scaled by total assets; log(ME) is the natural logarithm of the market value of equity; and log(BE/ME) is the natural logarithm of the book-to-market ratio.] We report Newey-West t-values with three lags. All continuous independent variables are trimmed at the 1% and 99% levels.

We report the <cit.> analysis in Table 11, Panel A. After controlling for asset characteristics, all three risk exposure proxies exhibit positive coefficients. The coefficients on CRiskAssess^ann (0.211) and AIRiskAssess^ann (0.317) are positive and statistically significant at conventional levels. The coefficient on PRiskAssess^ann is positive (0.077), yet not statistically significant (t-value = 1.36). Overall, the positive associations support the pricing of the corresponding risks in equity markets.

For the portfolio analysis, we construct quintile portfolios based on the values of each annualized firm-level risk exposure measure. As earlier, we form portfolios on March 31 of the year following the risk exposure measurement and hold them for one year. Table 11, Panel B presents equal-weighted high-minus-low portfolio alphas based on <cit.>'s five-factor model. Columns (1), (2), and (3) report monthly alphas for PRiskAssess^ann, CRiskAssess^ann, and AIRiskAssess^ann, respectively. We observe that portfolio alphas increase almost monotonically across the risk exposure quintiles. Accordingly, all three columns report positive high-minus-low alphas. For political risk exposure, the annualized alpha is 5.28%, which is economically sizable yet does not attain statistical significance (t-value = 1.51). For climate change risk exposure, the annualized alpha is 6.72% and is statistically significant (t-value = 1.90). For AI-related risk exposure, the annualized alpha is 6.36% and is also statistically significant (t-value = 2.31).
Further, we show that such risk exposure measures are associated with sizable alphas unexplained by the five-factor asset pricing model.

§ CONCLUSION
In this paper, we evaluate whether recent advances in AI technology can help investors assess critical aspects of corporate risks. We evaluate these risks based on information disclosed during companies' earnings calls. More specifically, we use a generative Large Language Model, GPT 3.5 Turbo, to develop and validate three proxies for firm-level exposure to political, climate, and AI-related risks, all of which have been of primary concern to firms' stakeholders in recent years. We also investigate the LLM's ability to leverage general knowledge to enhance summaries of risk-related content from conference calls by adding its own insights. Our measures of firm-level risk exposures exhibit large within-firm variation and move across industries and over time in intuitive ways. More importantly, each of the three measures is a powerful predictor of future stock price volatility and helps in explaining firms' policies, such as investment and innovation. Furthermore, GPT-based assessments are consistently more informative about firms' risks than GPT summaries, highlighting the value of the LLM's general knowledge. We also find that GPT-based risk measures consistently subsume the existing bigram-based risk proxies when subjected to the task of predicting stock market volatility and explaining economic decisions. Our findings are robust to a number of research design choices and hold outside of the training window of the GPT model.

We conclude that generative AI technology enables users to obtain valuable insights about firm-level risks at a relatively low cost. The generated risk summaries and evaluations are not only human-readable but also easily convertible to risk exposure scores. Overall, our study highlights the economic usefulness of generative AI tools, such as ChatGPT, in measuring corporate risks and providing useful input into investors' decisions based on unstructured textual information <cit.>.

We also recognize the limitations that come with the reliance on generative LLMs. GPT output can be sensitive to prompt quality, and prompt engineering becomes an important starting point. Unless provided with relevant context, GPT may generate incorrect evaluations or provide random answers (known as hallucinations).[Even with perfect prompts, GPT might still generate incorrect information. This is because GPT is a language model whose primary purpose is completing a sentence in the most natural manner. This process is inherently nondeterministic, and GPT might produce incorrect answers.] We recommend that users compare and reconcile GPT's answers with the corresponding source documents when making decisions. Nevertheless, in a large sample, the usefulness of generative AI in assessing risks is apparent.

§ APPENDIX A. EXAMPLE OF SK TELECOM
The following is an excerpt from the GPT-processed political risk exposure assessment from SK Telecom's 2018 Q1 earnings call transcript.

The firm is subject to political and regulatory risks and uncertainties related to its 5G CapEx plans and the discussions within the National Assembly regarding the separation of handset contracts and telecom service contracts.[…] Regarding the discussions on the separation of handset contracts and telecom service contracts, the company states that it is difficult to predict whether it will be introduced and the impact it may have.
However, the company believes that if introduced, it could resolve issues related to the blending of service fees and handset pricing. The company also expects a shift in the market competition paradigm towards prices and service quality, leading to greater user benefits. The company emphasizes the need for measures that minimize negative effects on the overall ecosystem, including the distribution structure. The company looks forward to a reasonable conclusion being generated through the discussion and reviewing process, with a focus on market competition and enhancing customer convenience.[…] The analyst mentioned the government's talk about the separation of device contracts and service contracts, which could potentially affect the company's business model. However, the executives mentioned that there are still ongoing discussions and it is uncertain what will happen. The company has its own stance on the issue but is open to various options before making a decision. This indicates that the company may be affected by new regulations in the future, but the extent of the impact is uncertain at this point.§ APPENDIX B. SAMPLE SNIPPETS §.§ B1. Political Risk SummaryThe company is subject to political and regulatory risks and uncertainties in Europe and North America. The recent government auction of HS1 in the U.K. is mentioned as an example. §.§ B2. Political Risk AssessmentThe firm is subject to political and regulatory risks and uncertainties in Europe and North America. The focus on deficit reduction in these regions may lead to an increased flow of government disposals and potentially PFI (Private Finance Initiative) opportunities. The recent government auction of HS1 in the U.K. is mentioned as an example. Additionally, the flow of non-core disposals by corporate and financial institutions is continuing, as evidenced by the firm's recent investment in Eversholt, which was purchased from HSBC. These political and regulatory factors could impact the firm's operations and investment opportunities in these regions.§.§ B3. Climate Change Risk SummaryNA §.§ B4. Climate Change Risk AssessmentIt is worth noting that the company's use of leading-edge technologies in wafers, silicon wafers, substrates, and packaging may have implications for their environmental footprint. These technologies often require significant energy and resource consumption during production and may generate electronic waste at the end of their lifecycle. Additionally, as the company's networking products are being used in the creation of Ethernet fabric for AI clusters, there may be indirect environmental risks associated with the energy consumption and carbon footprint of these clusters. It is important for the company to consider the sustainability of their networking products and ensure they are aligned with environmental regulations and standards. §.§ B5. AI-Related Risk SummaryBased on the given information, the firm is heavily dependent on AI technologies and is actively incorporating AI into every layer of its stack, including productivity and consumer services. The executives mention that they believe the next big platform wave is AI and that they are working on building training supercomputers and inference infrastructure. They also mention specific AI capabilities in their products, such as robotic process automation and workflow automation, as well as the incorporation of AI in their consumer services. 
The firm has a partnership with OpenAI and is excited about their innovation and commercialization of products.

§.§ B6. AI-Related Risk Assessment

Based on the given information, it is clear that Microsoft is heavily invested in emerging technologies, particularly artificial intelligence (AI). The company highlights its leadership in the AI era and its commitment to developing AI-powered products and services. Microsoft's Azure platform is being used by customers and partners to train state-of-the-art AI models and services, and the company is positioning itself as a leader in AI with its powerful AI supercomputing infrastructure. Additionally, Microsoft's AI services, such as Azure ML, have seen significant revenue growth, indicating a strong demand for AI capabilities. The company is also leveraging AI in its developer tools, such as GitHub Copilot, which is an AI-powered product that transforms developer productivity. Furthermore, Microsoft is integrating AI into its business applications, such as Dynamics 365, to help businesses digitize their operations and improve efficiency. Overall, Microsoft's business is heavily dependent on AI technologies, and the company is at the forefront of AI innovation.

§ FIGURE 1. TIME TREND IN SK TELECOM'S POLITICAL RISK EXPOSURE

This figure shows the time trend in SK Telecom's political risk exposure from 2018 to 2022. The solid line represents the GPT-based risk exposure assessment (PRiskAssess) and the dotted line represents the bigram-based risk exposure score by <cit.> (PRiskBigram).

§ FIGURE 2. MEASURING RISKS WITH GENERATIVE AI

This figure summarizes how we process earnings call transcripts with GPT to generate firm-level exposure measures. Refer to Section 3 for a detailed explanation.

§ FIGURE 3. INDUSTRY AVERAGES OF RISK EXPOSURE ASSESSMENTS

This figure shows the SIC two-digit level industry averages of GPT-based risk exposure assessments (RiskAssess). We regress RiskAssess on dummy variables that represent each industry and report eight industries with the largest coefficients. We also plot 95% confidence intervals. Standard errors are clustered at the firm-level. 3A shows industry-level averages of political risk exposure, 3B shows climate change risk exposure, and 3C shows AI-related risk exposure.

§ FIGURE 4. TIME TREND OF RISK MEASURES

This figure shows the time series variation in firm-level risk exposure measures. 4A shows political risk exposure measures. 4B shows climate change risk exposure measures. We include the bigram measure, GPT-based summary measure, and GPT-based assessment measure. 4C shows AI-related risk exposure measures. For 4C only, we include the GPT-based summary measure and GPT-based assessment measure. Shaded areas denote notable economy-wide events related to each risk.

§ FIGURE 5. WORD CLOUDS

This figure shows the word clouds extracted from the underlying documents of PRiskAssess, CRiskAssess, and AIRiskAssess.

§ FIGURE 6. RELATIVE IMPORTANCE OF RISK INFORMATION

This figure shows the relative importance of each risk measure over time. We set four-quarter rolling estimation windows. In 6A, we estimate Implied_Volatility_it= β PRiskAssess_it + γX_it + δ_q + δ_s + ε_it, Implied_Volatility_it= β CRiskAssess_it + γX_it + δ_q + δ_s + ε_it, and Implied_Volatility_it= β AIRiskAssess_it + γX_it + δ_q + δ_s + ε_it separately and plot the t-values of each β.
In 6B, we estimate Implied_Volatility_it =β_1 PRiskAssess_it + β_2 CRiskAssess_it + β_3 AIRiskAssess_it+ γX_it + δ_q + δ_s + ε_it and plot the t-values of β_1, β_2, and β_3. t-values are clustered at the firm-level.

§ TABLE 1. DESCRIPTIVE STATISTICS

This table reports the descriptive statistics for the GPT-generated firm-level risk exposure variables and the key dependent variables for our full sample from 2018 to March 2023. Refer to Section 3 and Section 4 for variable descriptions. Panel A reports summary statistics. Panel B reports Pearson correlations among the variables. PRB is an abbreviation for PRiskBigram, PRS for PRiskSum, PRA for PRiskAssess, CRB for CRiskBigram, CRS for CRiskSum, CRA for CRiskAssess, AIRS for AIRiskSum, and AIRA for AIRiskAssess.

§ TABLE 2. VARIANCE DECOMPOSITION

This table reports the variance decomposition results for each risk exposure variable for our full sample from 2018 to March 2023. We report political risk exposure measures in Panel A, climate change risk exposure measures in Panel B, and AI-related risk exposure measures in Panel C. The first subpanel reports incremental R-squared values by adding time, industry, and time×industry fixed effects. The second subpanel reports R-squared values by adding firm fixed effects.

§ TABLE 3. POLITICAL RISK EXPOSURE AND VOLATILITY

This table reports the association between firm-level political risk exposure measures and volatility using our main sample period from 2018 to 2021. We use industry and time fixed effects in Panel A, and industry×time fixed effects in Panel B. All continuous variables are winsorized at 1% and 99% levels. Standard errors are clustered at firm-level. ***, **, and * denote significance at 1%, 5%, and 10% levels, respectively.

§ TABLE 4. CLIMATE CHANGE RISK EXPOSURE AND VOLATILITY

This table reports the association between firm-level climate change risk exposure measures and volatility using our main sample period from 2018 to 2021. We use industry and time fixed effects in Panel A, and industry×time fixed effects in Panel B. All continuous variables are winsorized at 1% and 99% levels. Standard errors are clustered at firm-level. ***, **, and * denote significance at 1%, 5%, and 10% levels, respectively.

§ TABLE 5. AI-RELATED RISK EXPOSURE AND VOLATILITY

This table reports the association between firm-level AI-related risk exposure measures and volatility using our main sample period from 2018 to 2021. We use industry and time fixed effects in Panel A, and industry×time fixed effects in Panel B. All continuous variables are winsorized at 1% and 99% levels. Standard errors are clustered at firm-level. ***, **, and * denote significance at 1%, 5%, and 10% levels, respectively.

§ TABLE 6. OUT-OF-SAMPLE ANALYSIS: CAPITAL MARKET CONSEQUENCES

This table repeats the analysis of Tables 3, 4, and 5 with a sample from 2022 to March 2023. We report political risk exposure measures in Panel A, climate change risk exposure measures in Panel B, and AI-related risk exposure measures in Panel C. We use industry and time fixed effects. All continuous variables are winsorized at 1% and 99% levels. Standard errors are clustered at firm-level. ***, **, and * denote significance at 1%, 5%, and 10% levels, respectively.

§ TABLE 7. RISK EXPOSURE AND INVESTMENTS

This table reports the association between firm-level risk exposure variables and capital investments using our main sample period from 2018 to 2021.
We report political risk exposure measures in Panel A, climate change risk exposure measures in Panel B, and AI-related risk exposure measures in Panel C. We use capital expenditure scaled by recursive total capital as a dependent variable. We use industry and time fixed effects. All continuous variables are winsorized at 1% and 99% levels. Standard errors are clustered at firm-level. ***, **, and * denote significance at 1%, 5%, and 10% levels, respectively.

§ TABLE 8. RISK EXPOSURE AND FIRM RESPONSES

This table reports how firms respond to different risk exposures using our main sample period from 2018 to 2021. We report political risk exposure measures in Panel A, climate change risk exposure measures in Panel B, and AI-related risk exposure measures in Panel C. For Panel A, we use a lobbying activity indicator. For Panel B, we use a green patent filing indicator, and for Panel C, we use an AI patent filing indicator. We use industry and time fixed effects. All continuous variables are winsorized at 1% and 99% levels. Standard errors are clustered at firm-level. ***, **, and * denote significance at 1%, 5%, and 10% levels, respectively.

§ TABLE 9. OUT-OF-SAMPLE ANALYSIS: ECONOMIC OUTCOMES

This table repeats the analysis of Tables 7 and 8 with a sample from 2022 to March 2023. We replicate the findings of Table 7 in Panel A and Table 8 in Panel B. We use industry and time fixed effects. All continuous variables are winsorized at 1% and 99% levels. Standard errors are clustered at firm-level. ***, **, and * denote significance at 1%, 5%, and 10% levels, respectively.

§ TABLE 10. RELATIVE IMPORTANCE OF RISK INFORMATION

This table reports the relative importance of each risk measure over time. We set four-quarter rolling estimation windows. In Panel A, we estimate Implied_Volatility_it= β RiskAssess_it + γX_it + δ_q + δ_s + ε_it separately for PRiskAssess, CRiskAssess, and AIRiskAssess, and report the coefficients and their t-values. In Panel B, we estimate Implied_Volatility_it =β_1 PRiskAssess_it + β_2 CRiskAssess_it + β_3 AIRiskAssess_it+ γX_it + δ_q + δ_s + ε_it and report β_1, β_2, and β_3, and their corresponding t-values. t-values are clustered at the firm-level. All continuous variables are winsorized at 1% and 99% levels.

§ TABLE 11. ASSET PRICING ON FIRM-LEVEL RISK ASSESSMENT

In Panel A, we present <cit.> regression results and Newey-West t-values (with a lag of 3). We regress monthly stock returns on our firm-level risk exposure measures and the following control variables: stock return for the prior month, stock return for the prior year (skipping the most recent month), log of the market value of equity, log of book-to-market ratio, operating profitability, and investment. We use returns from January 2018 to March 2023. r_0,1 is the lagged monthly return, r_2,12 is the lagged yearly return after skipping a month. Investment is the asset growth ratio. Profitability is (total revenue – cost of goods sold – (sales, administrative expense – R&D expense)) scaled by total assets. log(ME) is the natural logarithm of the market value, and log(BE/ME) is the natural logarithm of the book-to-market ratio. We use the natural logarithm of annualized risk assessment measures. Annualized risk assessment is the average value of quarterly risk assessment values. All continuous independent variables are trimmed at 1% and 99%. In Panel B, we form portfolios on March 31 of the subsequent year and delete observations with zero annualized risk exposure.
We report quintile portfolio alphas using the <cit.> five-factor model. We then report monthly high-minus-low alphas and their corresponding t-values. t-values are Newey-West adjusted with a lag of 3. *, **, and *** denote statistical significance at 10%, 5%, and 1% levels, respectively.
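The portfolio test described in Panel B can be sketched in the same spirit. The snippet below is again our illustration, with hypothetical column names: it sorts firms into quintiles on the annualized exposure each month, computes equal-weighted portfolio returns, and regresses the high-minus-low return on five factor returns (e.g., Mkt_RF, SMB, HML, RMW, and CMA, such as those available from the Fama-French data library) with Newey-West standard errors.

```python
# Illustrative sketch only; "risk_ann", "ret", and the factor columns are hypothetical,
# and `factors` is assumed to be a DataFrame of monthly factor returns indexed by month.
import pandas as pd
import statsmodels.formula.api as smf

def high_minus_low_alpha(monthly: pd.DataFrame, factors: pd.DataFrame, lags: int = 3):
    df = monthly.copy()
    # Quintile sort on the annualized risk assessment within each month.
    df["q"] = df.groupby("month")["risk_ann"].transform(
        lambda x: pd.qcut(x, 5, labels=False, duplicates="drop"))
    port = df.groupby(["month", "q"])["ret"].mean().unstack()   # equal-weighted portfolios
    hml = (port[4] - port[0]).rename("hml").to_frame().join(factors).dropna()
    fit = smf.ols("hml ~ Mkt_RF + SMB + HML + RMW + CMA", data=hml).fit(
        cov_type="HAC", cov_kwds={"maxlags": lags})
    return fit.params["Intercept"], fit.tvalues["Intercept"]    # monthly alpha and its t-value
```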
http://arxiv.org/abs/2310.17721v1
{ "authors": [ "Alex Kim", "Maximilian Muhn", "Valeri Nikolaev" ], "categories": [ "econ.GN", "cs.AI", "cs.CL", "q-fin.EC" ], "primary_category": "econ.GN", "published": "20231026183037", "title": "From Transcripts to Insights: Uncovering Corporate Risks Using Generative AI" }
A class of fractional differential equations via power non-local and non-singular kernels: existence, uniqueness and numerical approximationsThis is a preprintof a paper whose final form is published in Physica D: Nonlinear Phenomena (ISSN 0167-2789). Submitted 19-Jan-2023; revised 15-May-2023; accepted for publication 11-Oct-2023. Hanaa Zitane Delfim F. M. TorresCorresponding author. Center for Research and Development in Mathematics and Applications (CIDMA), Department of Mathematics, University of Aveiro, 3810-193 Aveiro, Portugal =================================================================================================================================================================================================================================================================================================================================================== We prove a useful formula and new propertiesfor the recently introduced power fractional calculuswith non-local and non-singular kernels. In particular,we prove a new version of Gronwall's inequality involvingthe power fractional integral; and we establish existenceand uniqueness results for nonlinear power fractionaldifferential equations using fixed point techniques. Moreover,based on Lagrange polynomial interpolation, we developa new explicit numerical method in order to approximatethe solutions of a rich class of fractional differential equations.The approximation error of the proposed numerical scheme is analyzed.For illustrative purposes, we apply our method to a fractional differentialequation for which the exact solution is computed, as well as to anonlinear problem for which no exact solution is known. The numerical simulations show that the proposed method is very efficient,highly accurate and converges quickly.Keywords: fractional initial value problems;Gronwall's inequality;non-singular kernels;numerical methods;power fractional calculus. 2020 Mathematics Subject Classification:26A33, 26D15, 34A08, 34A12. § INTRODUCTIONOver the last decades, fractional differential equations (FDEs)have been used to model a large variety of physical, biological,and engineering problems <cit.>. Often,since most dynamical systems involve memory or hereditary effects,the non-locality properties of the fractional derivativesmake them more accurate in modeling when compared withthe classical local operators. That gave rise to theintroduction of different kinds of non-local fractional derivativeswith non-singular kernels <cit.>, e.g., Caputo–Fabrizio <cit.>, Atangana–Baleanu <cit.>,   weighted Atangana–Baleanu <cit.>, and Hattaf fractionalderivatives <cit.>. In 2022, a generalized version of all the previous non-local fractional derivatives with non-singular kernels was introduced: the so-calledpower fractional derivative (PFD) <cit.>.PFDs are based on the  generalized power Mittag–Leffler function,which contains a key “power” parameter p that plays a very importantrole by enabling researchers, engineers and scientists, to selectthe adequate fractional derivative that models more accuratelythe real world phenomena under study. 
The authors of <cit.> presented the basic properties of the new power fractional derivative and integral.Moreover, they provided the Laplace transform corresponding to the PFD,which is then applied to solve a class of linear fractional differential equations.The question of existence and uniqueness of nonlinear FDEs,as well as their various applications, have beendiscussed by many researchers: see, for instance,<cit.>and references cited therein. Analyzing the literature, one may conclude that Gronwall's inequalityand its extensions are one of the most fundamental tools in all such results.Indeed, several versions of this classical inequality, involving fractional integralswith non-singular kernels, have been provided in order to develop the quantitativeand qualitative properties of the fractional differential equations to be investigated<cit.>. For example, in <cit.>, Hattaf et al. establisha Gronwall's inequality in the framework of generalized Hattaf fractional integrals,while in <cit.> Alzabut et al. prove a Gronwall's inequality via Atangana–Baleanufractional integrals. Motivated by the proceeding, the first main purpose of the present workis to derive a new version of Gronwall's inequality, as well as to studythe existence and uniqueness of solutions for nonlinear fractional differentialequations in the framework of more general power fractional operatorswith non-local and non-singular kernels. On the other hand, we develop an appropriate numerical method to deal with power differential equations.Numerical methods have been recognized as indispensablein fractional calculus <cit.>. They providepowerful mathematical tools to solve nonlinear ordinarydifferential equations and fractional differential equationsmodeling complex real phenomena. Numerical methods are generallyapplied to predict the behavior of dynamical systems when allthe used analytical methods fail, as it often the case.Various numerical schemes have been developed to approximatethe solutions of different types of fractional differential equationswith singular and non-singular kernels<cit.>.For example, in <cit.> a numerical scheme, that recovers theclassical Euler's method for ordinary differential equations, is proposed,in order to obtain numerical solutions of FDEs with generalizedHattaf fractional derivatives; in <cit.> collocation andpredictor-corrector methods on piece-wise polynomial spacesare developed to solve tempered FDEs with Caputo fractionalderivatives; while in <cit.> a numerical approximationfor FDEs with Atangana–Baleanu fractional derivatives is investigated.However, to the best of our knowledge, no numerical methods have yetbeen developed to solve FDEs in the framework of power fractional derivatives.Consequently, the second main purpose of our work is to developa new numerical scheme for approximating the solutions of such general and powerful differential equations.The remainder of this article is organized as follows.Section <ref> states the necessary preliminaries,including the definitions of power fractional derivativeand integral in the Caputo sense. 
In Section <ref>, we establish a new and important formula and properties for the power fractional operators with non-local and non-singular kernels that we will need in the sequel. Section <ref> deals with a new, more general version of Gronwall's inequality for the power fractional integral. Then we proceed with Section <ref>, which is devoted to the existence and uniqueness of solutions to FDEs involving PFDs. Section <ref> introduces a new numerical scheme with its error analysis, allowing one to investigate, in practical terms, power FDEs. Applications and numerical simulations of our main results are given in Section <ref>. We end with Section <ref> of conclusions.

§ ESSENTIAL PRELIMINARIES AND NOTATIONS

In this section, we recall necessary definitions and results from the literature that will be useful in the sequel. Throughout this paper, g∈ H^1(a,b) is a sufficiently smooth function on [a, b], with a, b ∈ℝ, and H^1(a,b) is the Sobolev space of order one. Also, AC([a, b]) denotes the space of absolutely continuous functions u on [a, b] endowed with the norm ‖u‖ = sup_{t∈[a, b]} |u(t)|. In addition, we adopt the notations

ϕ(α) := (1-α)/N(α),  ψ(α) := α/N(α),

where α∈ [0, 1) and N(α) is a positive normalization function obeying N(0)=N(1^-)=1 with N(1^-) = lim_{α→ 1^-} N(α).

The power Mittag–Leffler function is given by

^pE_k,l(s) = ∑_{n=0}^{+∞} (s ln p)^n / Γ(k n + l),  s∈ℂ,

where k>0, l>0, p>0, and Γ(·) is the Gamma function <cit.>.

The term ln(p) that is introduced in Definition <ref> of the power Mittag–Leffler function ^pE_k,l(·) allows, by taking particular cases, to obtain several important functions available in the literature, for example, the Mittag–Leffler function of one parameter ^eE_k,1(·) <cit.>, the Wiman function ^eE_k,l(·) <cit.>, and those introduced by Prabhakar <cit.> and <cit.>.

Let α∈ [0, 1), β > 0, p>0, and g∈ H^1(a,b). The power fractional derivative (PFD) of order α, in the Caputo sense, of a function g with respect to the weight function ω, is defined by

^p^C D_a,t,ω^α,β,p g(t) = (1/(ϕ(α) ω(t))) ∫_a^t ^pE_β,1(-μ_α (t-s)^β) (ω g)'(s) ds,

where μ_α := α/(1-α) and ω∈ C^1([a,b]) with ω>0 on [a,b].

The PFD is a fractional derivative with non-singular kernel, while the classical Caputo fractional derivative is a fractional operator with singular kernel. Therefore, PFDs belong to a different family and do not include Caputo derivatives as special cases.

Note that the PFD (<ref>) includes many interesting fractional derivatives that exist in the literature, such as:
* if p=e, then we retrieve the generalized Hattaf fractional derivative <cit.>, given by ^p^C D_a,t,ω^α,β,e g(t) = (1/(ϕ(α) ω(t))) ∫_a^t E_β,1(-μ_α (t-s)^β) (ω g)'(s) ds;
* if β=α, p=e and ω(t) ≡ 1, then we obtain the Atangana–Baleanu fractional derivative <cit.>, defined as ^p^C D_a,t,1^α,α,e g(t) = (1/ϕ(α)) ∫_a^t E_α,1(-μ_α (t-s)^α) g'(s) ds;
* if β=1, p=e and ω(t) ≡ 1, then we get the Caputo–Fabrizio fractional derivative <cit.>, given by ^p^C D_a,t,1^α,1,e g(t) = (1/ϕ(α)) ∫_a^t exp(-μ_α (t-s)) g'(s) ds.

The power fractional integral associated with the power fractional derivative ^p^C D_a,t,ω^α,β,p is given in Definition <ref>.

The power fractional integral (PFI) of order α, of a function g with respect to the weight function ω, is given by

^pI_a,t,ω^α,β,p g(t) = ϕ(α) g(t) + ln p·ψ(α) ^R^LI_a,ω^β g(t),

where ^R^LI_a,ω^β denotes the standard weighted Riemann–Liouville fractional integral of order β, given by

^R^LI_a,ω^β g(t) = (1/(Γ(β) ω(t))) ∫_a^t (t-s)^β-1 (ω g)(s) ds.

For p=e, the PFI (<ref>) coincides with the generalized fractional integral introduced in <cit.>.
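For concreteness, the two definitions above can be evaluated numerically. The sketch below is our illustration and is not part of the original paper: it truncates the power Mittag-Leffler series at a fixed number of terms, approximates the weighted Riemann-Liouville integral in the PFI by a trapezoidal rule that omits the weakly singular endpoint s = t, and assumes N(α) ≡ 1; the truncation level and grid size are arbitrary choices. The quick check at the end compares the PFI of g(t) = t^2 with the closed form ϕ(α) t^2 + 2 ln(p) ψ(α) t^{β+2} / Γ(β+3) that appears later in the examples section.

```python
# Minimal numerical sketch of the PFD/PFI ingredients defined above (illustration only).
import math
import numpy as np

def power_mittag_leffler(s, k, l, p, n_terms=80):
    """Truncated series  ^pE_{k,l}(s) = sum_{n>=0} (s ln p)^n / Gamma(k n + l)."""
    x = s * math.log(p)
    return sum(x ** n / math.gamma(k * n + l) for n in range(n_terms))

def power_fractional_integral(g, t, a, alpha, beta, p,
                              omega=lambda s: np.ones_like(np.asarray(s, dtype=float)),
                              n_grid=4000):
    """PFI with N(alpha) = 1:  phi(alpha) g(t) + ln(p) psi(alpha) RL_{a,omega}^beta g(t).
    `g` and `omega` must accept NumPy arrays."""
    phi, psi = 1.0 - alpha, alpha
    s = np.linspace(a, t, n_grid, endpoint=False)   # omit s = t (kernel weakly singular for beta < 1)
    f = (t - s) ** (beta - 1.0) * omega(s) * g(s)
    # Trapezoidal rule written out explicitly to stay library-version agnostic.
    rl = float(np.sum((f[1:] + f[:-1]) * np.diff(s)) / 2.0) / (math.gamma(beta) * float(omega(t)))
    return phi * g(t) + math.log(p) * psi * rl

# Quick consistency check: PFI of g(t) = t^2 versus its closed form.
alpha, beta, p, t = 0.5, 0.8, 1.1, 2.0
num = power_fractional_integral(lambda s: np.asarray(s) ** 2, t, 0.0, alpha, beta, p)
exact = (1 - alpha) * t ** 2 + 2 * math.log(p) * alpha * t ** (beta + 2) / math.gamma(beta + 3)
print(num, exact)   # the two values should be close
```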
The Gronwall's inequality in the framework of the weighted Riemann–Liouvillefractional integral is given in <cit.>.Suppose β>0, h and u are non-negativeand locally integrable functions on [a,b),and v is a non-negative, non-decreasing, and continuous function on [a,b) satisfyingv(t) ≤λ, where λ is a constant. Ifh(t)≤ u(t)+v(t)^R^LI_a,ω^βh(t),thenh(t)≤ u(t)+∫_a^t∑_n=1^+∞(v(t))^nΓ(nβ) (t-s)^nβ-1u(s) ds. § NEW PROPERTIES OF THE POWER FRACTIONAL OPERATORSIn this section, we establish a new important formula and propertiesfor the power fractional operators. They will be useful in the sequel to achieve the main goals formulated in Section <ref>.The power Mittag–Leffler function ^pE_k,l(s)is locally uniformly convergent for any s∈ℂ.The proof is similar to the proof of Theorem 1 of <cit.>. We prove a new formula for the power fractional derivativein the form of an infinite series of the standard weightedRiemann–Liouville fractional integral, which brings outmore clearly the non-locality properties of the fractionalderivative and, for certain computational purposes,is easier to handle than the original formula (<ref>).The power fractional derivative^p^C D_a,t,ω^α,β,pcan be expressed as follows:^p^C D_a,t,ω^α,β,pg(t) =1ϕ(α)∑_n=0^+∞(-μ_αln p)^n^R^LI_a,ω^β n+1((ω g)'ω)(t),where the series converges locally and uniformly in t for anya, α, β, p, ω and g verifyingthe conditions laid out in Definition <ref>.The power Mittag–Leffler function ^pE_k,l(s)is an entire function of s. Since it is locallyuniformlyconvergent in the whole complex plane (see Lemma <ref>),then the PFD may be rewritten as follows:^p^C D_a,t,ω^α,β,pg(t)=1ϕ(α)1ω(t)∑_n=0^+∞(-μ_αln p)^nΓ(β n+1)∫_a^t (t-x)^β n(ω g)'(x) dx =1ϕ(α)∑_n=0^+∞(-μ_αln p)^n1Γ(β n+1)1ω(t)∫_a^t (t-x)^β n(ω g)'(x) dx=1ϕ(α)∑_n=0^+∞(-μ_αln p)^n^R^LI_a,ω^β n+1((ω g)'ω)(t), which completes the proof.Let α∈ [0, 1), β > 0, p>0, and g ∈ H^1(a,b). Then,(i) ^p^C D_a,t,ω^α,β,p(^pI_a,t,ω^α,β,pg)(t) =g(t)-(ω g)(a)ω(t);(ii) ^pI_a,t,ω^α,β,p(^p^C D_a,t,ω^α,β,pg)(t) =g(t)-(ω g)(a)ω(t). We begin by proving (i). According to Lemma <ref>, one has^p^C D_a,t,ω^α,β,p(^pI_a,t,ω^α,β,pg)(t) =1ϕ(α)∑_n=0^+∞(-μ_αln p)^n^R^LI_a,ω^β n+1((ω(^pI_a,t,ω^α,β,p g))'ω)(t).From Definition <ref>, it follows that^p^C D_a,t,ω^α,β,p(^pI_a,t,ω^α,β,pg)(t)=1ϕ(α)∑_n=0^+∞(-μ_αln p)^n^R^LI_a,ω^β n+1[ϕ(α)(ω g)'ω+ln p ·ψ(α)(ω^R^LI_a,ω^βg)'ω](t)=∑_n=0^+∞(-μ_αln p)^n^R^LI_a,ω^β n+1((ω g)'ω)(t)+ μ_αln p∑_n=0^+∞(-μ_αln p)^n^R^LI_a,ω^β n+1((ω^R^LI_a,ω^βg)'ω)(t).Therefore,^p^C D_a,t,ω^α,β,p(^pI_a,t,ω^α,β,pg)(t)=∑_n=0^+∞(-μ_αln p)^n[^R^LI_a,ω^β n g(t)-(ω g)(a)^R^LI_a,ω^β n(1ω)(t)]-∑_n=0^+∞(-μ_αln p)^n+1[^R^LI_a,ω^β (n+1) g(t)-(ω g)(a)^R^LI_a,ω^β (n+1)(1ω)(t)]=∑_n=0^+∞(-μ_αln p)^n[^R^LI_a,ω^β n g(t)-(ω g)(a)^R^LI_a,ω^β n(1ω)(t)]-∑_n=1^+∞(-μ_αln p)^n[^R^LI_a,ω^β n g(t)-(ω g)(a)^R^LI_a,ω^β n(1ω)(t)]=^R^LI_a,ω^0g(t) -(ω g)(a)^R^LI_a,ω^0(1ω)(t)=g(t)-(ω g)(a)ω(t).Now, we prove (ii). According to Definition <ref>, one has^pI_a,t,ω^α,β,p(^p^CD_a,t,ω^α,β,pg)(t) =ϕ(α)^p^C D_a,t,ω^α,β,pg(t) +ln p·ψ(α) ^R^LI_a,ω^β(^p^C D_a,t,ω^α,β,pg)(t).By applying Lemma <ref>, we obtain that^pI_a,t,ω^α,β,p (^p^CD_a,t,ω^α,β,pg)(t)=∑_n=0^+∞(-μ_αln p)^n^R^LI_a,ω^β n+1((ω g)'ω)(t)+μ_αln p ^R^LI_a,ω^β[∑_n=0^+∞(-μ_αln p)^n^R^LI_a,ω^β n+1((ω g)'ω)(t)]=∑_n=0^+∞(-μ_αln p)^n^R^LI_a,ω^β n+1((ω g)'ω)(t) - ∑_n=0^+∞(-μ_αln p)^n+1^R^LI_a,ω^β (n+1)+1((ω g)'ω)(t)=∑_n=0^+∞(-μ_αln p)^n^R^LI_a,ω^β n+1((ω g)'ω)(t) - ∑_n=1^+∞(-μ_αln p)^n^R^LI_a,ω^β n+1((ω g)'ω)(t)=^R^LI_a,ω^1((ω g)'ω)(t)=1ω(t)∫_a^t(ω g)'(x) dx=g(t)-(ω g)(a)ω(t). 
The proof is complete.Theorem <ref> proves that the power fractional derivativeand integral are commutative operators. If we let p=e in Theorem <ref>,then we obtain the results presented in Theorem 3 of <cit.>for the generalized Hattaf fractional operators. As a corollary of our Theorem <ref>, we extend the Newton–Leibniz formulaproved in <cit.>. The power fractional derivative and integral satisfy the Newton–Leibniz formula^p^C D_a,t,1^α,β,p(^pI_a,t,1^α,β,pg)(t) =^pI_a,t,1^α,β,p(^p^CD_a,t,1^α,β,pg)(t) =g(t)-g(a). Follows from Theorem <ref>with ω(t) ≡ 1. § GRONWALL'S INEQUALITY VIA PFIIn this section we establish a Gronwall's inequality in the frameworkof the power fractional integral. Our proof uses Lemma <ref>. Let α∈ [0, 1), β>0, and p>0.Suppose h and u are non-negativeand locally integrable functions on [a,b),and v is a non-negative, non-decreasing, and continuous function on [a,b) satisfyingv(t) ≤λ, where λ is a constantsuch that 1-ϕ(α)λ>0. Ifh(t)≤ u(t)+v(t)^pI_a,t,ω^α,β,ph(t),then h(t)≤u(t)1-ϕ(α)v(t) + ∫_a^t∑_n=1^+∞( ln p·ψ(α)v(t))^nu(s)(t-s)^nβ-1Γ(nβ)(1-ϕ(α)v(t))^n(1-ϕ(α)v(s)) ds. By virtue of condition (<ref>)and the PFI formula (<ref>), one has h(t)≤ u(t)+ϕ(α)v(t)h(t)+ln p ·ψ(α)v(t) ^R^LI_a,ω^βh(t),which leads to h(t)≤u(t)1-ϕ(α)v(t) +ln p·ψ(α)v(t)1-ϕ(α) v(t)^R^LI_a,ω^βh(t).Let V(t)=ln p·ψ(α)v(t)1-ϕ(α)v(t).This function is non-negative and non-decreasing and, by applyingthe result of Lemma <ref> withU(t)=u(t)1-ϕ(α)v(t), it follows thath(t)≤ U(t)+∫_a^t∑_n=1^+∞(V(t))^nΓ(nβ) (t-s)^nβ-1U(s) ds. Hence,h(t)≤u(t)1-ϕ(α)v(t) + ∫_a^t∑_n=1^+∞( ln p·ψ(α)v(t))^nu(s) (t-s)^nβ-1Γ(nβ)(1-ϕ(α) v(t))^n(1-ϕ(α)v(s)) ds,and the proof is complete. Under the hypotheses of Theorem <ref>,assume further that v(t) is a non-decreasingfunction on [a, b). Then,h(t)≤u(t)1-ϕ(α)v(t)^pE_α,β( ψ(α)v(t) (t-a)^β1-ϕ(α)v(t)). By virtue of inequality (<ref>) and the assumptionthat u(t) is a non-decreasing function on [a, b),one may write thath(t) ≤u(t)1-ϕ(α)v(t) + u(t)1-ϕ(α)v(t)∫_a^t∑_n=1^+∞(ln p·ψ(α)v(t))^n (t-s)^nβ-1Γ(nβ)(1-ϕ(α)v(t))^n ds≤u(t)1-ϕ(α)v(t)(1 +∑_n=1^+∞( ln p·ψ(α)v(t))^nΓ(nβ)(1-ϕ(α)v(t))^n∫_a^t(t-s)^nβ-1 ds)≤u(t)1-ϕ(α)v(t)(1+ ∑_n=1^+∞( ln p·ψ(α)v(t))^n (t-a)^nβΓ(nβ)(1-ϕ(α)v(t))^n).Therefore, h(t)≤u(t)1-ϕ(α)v(t)^pE_α,β(ψ(α)v(t) (t-a)^β1-ϕ(α)v(t)),which completes the proof. Our Gronwall's inequality for the power fractional integral,as given in Corollary <ref>, includes,as particular cases, most of existing Gronwall's inequalitiesfound in the literature that involve integralswith non-local and non-singular kernel, such us * the Gronwall's inequality in the frameworkof the Atangana–Baleanu integral <cit.>,obtained when p=e, ω≡ 1 and β=α;* the Gronwall's inequality in the framework ofthe generalized Hattaf fractional derivative <cit.>,obtained when p=e.Let α∈ [0, 1), β > 0, and p>0.Suppose that h and u are non-negative and locallyintegrable functions on [a,b) and v(t) ≡λ be such that1-λϕ(α)>0. 
Ifh(t)≤ u(t)+λ^pI_a,t,ω^α,β,ph(t),then h(t)≤u(t)1-λϕ(α)^pE_α,β(λψ(α) (t-a)^β1-λϕ(α)).§ EXISTENCE AND UNIQUENESS OF SOLUTIONS FOR POWER FDESIn this section we study sufficient conditions for the existence and uniqueness of solution to the power fractional initial value problem^p^C D_a,t,ω^α,β,py(t) =f(t,y(t)),t∈ [a,b]withy(a)=y_0,where ^p^C D_a,t,ω^α,β,p denotes the PFDof order α, defined by (<ref>), f:[a,b]×ℝ⟶ℝ is a continuous nonlinear function withf(a, y(a))=0 and y_0∈ℝ is the initial condition.A function y∈ C([a,b]) is a solution of (<ref>)–(<ref>)if, and only if, it satisfies the integral equation y(t)=ω(a)ω(t)y_0+^pI_a,t,ω^α,β,pf(t,y(t)). First, suppose that y fulfills the integral formula (<ref>). Then,y(a)=y_0+^pI_a,t,ω^α,β,pf(a,y(a)).Since f(a,y(a))=0, we obtain that y(a)=y_0.Moreover, using the fact that y(t) satisfies (<ref>)and (i) of Theorem <ref>, it follows that^p^C D_a,t,ω^α,β,py(t) =^p^C D_a,t,ω^α,β,p( ω(a)ω(t)y_0) -ω(a)f(a,y(a))ω(t)+f(t,y(t)),which implies that ^p^C D_a,t,ω^α,β,py(t)= f(t,y(t)). Then y(t) satisfies (<ref>)–(<ref>). Now, let us suppose that y is a solution of the Cauchy problem(<ref>)–(<ref>). Applying the power fractionalintegration operator to both sides of (<ref>),and using formula (ii) of Theorem <ref>, we gety(t)=ω(a)ω(t)y(a) +^pI_a,t,ω^α,β,pf(t,y(t)).Therefore, since y(a)=y_0, we obtainformula (<ref>).Let y and z be two solutions of system (<ref>)–(<ref>).Assume that the function f∈ C([a,b]×ℝ,ℝ)is Lipschitz in its second variable, that is,there exists a constant L>0 such that |f(t,y)-f(t,z)|≤ L |y-z |,∀ y,z ∈ℝ  and   t∈ [a,b].If in addition L<1ϕ(α), then y=z. Let y and z be two solutions of problem (<ref>)–(<ref>).By virtue of Lemma <ref>, one hasy(t)-z(t)=^pI_a,t,ω^α,β,p(f(t,y(t)) -f(t,z(t))).Taking into account condition (<ref>), it yields that|y(t)-z(t)|≤ L ^pI_a,t,ω^α,β,p|y(t)-z(t)|. By applying the result of Corollary <ref>, one obtains that|y(t)-z(t)|≤01-Lϕ(α)^pE_α,β(Lψ(α) (t-a)^β1-Lϕ(α)).It follows that y=z for all t ∈ [a,b]. Assume that the function f∈ C([a,b]×ℝ,ℝ)is Lipschitz in its second variable such that condition (<ref>) holds. If L (ϕ(α)+ln p·ψ(α) (b-a)^βΓ(β+1))<1, then the Cauchy problem (<ref>)–(<ref>)has a unique solution. Let us define the operatorΛ: AC([ a, b])⟶ AC([ a, b]) as follows:(Λ y)(t)=ω(a)ω(t)y(a) +^pI_a,t,ω^α,β,pf(t,y(t)), t∈ [a, b].For all y , z ∈ AC([ a, b]) and t ∈ [a, b ], one has |(Λ y)(t)-(Λ z)(t)|=|^pI_a,t,ω^α,β,pf(t,y(t)) -^pI_a,t,ω^α,β,pf(t,z(t))|≤ | ϕ(α)(f(t,y(t))-f(t,z(t))) + ln p·ψ(α)(^R^LI_a,ω^β f(t,y(t))-^R^LI_a,ω^βf(t,z(t)))|≤ϕ(α)|f(t,y(t))-f(t,z(t))| + ln p·ψ(α)^R^LI_a,ω^β|f(t,y(t))-f(t,z(t))|.Using the fact that f satisfies the Lipschitz condition(<ref>), we obtain that|(Λ y)(t)-(Λ z)(t)|≤ Lϕ(α)|y-z|+ L ln p ·ψ(α)|y-z|^R^LI_a,ω^β(1)(t)≤ Lϕ(α)|y-z|+ L ln p ·ψ(α)(t-a)^βΓ(β+1)|y-z|.Therefore,(Λ y)(t)-(Λ z)(t)≤ L( ϕ(α)+ln p·ψ(α) (b-a)^βΓ(β+1))y-z.Hence, by virtue of (<ref>), we conclude that Λis a contraction mapping. As a consequence of the Banach contractionprinciple, we conclude that system (<ref>) has a unique solution. § NUMERICAL ANALYSISNow we shall present a numerical method to approximatethe solution of the nonlinear fractional differential equation (<ref>) subject to (<ref>), which is predicted by Theorem <ref>. Moreover, we also analyzethe approximation error obtained from the new introducedscheme. Our main tool is the two-step Lagrange interpolation polynomial. 
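Before presenting the scheme, note that the sufficient condition of the existence-uniqueness theorem above is easy to check numerically. The sketch below is only an illustration, assuming N(α) ≡ 1 and a known Lipschitz constant L; the parameter values correspond to the nonlinear example analyzed later in the examples section (L = 1/15, β = 1, p = e, [a,b] = [0,4]), for which the contraction constant equals (1+3α)/15.

```python
# Illustrative check of the sufficient condition
#   L * ( phi(alpha) + ln(p) * psi(alpha) * (b - a)**beta / Gamma(beta + 1) ) < 1,
# assuming N(alpha) = 1, so phi(alpha) = 1 - alpha and psi(alpha) = alpha.
import math

def contraction_constant(L, alpha, beta, p, a, b):
    phi, psi = 1.0 - alpha, alpha
    return L * (phi + math.log(p) * psi * (b - a) ** beta / math.gamma(beta + 1))

# Data of the nonlinear example treated later: L = 1/15, beta = 1, p = e, [a, b] = [0, 4].
for alpha in (0.25, 0.50, 0.75, 0.95):
    q = contraction_constant(1 / 15, alpha, 1.0, math.e, 0.0, 4.0)
    print(f"alpha = {alpha:.2f}:  contraction constant = {q:.3f}, "
          f"condition satisfied: {q < 1}")
```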
§.§ Numerical scheme Consider the power nonlinear fractional differential equation ^p^C D_a,t,ω^α,β,py(t) =f(t,y(t))subject to the given initial conditiony(a)=y_0.From Theorem <ref>, equation (<ref>)can be converted into the fractional integral equationy(t)- ω(a)ω(t)y(a) =ϕ(α)f(t,y(t))+ln p·ψ(α)^p I_a,t,ω^α,β,pf(t,y(t)),which implies thaty(t)=ω(a)ω(t)y(a)+ϕ(α)f(t,y(t)) +ln p·ψ(α)Γ(β)1ω(t)∫_a^t(t-s)^β-1ω(s)f(s,y(s)) ds.Let t_n=a+nh with n∈ℕand h the discretization step. One hasy(t_n+1)=ω(a)ω(t_n)y(a) +ϕ(α)f(t_n,y(t_n))+ln p ·ψ(α)Γ(β)1ω(t_n)∫_a^t_n+1(t_n+1-s)^β-1ω(s) f(s,y(s)) ds,which yieldsy(t_n+1)=ω(a)ω(t_n)y(a)+ϕ(α)f(t_n,y(t_n)) +ln p·ψ(α)Γ(β)1ω(t_n)∑_k=0^n∫_t_k^t_k+1(t_n+1-s)^β-1g(s,y(s))dswith g(s,y(s))=ω(s)f(s,y(s)). Function g maybe approximated over [t_k-1, t_k], k = 1, 2, …, n,by using the Lagrange interpolating polynomial that passes throughthe points (t_k-1, g(t_k-1, y_k-1)) and (t_k, g(t_k, y_k)), as follows: P_k(s) =s-t_kt_k-1-t_kg(t_k-1, y(t_k-1)) +s-t_k-1t_k-t_k-1g(t_k, y(t_k))≈g(t_k-1, y_k-1)h(t_k-s) +g(t_k, y_k)h(s-t_k-1).Replacing the approximation (<ref>)in equation (<ref>), we obtain thaty_n+1 =ω(a)ω(t_n)y_0 +ϕ(α)ω(t_n)g(t_n,y_n)+ln p·ψ(α)Γ(β)1ω(t_n)∑_k=1^n[g(t_k-1, y_k-1)h∫_t_k^t_k+1 (t_n+1-s)^β-1(t_k-s) ds.+.g(t_k, y_k)h∫_t_k^t_k+1(t_n+1-s)^β-1(s-t_k-1) ds ].Moreover, we have∫_t_k^t_k+1(t_n+1-s)^β-1(t_k-s)ds=h^β+1β(β+1)[(n-k)^β(n-k+1+β) -(n-k+1)^β+1]and∫_t_k^t_k+1(t_n+1-s)^β-1(s-t_k-1) ds = h^β+1β(β+1)[(n-k+1)^β(n-k+2+β)-(n-k)^β(n-k+2+2β)].The above equations (<ref>) and (<ref>) can then be includedin equation (<ref>) to produce the following numerical scheme:y_n+1=ω(a)ω(t_n)y_0 +ϕ(α)f(t_n,y_n)+ln p ·ψ(α)h^βΓ(β+2)ω(t_n)∑_k=1^nω(t_k-1)f(t_k-1, y_k-1)A^β_n,k +ω(t_k)f(t_k, y_k) B^β_n,kwithA^β_n,k=(n-k)^β(n-k+1+β) -(n-k+1)^β+1andB^β_n,k=(n-k+1)^β(n-k+2+β)-(n-k)^β(n-k+2+2β).The techniques used in this section are similar to the ones in <cit.>for the generalized Hattaf fractional derivative and in <cit.>for the Atangana–Baleanu fractional derivative.§.§ Error analysis We now examine the numerical error of our developed approximation scheme (<ref>).Let (<ref>) be a nonlinear power fractional differential equation,such that g=ω f has a bounded second derivative. 
Then,the approximation error is estimated to verify| R^α,β,p_n|≤ln p·ψ(α)h^β+24Γ(β+2) ω(t_n)(n+1)(n+4+2β)[(n+1)^β-β n^β] s∈ [a,t_n+1]max| g^(2)(s,y(s))|.From (<ref>), one hasy(t_n+1)=ω(a)ω(t_n)y(a) +ϕ(α)f(t_n,y(t_n))+ln p ·ψ(α)Γ(β)1ω(t_n)∑_k=0^n∫_t_k^t_k+1 (t_n+1-s)^β-1g(s,y(s)) ds.Therefore,y(t_n+1) =ω(a)ω(t_n)y(a)+ϕ(α)f(t_n,y(t_n))+ln p·ψ(α)Γ(β)1ω(t_n)∑_k=0^n∫_t_k^t_k+1(t_n+1-s)^β-1[P_k(s) +(s-t_k)(s-t_k-1)2![g^(2)(s,y(s))]_s=ξ_s]ds,which implies thaty(t_n+1)=ω(a)ω(t_n)y(a)+ϕ(α)f(t_n,y(t_n))+ϕ(α)f(t_n,y_n)+ln p·ψ(α) h^βΓ(β+2)ω(t_n)∑_k=0^n g(t_k-1, y_k-1)A^β_n,k+g(t_k, y_k) B^β_n,k+R^α,β,p_nwith the remainderR^α,β,p_n =ln p·ψ(α)Γ(β)1ω(t_n)∑_k=0^n∫_t_k^t_k+1(t_n+1-s)^β-1(s-t_k)(s-t_k-1)2![g^(2)(s,y(s))]_s=ξ_s ds.Using the fact that function s ↦ (s-t_k-1)(t_n+1-s)is positive on the interval [t_k, t_k+1], it follows thatthere exists a ξ_k∈ [t_k, t_k+1] such thatR^α,β,p_n =ln p·ψ(α)Γ(β)1ω(t_n)∑_k=0^ng^(2)(ξ_k,y(ξ_k)) (ξ_k-t_k)2∫_t_k^t_k+1(t_n+1-s)^β-1(s-t_k-1)ds.Using (<ref>), we obtain thatR^α,β,p_n=ln p ·ψ(α)h^β+12Γ(β+2)ω(t_n)∑_k=0^ng^(2)(ξ_k,y(ξ_k))(ξ_k-t_k)B^β_n,k.Therefore,| R^α,β,p_n|≤ln p·ψ(α) h^β+22Γ(β+2)ω(t_n)s∈ [a,t_n+1]max| g^(2)(s,y(s)) |·|∑_k=0^nB^β_n,k|.Then, from formulasB^β_n,k =(n-k+2+β)[(n-k+1)^β-β(n-k)^β]≤(n-k+2+β)[(n+1)^β-β n^β]and∑_k=0^n(n-k+2+β)=(n+1)(n+4+2β)2,we deduce that| R^α,β,p_n|≤ln p·ψ(α)h^β+24Γ(β+2) ω(t_n)(n+1)(n+4+2β)[(n+1)^β -β n^β]s∈ [a,t_n+1]max| g^(2)(s,y(s))|,which completes the proof. § EXAMPLES AND SIMULATION RESULTSIn this section, we begin by illustrating the suggested numerical methodof Section <ref> with a power FDE for which we can computeits exact solution. Then, as a second example, we apply our main analyticaland numerical results to a nonlinear power FDEfor which no exact solution is known. Let us consider the following power fractional equation: ^p^C D_0,t,ω^α,β,py(t) = t^2, t∈ [0,10]subject toy(0)=0, where ω(t) ≡ 1. By applying the power fractional integralto both sides of (<ref>) and using formula (ii)of Theorem <ref>, we obtain the exact solutionof (<ref>)–(<ref>), which is given byy(t)=ϕ(α)t^2 + 2ln p·ψ(α)/Γ(β+3)t^β+2.We now apply the developed numerical scheme (<ref>)to approximate the solution of (<ref>)–(<ref>).For numerical simulations, we choose the normalization function N(α)=1-α+αΓ(α).The comparison between the exact and approximate solutions of(<ref>)–(<ref>) is depicted in Figures <ref>and <ref>.The maximum error of the numerical approximationsis given in Table <ref>, for α=0.1,β=0.2, p=1.1 and different valuesof the discretization step h. From Figures <ref> and <ref>, we observe thatthe proposed numerical method gives a good agreement betweenthe exact and approximate solutions for different value ofα, β, p and the discretization step h.Table <ref> shows that the convergenceof the numerical approximation depends on the step of discretization h.By comparing the exact and approximate solutions, we concludethat the new proposed numerical scheme is very efficientand converges quickly to the exact solution.Consider the following nonlinear power fractional differential equation:^p^C D_0,t,ω^α,1,ey(t) =t^215( cos(2t)1+|y(t)|), t∈ [0,4]subject toy(0)=√(π).This example is a particular case of problem (<ref>)–(<ref>)with β=1, p=e, y_0=√(π), a=0, b=4 and f(t,y(t)) =t^215(cos(2t)1+|y(t)|) with f(0,y(0))=0. Here, we choose the normalization function N(α)=1. 
For ally, z ∈ℝ and t∈[0, 4], one has |f(t,y)-f(t,z)|=t^215|cos(3t^2)|(|11+|y| -11+|z||)≤115(|z|-|y|)≤115|y-z|.Thus, function f is continuous and satisfies the Lipschitz condition(<ref>) with L=115. Moreover, for any α∈ [0,1),we have ϕ(α)=1-α, ψ(α)=α and L (ϕ(α)+ln p·ψ(α) (b-a)^βΓ(β+1)) =115(1+3α)<1. Hence, condition (<ref>) holds. Then, by applying Theorem <ref>,it follows that problem (<ref>)–(<ref>)has a unique solution on [0, 4]. We now use our proposed method to solve the system (<ref>)–(<ref>).For numerical simulations, we take the weight function ω(t)=t+2. The approximate solution of (<ref>)–(<ref>) is displayedin Figures <ref> and <ref> for different values ofα, β=1 and p=e, using two discretization steps:h=0.1 and h=0.01.§ CONCLUSIONIn this paper, (i) we established a new formulafor the power fractional derivative with a non-local and non-singularkernel in the form of an infinite series of the standard weighted Riemann–Liouvillefractional integral. This brings out more clearly the non-locality propertiesof the fractional derivative and makes it easier to handle certain computational aspects.By means of the proposed formula, we derived useful properties of the power fractional operators,for example the Newton–Leibniz formula has been rigorously extended.(ii) We presented a new version of Gronwall's inequality via the power fractionalintegral, which includes many versions of Gronwall's inequalityfound in the literature, such us the generalized Hattaf and Atangana–Baleanufractional Gronwall's inequalities. (iii) We proved the existence and uniquenessof solutions to nonlinear power fractional differential equationsusing the fixed point principle; and, based on Lagrangepolynomial interpolation, (iv) we provided a new explicitnumerical method to approximate the solutions of power FDEswith the approximation error being also examined. However,we only presented a bound for the error and the proof of the convergence of the numerical scheme is still an open problem. Numerical examples and simulation results were discussed and showthat our developed method is very efficient, highly accurate,and converges quickly.As future work, we aim to apply our obtained analyticaland numerical results to develop power fractional models describingreal world phenomena such us the world population growthand the dynamics of an epidemic disease. This issue is currentlyunder investigation and will appear elsewhere.§ ACKNOWLEDGEMENTS Zitane and Torres are supported by The Centerfor Research and Development in Mathematics and Applications (CIDMA) through the Portuguese Foundation for Science and Technology(FCT – Fundação para a Ciência e a Tecnologia), project UIDB/04106/2020. Zitane is also grateful to the post-docfellowship at CIDMA-DMat-UA, reference UIDP/04106/2020.99 ALRefai1M. Al-Refai,On weighted Atangana-Baleanu fractional operators,Adv. Difference Equ. 2020, Paper No. 3, 11 pp.ALRefaiM. Al-Refai and A. M. Jarrah,Fundamental results on weighted Caputo-Fabrizio fractional derivative,Chaos Solitons Fractals 126 (2019), 7–11. GronwallJ. Alzabut, T. Abdeljawad, F. Jarad and W. Sudsutad, A Gronwall inequality via the generalized proportionalfractional derivative with applications,J. Inequal. Appl. 2019, Paper No. 101, 12 pp. MR3727142 G. A. Anastassiou and I. K. Argyros,Functional numerical methods: applications to abstract fractional calculus,Studies in Systems, Decision and Control, 130, Springer, Cham, 2018. AtanBalA. AtanganaandD. 
Baleanu, New fractional derivatives with non-local and non-singular kernel:Theory and application to heat transfer model, Therm. Sci. 20 (2016), no. 2, 763–769. Baleanu1D. Baleanu, K. Diethelm, E. Scalasand J. J. Trujillo, Fractional calculus, Series on Complexity, Nonlinearity and Chaos, 3,World Scientific Publishing Co. Pte. Ltd., Hackensack, NJ, 2012. Leibniz D. Baleanu and A. Fernandez,On some new properties of fractional derivativeswith Mittag-Leffler kernel,Commun. Nonlinear Sci. Numer. Simul. 59 (2018), 444–462. CapFabM. Caputo and M. Fabrizio, A new definition of fractional derivative without singular kernel,Progr. Fract. Differ. Appl. 1 (2015), no. 2, 73–85. ChenekeK. R. Cheneke, K. Purnachandra Rao and G. Kenassa Edessa,Application of a new generalized fractional derivativeand rank of control measures on cholera transmission dynamics,Int. J. Math. Math. Sci. 2021 (2021), Art. ID 2104051, 9 pp. HattafK. Hattaf, A new generalized definition of fractional derivative with non-singular kernel,Computation 8 (2020), no. 2, Paper No. 49, 9 pp.Hattaf1K. Hattaf,On some properties of the new generalized fractional derivative with non-singular kernel,Math. Probl. Eng. 2021, Art. ID 1580396, 6 pp.Hattaf2K. Hattaf, On the stability and numerical scheme of fractional differential equations with application to Biology,Computation 10 (2022), no. 6, Paper No. 97, 12 pp.HatNumK. Hattaf, Z. Hajhouji, M. A. Ichou and N. Yousfi, A numerical method for fractional differential equationswith new generalized Hattaf fractional derivative,Mathematical Problems in Engineering 2022 (2022), Article ID 3358071, 9 pp. KHattaf K. Hattaf, A. A. Mohsen and H. F. Al-Husseinye, Gronwall inequality and existence of solutions fordifferential equations with generalized Hattaf fractional derivative, J. Math. Computer Sci. 27 (2022), 18–27.JaradF. Jarad, T. Abdeljawad and Z. Hammouch,On a class of ordinary differential equationsin the frame of Atangana-Baleanu fractional derivative,Chaos Solitons Fractals 117 (2018), 16–20. mittagA. A. Kilbas, H. M. Srivastava and J. J. Trujillo,Theory and applications of fractional differential equations,North-Holland Mathematics Studies, 204,Elsevier Science B.V., Amsterdam, 2006.PowerDerivative E. M. Lotfi, H. Zine, D. F. M. Torres and N. Yousfi, The power fractional calculus: First definitions and properties with applications to power fractional differential equations,Mathematics 10 (2022), no. 19, Art. 3594, 10 pp.Prabhakar:71 T. R. Prabhakar,A singular integral equation with a generalized Mittag Leffler function in the kernel,Yokohama Math. J. 19 (1971), 7–15.Salim:2009 T. O. Salim, Some properties relating to the generalized Mittag-Leffler function,Adv. Appl. Math. Anal. 4 (2009), 21–30.SeneN. Sene,SIR epidemic model with Mittag-Leffler fractional derivative,Chaos Solitons Fractals 137 (2020), 109833, 9 pp. ShiriB. Shiri, G.-C. Wu and D. Baleanu,Collocation methods for terminal value problemsof tempered fractional differential equations,Appl. Numer. Math. 156 (2020), 385–395.SrivastavaH. M. Srivastavaand K. M. Saad, Some new models of the time-fractional gas dynamics equation,Adv. Math. Models Appl. 3 (2018), 5–17.ToufAtanM. Toufikand A. Atangana, New numerical approximation of fractional derivativewith non-local and non-singular kernel: Application to chaotic models,European Physical Journal Plus 132 (2017), no. 10, Paper No. 444, 16 pp.WimanA. Wiman, Über den fundamental satz in der theorie der functionen E_α(x),Acta Mathematica 29 (1905), 191–201.
http://arxiv.org/abs/2312.00014v1
{ "authors": [ "Hanaa Zitane", "Delfim F. M. Torres" ], "categories": [ "math.NA", "cs.NA", "26A33, 26D15, 34A08, 34A12" ], "primary_category": "math.NA", "published": "20231027213531", "title": "A class of fractional differential equations via power non-local and non-singular kernels: existence, uniqueness and numerical approximations" }
From Generative AI to Generative Internet of Things: Fundamentals, Framework, and Outlooks

Jinbo Wen, Jiangtian Nie, Jiawen Kang, Dusit Niyato, IEEE Fellow, Hongyang Du, Yang Zhang, Mohsen Guizani, IEEE Fellow

J. Wen and Y. Zhang are with the College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, China (e-mail: [email protected]; [email protected]). J. Nie, D. Niyato, and H. Du are with the School of Computer Science and Engineering, Nanyang Technological University, Singapore (e-mail: [email protected]; [email protected]; [email protected]). J. Kang is with the School of Automation, Guangdong University of Technology, China (e-mail: [email protected]). M. Guizani is with the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), UAE (e-mail: [email protected]).

January 14, 2024

Generative Artificial Intelligence (GAI) possesses the capabilities of generating realistic data and facilitating advanced decision-making. By integrating GAI into modern Internet of Things (IoT), Generative Internet of Things (GIoT) is emerging and holds immense potential to revolutionize various aspects of society, enabling more efficient and intelligent IoT applications, such as smart surveillance and voice assistants. In this article, we present the concept of GIoT and conduct an exploration of its potential prospects. Specifically, we first overview four GAI techniques and investigate promising GIoT applications. Then, we elaborate on the main challenges in enabling GIoT and propose a general GAI-based secure incentive mechanism framework to address them, in which we adopt Generative Diffusion Models (GDMs) for incentive mechanism designs and apply blockchain technologies for secure GIoT management. Moreover, we conduct a case study on modern Internet of Vehicle traffic monitoring, which utilizes GDMs to generate effective contracts for incentivizing users to contribute sensing data with high quality. Finally, we suggest several open directions worth investigating for the future popularity of GIoT.

Modern IoT, generative AI, contract theory, generative diffusion model.

§ INTRODUCTION

The advent of Generative Artificial Intelligence (GAI) represents a significant milestone in the field of AI<cit.>. In contrast to traditional AI models that primarily classify or analyze existing data, GAI possesses the incredible ability to generate novel content such as digital films, audio, photos, or codes, thus exerting profound impacts across various domains<cit.>. For instance, in the healthcare domain, GAI can assist physicians in diagnosing conditions based on medical records and images, and generate tailored treatment plans for patients.
In the tourism and hospitality domain, GAI can generate hyper-personalized content for tourists, fostering changes in tourism strategies such as destination planning and hotel booking. Simultaneously, the potential of GAI for network optimization has been explored<cit.>, contributing to the optimization of network management and performance, thereby enhancing the efficiency of decision-making in complex networks<cit.>.Recent advances in cutting-edge technologies, such as sixth-generation (6G) wireless communications, Artificial Intelligence (AI), and edge computing, are bringing modern Internet of Things (IoT) technologies to maturity<cit.>. Modern IoT is considered an intelligent and autonomous ecosystem that revolutionizes device connectivity, data analytics, and intelligent decision-making. With its capabilities of ultra-low latency communications, seamless connectivity, and ubiquitous computing<cit.>, modern IoT has the potential to enable novel and advanced applications across various industries and domains, including intelligent healthcare monitoring and smart homes. For instance, in smart homes, IoT devices can automate and control various aspects of home living, such as smart lighting systems and temperature control, providing enhanced convenience and comfort for users.Given the remarkable capabilities of GAI in generating realistic data and facilitating advanced decision-making processes<cit.>, we envision that GAI-empowered modern IoT will become more creative and proactive as such the term Generative IoT (GIoT) emerges. By leveraging the advanced data generation and decision-making capabilities of GAI, GIoT has the capacity to drive the progression of IoT-enabled environments. For instance, by processing historical sensing data and real-time sensor readings, GAI can forecast future events, predict system failures, and generate effective resource management to improve overall system performance. Although GIoT holds significant potential to revolutionize various domains, there exist several critical challenges that need to be addressed when integrating GAI with modern IoT to enable GIoT, including the impact of IoT resource consumption on the performance of GAI model fine-tuning, the dynamic nature of GIoT networks complicating the identification of optimal decision strategies, and security concerns for GAI models in GIoT networks. To the best of our knowledge, this is the first work that presents the concept of GIoT and systemically provides foresight research on the integration of GAI with modern IoT for enabling GIoT. The contributions of this article can be summarized as follows: * We first provide a comprehensive overview of GAI techniques that have been widely adopted in the field of computer vision. Then, we systematically discuss the potential GIoT applications and the main challenges of synergy between GAI and modern IoT to enable GIoT.* We present a general incentive mechanism framework to address the main challenges in enabling GIoT. We utilize blockchain technologies to manage and secure GIoT and adopt Generative Diffusion Models (GDMs) to derive optimal incentive mechanism design.* We conduct a case study on modern Internet of Vehicle (IoV) traffic monitoring, in which we develop a GDM-based contract theory model for incentivizing users to contribute high-quality sensing traffic data. 
Numerical results demonstrate that our scheme outperforms the Deep Reinforcement Learning (DRL)-based scheme.§ GENERATIVE INTERNET OF THINGSIn this section, we introduce several widely adopted GAI techniques, especially in the field of computer vision, involving their basic architectures and applications for modern IoT. Then, we systematically explore the potential GIoT applications. Finally, we discuss the main challenges posed by enabling GIoT, as shown in Fig. <ref>.§.§ Generative AI TechnologiesAs a powerful branch of AI, GAI focuses on creating new content in various modalities, such as videos, images, text, and audios<cit.>. GAI can leverage pre-trained models to generate new content by fine-tuning the model parameters based on user-provided input, i.e., prompts. Moreover, GAI can utilize learning algorithms to automate content generation from existing data. Motivated by recent studies<cit.>, we introduce four widely adopted model-based GAI techniques<cit.> and explore their potential for IoT applications.* Variational Autoencoders (VAEs): VAEs consist of the encoder and decoder networks<cit.>. The encoder network compresses the input data to a latent representation. Then, the decoder network learns to reconstruct synthetic data that closely aligns with the original distribution<cit.>. Due to their ability to effectively represent data in a probabilistic latent space, VAEs can be applied to various IoT applications, such as energy optimization and equipment maintenance. For instance, by training on real-time sensor readings, VAEs can capture complex data distributions and generate more robust predictions on future equipment conditions than traditional AI methods<cit.>. * Generative Adversarial Networks (GANs): GANs have been applied widely in IoT data synthesis, consisting of generator and discriminator networks. The generator network aims to generate new data by learning real data distribution, while the discriminator network aims to distinguish synthetic data from real data<cit.>. The two networks are trained together in interactive and competitive manners, resulting in continuous enhancement of synthesis performance. With good performance in generating realistic samples<cit.>, GANs can be utilized not only for data augmentation but also for IoT anomaly detection<cit.>. Notably, unlike traditional AI methods that typically require retraining on labeled data to adapt to changes, GANs can learn the underlying data distribution in an unsupervised manner, enabling the adaptation to evolving anomalies without explicit labeling. * Flow-based Generative Models (FGMs): FGMs can transform input data distributions from simple to complex through a series of differentiable and invertible transformations that are implemented as neural networks<cit.>. Unlike VAEs and GANs, FGMs possess the distinctive capability to learn explicitly the data distribution and directly compute the probability density function during generation <cit.>. Therefore, FGMs can circumvent resource-intensive computation and directly model complex probability distributions, which can be effectively applied in IoT domains such as traffic flow optimization<cit.> and anomaly detection in network traffic. * Generative Diffusion Models (GDMs):With the state-of-the-art performance of image synthesis, GDMs are emerging generative models<cit.>, consisting of forward diffusion and denoising processes inspired by non-equilibrium thermodynamics theory<cit.>. 
Because of their recent advancements in training and sampling efficiency, GDMs have been used not only for image generation but also for IoT network optimizations<cit.>. Specifically, GDMs exhibit the ability to capture complex and high-dimensional structures, effectively addressing network optimization problems and decision-making processes<cit.>, while traditional AI methods often converge slowly and stuck in locally optimal solutions.§.§ Generative IoT ApplicationsUnlike traditional AI, GAI with its capability of generating realistic and context-aware data has the potential to revolutionize various industries, such as healthcare, manufacturing, transportation, and smart cities. Note that the detailed advantage comparison of GAI and traditional AI for significant IoT applications is listed in Table <ref>. With the incorporation of GAI into modern IoT systems, a new paradigm called GIoT is emerging. GIoT holds significant potential for transformative applications across various domains. By capitalizing on advanced IoT and GAI technologies, GIoT can enable intelligent systems, optimize resource utilization, enhance decision-making processes, and improve overall efficiency and sustainability in diverse sectors. 1) Vision-based applications: Vision-based GIoT applications leverage the power of GAI technologies, particularly GAI techniques for computer vision like GDMs and GANs, to enable IoT devices to perceive and interpret visual information. Vision-based applications can be applied in various types of GIoT contexts, from real-time monitoring to remote diagnostic and maintenance<cit.>, such as smart surveillance, smart agriculture, and health monitoring. For example, in smart surveillance systems, unlike the limited generalization capability of traditional AI, GAI can leverage data collected by IoT devices equipped with cameras and smart sensors to track objects and detect variable suspicious activities in various domains.Besides, in smart agriculture, GAI-empowered IoT devices can revolutionize farming practices. Specifically, since traditional AI lacks adaptability to changing environmental conditions, GAI models can be adopted to predict the growth of crops based on the crop data captured by cameras mounted on drones or ground-based sensors, and the predictions can be shown in the form of images or videos<cit.>, which facilitates data-driven decision-making for farmers. 2) Audio-based applications: GAI technologies can advance our interactions with audio applications in GIoT networks. One of the most notable examples of audio applications based on GAI is voice assistant. Voice assistants, such as Amazon Alexa[<https://www.aboutamazon.com/news/devices/amazon-alexa-generative-ai>] and Apple Siri[<https://appleinsider.com/articles/23/09/06/>] that utilize audio-based GAI techniques, can understand and respond to voice commands like adjusting room thermostats and turning on and off lamps. Another important application of GAI in the audio domain is creating personalized audio systems for avatars that are highly accurate digital replicas of users, such as Resemble AI[<https://www.resemble.ai/>]. By analyzing the preferences and characteristics of users, GAI models can generate tailored audio systems for their avatars, seamlessly immersing users, especially in metaverses.3) Text-based applications:Text-based GIoT applications involve leveraging GAI models like ChatGPT to process and analyze text data in the context of IoT devices and systems. 
Specifically, GAI-empowered chatbots can process textual data generated by IoT devices and provide real-time insights. These conversational agents employ natural language processing algorithms to interpret natural language input, generate responses to users, and perform specific tasks like controlling IoT devices. Additionally, text-based GIoT applications also involve automated code generation. Specifically, by analyzing high-level specifications of the desired functionality, GAI models such as Codex[<https://openai.com/blog/openai-codex>], a general-purpose programming model created by OpenAI, can automatically generate the corresponding code for specific IoT applications <cit.>.
4) Other applications: There are also other novel IoT applications in different modalities based on GAI technologies. One potential application is the automated generation of software programs, which can be downloaded to various IoT devices, enabling users to efficiently control and monitor multiple components within the GIoT ecosystem. Another potential application is the generation of secure communication protocols<cit.>, such as the 6G wireless communication protocol. Since wireless communications between IoT devices are vulnerable to being compromised by malicious attackers, GAI techniques can be utilized to develop robust communication protocols that encrypt the data transmitted between devices, making it more difficult for attackers to access critical data in transit.
§.§ Main Challenges in Integrating GAI with Modern IoT
Although GAI technologies hold great potential for transforming the modern IoT ecosystem, the convergence of GAI with modern IoT still faces the following challenges, which should be resolved for the future popularization and development of GIoT.
C1) IoT resource consumption for GAI models: In GIoT networks, GAI models require a modest quantity of extra data to perform model fine-tuning and direct inference at the edge, minimizing service latency and enhancing user experiences<cit.>. The data for model fine-tuning can be generated in the cloud or collected by IoT devices and subsequently uploaded by mobile users<cit.>. However, if the dataset contains deviations and inaccuracies, pre-trained GAI models cannot be accurately fine-tuned to specific tasks or domains, leading to inaccurate and biased inferences. Therefore, the dataset needs to be of high quality to avoid incorrect learning patterns during GAI model fine-tuning<cit.>. Since data collection and transmission incur high costs under the resource constraints of IoT devices, users may be reluctant to contribute high-quality data to the edge, affecting the performance of GAI model fine-tuning.
C2) Dynamic states of GIoT networks: Due to the scale and complexity of interconnected devices as well as the dynamic and real-time nature of the network, GIoT can be considered a heterogeneous and large-scale system<cit.>. Consequently, intricate decision-making processes arise, such as the optimal allocation of limited IoT network resources and the identification of optimal incentive mechanism strategies. Generally, optimal decision-making strategies are determined by employing traditional optimization principles and tools<cit.>. However, these approaches often rely on accurate and comprehensive network information, which is not feasible in complex GIoT network scenarios.
Additionally, while DRL has shown promise in various network optimization and decision-making tasks, the dynamics of IoT networks can significantly impact the state and action spaces of DRL models. This necessitates the complete retraining of DRL models<cit.>, which may inefficiently discover the optimal decision-making strategies in dynamic GIoT network scenarios.C3) Security issues for GAI models in GIoT networks: The heterogeneity of GIoT networks, exemplified by the ability of IoT devices to dynamically join or leave the networks as required<cit.>, poses a significant difficulty in secure management for GIoT networks. Ensuring the quality and diversity of collected data by IoT devices is one of the key challenges. Specifically, malicious users equipped with IoT devices would deliberately upload low-quality data to the edge to obtain more benefits. Additionally, malicious users can issue model inversion attacks to steal sensitive information behind GAI models by extracting the training data from trained models<cit.>. For example, based on the text generated by ChatGPT, malicious users can deduce private information from either the fine-tuning data or the data employed to pre-train the foundation model of ChatGPT, which may lead to serious security threats to other normal users. Motivated by the above analysis, it is necessary to develop a reliable and secure incentive mechanism framework, thereby enabling more intelligent and autonomous GIoT ecosystems. The proposed framework is discussed in Section <ref>.§ GENERATIVE AI-BASED INCENTIVE MECHANISM FRAMEWORK FOR GENERATIVE IOTIn this section, we introduce several representative techniques for designing incentive mechanisms for GIoT. To address the aforementioned challenges, we propose a general GAI-based secure incentive mechanism framework. §.§ Incentive Mechanism Design for Generative IoTIn IoT network optimizations, incentive mechanisms play a crucial role in incentivizing network users to actively contribute their resources, share data, or collaborate, thereby improving the performance and reliability of the network<cit.>. In the following part, we discuss several representative techniques for developing incentive mechanisms, i.e., Stackelberg game<cit.>, contract theory<cit.>, and auction theory<cit.>, which have been widely adopted in IoT network optimizations<cit.>. §.§.§ Stackelberg gameAs a non-cooperative game theory, the Stackelberg game focuses on analyzing strategic interactions between a leader and followers, especially in IoT networks, where the leader first determines resource prices, and then the followers determine their resource demands based on the selling prices, until reaching the utility equilibrium<cit.>. For example, the authors in <cit.> focused on reliable vehicle twin migrations in vehicular metaverses and proposed a Stackelberg model between vehicular metaverse users and the roadside unit coalition with the highest utility.§.§.§ Contract theoryContract theory is a powerful tool for incentive mechanism design under information asymmetry, which has been effectively applied in IoT network optimizations<cit.>. Specifically, an employer, typically a service provider, designs contracts for specific tasks, and employees, i.e., the network users, engage in a contractual agreement<cit.>. For instance, the authors in <cit.> studied Unmanned Aerial Vehicle (UAV)-enabled AI generative content and proposed a contract model. 
This model aimed to provide incentives for UAVs to contribute fresh data for GAI model fine-tuning under asymmetric information.§.§.§ Auction theoryAuction theory focuses on studying the behavior and design of auctions for allocating resources through competitive bidding<cit.>. As an interdisciplinary technology, auction theory has been widely adopted for incentivizing resource trading in IoT networks, which can be implemented in asymmetric or incomplete information scenarios<cit.>.For example, the authors in <cit.> proposed an auction-based optimization problem for the multichannel cooperative spectrum sharing in hybrid satellite-terrestrial IoT networks. §.§ Framework DesignAs shown in Fig. <ref>, we introduce the GAI-based secure incentive mechanism framework for GIoT, which consists of a physical layer, an incentive layer, and a blockchain layer. We provide more details of the framework as follows: * Step 1. Design suitable incentive mechanisms: To address the reluctance of users to provide high-quality data for GAI model fine-tuning, edge servers as service providers would design suitable incentive mechanisms by considering the current conditions of GIoT networks, including network structures, performance metrics, resource constraints and so on.* Step 2. Adopt GDMs to derive optimal incentive mechanism design: Due to the ability to model intricate environments, GDMs can be adopted to derive optimal incentive mechanism strategies that can maximize the utilities of edge servers<cit.>. The specific process of utilizing GDMs for designing efficient and robust incentive mechanisms is introduced in <cit.>.* Step 3. Issue the optimal incentive mechanism strategies: After finding the optimal incentive mechanism strategies, edge servers issue the strategies to the physical layer. Moreover, the resource trading involved in executing the strategies can be securely recorded and managed in the blockchain layer, ensuring transparency and security in resource trading.* Step 4. Obtain high-quality data for GAI model fine-tuning: Under the role of incentives, IoT devices collect fresh sensing data and provide them to the edge for GAI model fine-tuning. To further prevent malicious behavior of IoT devices and ensure the quality of collected data, the reputation metric can be utilized to quantify the reliability of IoT devices, and the reputations would be securely managed in the blockchain layer<cit.>.* Step 5. Enable GIoT applications: Based on the data collected by IoT devices or generated in the cloud, GAI model fine-tuning and inference can be performed on edge servers to efficiently enable GIoT applications, and the data stored on edge servers can be also securely managed in the blockchain layer. Considering that certain types of training data might be idle, GAI techniques have the ability to autonomously synthesize data, enhancing the performance of models<cit.>. § CASE STUDY: GDM-ENABLED MODERN INTERNET OF VEHICLE TRAFFIC MONITORINGIn this section, we present a case study on modern IoV traffic monitoring. Specifically, we propose a GDM-based contract theory model, which can incentivize users to contribute high-quality sensing data for GAI model fine-tuning, facilitating substantial advancements in intelligent transportation systems. §.§ System ModelFigure <ref> depicts a specific case of the proposed framework. 
Specifically, with the capabilities of strong generalization and automated content generation<cit.>, GAI can offer personalized and advanced services to users, such as navigation and route optimization<cit.>. To ensure the quality of services, GAI model fine-tuning at the edge requires high-quality datasets. However, due to the resource constraints of IoT devices, users may be reluctant to contribute fresh sensing data to the edge. Moreover, because of the dynamic and heterogeneous natures of vehicular networks<cit.>, edge servers may lack awareness of the private information of users, such as their ability to collect sensing data, which can lead to users contributing data dishonestly to gain additional benefits<cit.>.§.§ Problem FormulationWe consider that each edge server can support M users. Based on statistical distributions of user types from historical data, we classifyM users into N types and the user types are arranged in ascending order as θ_1 ≤⋯≤θ_N. In this definition, the higher type users can provide sensing data with the higher quality. For ease of understanding, the user with type n is called the type-n user. §.§.§ User utilityThe utility of type-n users is denoted as U_n^C, equaling the difference between its obtained benefit and its cost of participation. As shown in Fig. <ref>, the obtained benefit of type-n users is defined as (θ_nR_n)<cit.>, where R_n is the received reward. The cost of type-n users is defined as θ_n(L_max/L_n-1)<cit.>, where L_n is the latency spent by type-n users in collecting and transmitting sensing data with a guaranteed amount. Note that L_max represents the highest permissible value of the latency.§.§.§ Edge server utilityThe utility obtained by the edge server from type-n users is denoted as U_E^n, equaling the difference between the corresponding revenue for received datasets within L_n and the reward R_n. According to <cit.>, the revenue can be defined as a general quality-latency metric, i.e., a_1(θ_n)^b_1-a_2(L_n/L_max)^b_2. Here, a_1>0 and a_2>0 are pre-defined coefficients about the quality of received data and the latency spent for collecting and transmitting data<cit.>, respectively. Similarly, b_1 ≥ 1 and b_2 ≥ 1 are given factors measuring the effects of data quality and the latency<cit.>, respectively. Considering that the probability that a user is of type-n is Q_n, where the sum of probabilities of all types is 1, the expected utility of the edge server U_E is shown in Fig. <ref>.§.§.§ Contract formulationAs an economic tool, contract theory is effective in addressing information asymmetry for incentive mechanism designs<cit.>. Therefore, the edge server can devise a contract comprising a group of contract items (L_n^-1, R_n), where L_n^-1 is the reciprocal of L_n<cit.>. To ensure that each user optimally chooses the contract item designed for its type, the contract must satisfy both Individual Rationality (IR) and Incentive Compatibility (IC) constraints<cit.>, where IR constraints indicate that the contract item that a user chooses should ensure a non-negative utility<cit.>, and IC constraints indicate that a user of any type prefers to choose the contract item designed for its type rather than any other contract item<cit.>. Finally, the optimization problem is to find the optimal contract c^*, i.e., {L_1^-1^*,…,L_N^-1^*} and {R_1^*,…,R_N^*}, thereby maximizing the expected utility of the edge server U_E while satisfying IR and IC constraints. 
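To make the contract formulation above concrete, the following short Python sketch (illustrative only, not part of the original article) evaluates the user utility θ_n R_n − θ_n(L_max/L_n − 1) and the expected edge-server utility, and checks the IR and IC constraints for a toy two-type contract. All function names and the contract items are assumptions made for illustration; the weighting factors a_1=15, a_2=10, b_1=b_2=1 follow the case-study settings given below.

```python
import itertools

def user_utility(theta, R, L, L_max):
    # Utility of a user of type theta choosing contract item (L, R):
    # benefit theta * R minus participation cost theta * (L_max / L - 1).
    return theta * R - theta * (L_max / L - 1.0)

def server_expected_utility(theta, Q, R, L, L_max, a1=15, a2=10, b1=1, b2=1):
    # Expected edge-server utility:
    # sum_n Q_n * (a1 * theta_n^b1 - a2 * (L_n / L_max)^b2 - R_n).
    return sum(q * (a1 * th ** b1 - a2 * (l / L_max) ** b2 - r)
               for q, th, l, r in zip(Q, theta, L, R))

def feasible(theta, R, L, L_max):
    # Individual Rationality: each type gets non-negative utility from its own item.
    # Incentive Compatibility: no type prefers another type's item.
    N = len(theta)
    ir = all(user_utility(theta[n], R[n], L[n], L_max) >= 0 for n in range(N))
    ic = all(user_utility(theta[n], R[n], L[n], L_max) >=
             user_utility(theta[n], R[m], L[m], L_max)
             for n, m in itertools.product(range(N), repeat=2))
    return ir and ic

# Toy instance with two user types; all numbers are arbitrary illustrations.
theta = [50.0, 150.0]    # type-1 and type-2 users
Q     = [0.5, 0.5]       # probabilities of the two types
L_max = 150.0
R     = [20.0, 80.0]     # rewards offered by the two contract items
L     = [120.0, 60.0]    # latency targets of the two contract items

# With these arbitrary items the IC constraint fails (type-1 users would pick
# the type-2 item), so feasible(...) prints False.
print(feasible(theta, R, L, L_max))
print(server_expected_utility(theta, Q, R, L, L_max))
```

The failed IC check in this toy instance is exactly the situation the search for the optimal contract c^* (and, in the next subsection, the GDM-based contract generation) is meant to avoid.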
§.§ GDM-empowered Contract GenerationIn this part, we adopt GDMs to derive optimal contract design<cit.>. Specifically,* Step 1. Model the environment state: For simplicity, we consider that each edge server supports two types of users in the environment of vehicle traffic monitoring, where θ_1 and θ_2 are randomly sampled within (10,100) and (100,200), respectively<cit.>. Therefore, the environment state vector is defined as S ≜ [m, N, L_max, Q_1, Q_2, θ_1, θ_2]. Note that Q_1 and Q_2 are generated randomly<cit.>, following the Dirichlet distribution, and L_max is set as 150.* Step 2. Formulate the participant utilities: After determining the environment states, we formulate the utility of type-n users U_n^C and the expected utility of the edge server U_E, where the former is used to guarantee IR and IC constraints, and the latter is the optimization objective that we intend to maximize. Note that the weighting factors a_1, a_2, b_1, and b_2 are set as 15, 10, 1, and 1<cit.>, respectively.* Step 3. Customize the GDM settings: The action space is the universe of the contract design<cit.>. Each contract is formed as {L_1^-1, R_1, L_2^-1, R_2}. Then, we can customize the model hyperparameters. For instance, in our case, the training epoch of the GDM is set as 120, and the discount factor is set as 0.95.* Step 4. Train the GDM and generate the optimal contract: We train the policy π(c^*|s) for generating the optimal contract c^* under the state s∈{S}<cit.>. To evaluate each generated contract, we adopt the action-value function Q(c^*|s)<cit.>, which can guide the diffusion process. Finally, we can obtain the optimal contract c^*.§.§ Numerical ResultsFigure <ref> shows test reward curves of our proposed GDM-based contract generation scheme and conventional DRL-PPO for the optimal contract finding task. We can observe that our proposed scheme always outperforms DRL-PPO under the same parameter settings. The reason is that the contract generation policy in our scheme is fine-tuned by the diffusion process, which can mitigate the impact of randomness and noise<cit.>. Figure <ref> illustrates the quality of contracts generated by the proposed scheme and DRL-PPO. For a given environment state, we can observe that the proposed GDM-based contract generation scheme can provide a contract design that achieves the edge server utility value of 2204.6284, which is greater than the 1450.7832 achieved by the PPO. Overall, the above numerical results demonstrate that the performance of the proposed GDM-based contract generation scheme is better than that of DRL-PPO.§ FUTURE DIRECTION §.§ Distributed and Green Generative AI ModelsOne of the main challenges of GAI for future development is the computational and storage limitations in training and deploying GAI models. For instance, GPT-3, as the OpenAI's state-of-the-art language model, consists of 175 billion parameters<cit.>, which is one of the largest language models in existence. Hence, how to reduce the energy consumption of GAI models during their training and deployment is worth studying. One of the potential solutions is to design lightweight GAI models or adopt federated learning to train models.§.§ Quality Metrics for Reliable Generative AI OutputsAlthough GAI techniques have the incredible ability to automate content generation, they can be exploited to generate incorrect or fraudulent content, such as fake videos or wrong texts<cit.>. 
To address this issue, future research can explore the Quality of Service (QoS) metric from the user perspective to measure user satisfaction with the generated content. With the help of QoS, the performance of GAI models can be improved, and high-quality content can be generated to meet user satisfaction. §.§ Service Optimization by Prompt EngineeringFormulating technical prompts to effectively instruct GAI models presents a challenge for individuals lacking adequate training in the relevant domain. Furthermore, the utilization of subpar prompts may diminish the overall generation quality of GAI models. Therefore, the exploration of prompt engineering for achieving the optimization of AI-generated content services is also a topic worthy of investigation. For instance, users can manually formulate diverse prompts and subsequently search for the one that yields the highest quality of generated outputs. §.§ Security and Privacy Protection for UsersCentralized training or fine-tuning of GAI models at the edge may raise user concerns about data privacy and security, as IoT data involving sensitive and personal information could potentially be exposed to attackers, leading to threats to user privacy and security. Therefore, future research can develop a user-centric privacy-preserving training approach to protect user security and privacy. § CONCLUSIONIn this article, we presented the concept of Generative IoT (GIoT). Firstly, we reviewed several GAI techniques and explored their potential for IoT applications. Then, we summarized GIoT applications, including vision-based, audio-based, and text-based applications, and discussed the main challenges of integrating GAI with modern IoT to enable GIoT. To address these challenges, a general GAI-based secure incentive mechanism framework was proposed, in which we adopted GDMs for the optimal incentive mechanism design and utilized blockchain technologies for secure GIoT management. Furthermore, we conducted a case study on modern IoV traffic monitoring, leveraging GDMs to generate flexible contracts for motivating users to provide high-quality data for GAI model fine-tuning. The numerical results demonstrated the effectiveness of our proposed GDM-based contract generation scheme compared to DRL-PPO. Finally, we discussed potential research directions that can further facilitate the development of the GIoT ecosystem. IEEEtran
[Source: Jinbo Wen, Jiangtian Nie, Jiawen Kang, Dusit Niyato, Hongyang Du, Yang Zhang, Mohsen Guizani. "From Generative AI to Generative Internet of Things: Fundamentals, Framework, and Outlooks." arXiv:2310.18382v1 (cs.LG, cs.GT, cs.NI), 27 October 2023. http://arxiv.org/abs/2310.18382v1]
Sketching and Streaming for Dictionary Compression Ruben Becker^∗, Matteo Canton^†, Davide Cenzato^∗, Sung-Hwan Kim^∗, Bojana Kodric^∗, and Nicola Prezza^∗ ^∗Ca' Foscari University of Venice^†University of Udine Via Torino 155Via delle Scienze 206 30172 Venezia, Italy33100 Udine, Italy<[email protected]><[email protected] > Received: January 14, 2024; accepted: October 20, 2023 =============================================================================================================================================================================================================================================================================================================================================================empty We initiate the study of sub-linear sketching and streaming techniques for estimating the output size of common dictionary compressors such as Lempel-Ziv '77, the run-length Burrows-Wheeler transform, and grammar compression. To this end, we focus on a measure that has recently gained much attention in the information-theoretic community and which approximates up to a polylogarithmic multiplicative factor the output sizes of those compressors: the normalized substring complexity function δ.As a matter of fact, δ itself is a very accurate measure of compressibility: it is monotone under concatenation, invariant under reversals and alphabet permutations, sub-additive, and asymptotically tight (in terms of worst-case entropy) for representing strings, up to polylogarithmic factors.We present a data sketch of O(ϵ^-3log n + ϵ^-1log^2 n) words that allows computing a multiplicative (1±ϵ)-approximation of δ with high probability, where n is the string length.The sketches of two strings S_1,S_2 can be merged in O(ϵ^-1log^2 n) time to yield the sketch of {S_1,S_2}, speeding up the computation of Normalized Compression Distances (NCD). If random access is available on the input, our sketch can be updated in O(ϵ^-1log^2 n) time for each character right-extension of the string.This yields a polylogarithmic-space algorithm for approximating δ,improving exponentially over the working space of the state-of-the-art algorithms running in nearly-linear time. Motivated by the fact that random access is not always available on the input data, we thenpresent a streaming algorithm computing our sketch in O(√(n)·log n) working space and O(ϵ^-1log^2 n) worst-case delay per character. We show that an implementation of our streaming algorithm can estimate δ on a dataset of 189GB with a throughput of 203MB per minute while using only 5MB of RAM, and that our sketch speeds up the computation of all-pairs NCD distances by one order of magnitude, with applications to phylogenetic tree reconstruction. § INTRODUCTIONSketching techniques allow to summarize in sub-linear space information on big datasets, enabling the approximation of useful statistics such as high-order moments <cit.>, norms <cit.>, and frequencies <cit.> (to name a few). Additionally, most data sketches can be computed on data streams in sub-linear space, making them attractive in big data scenarios. In this paper, we consider data sketches summarizing the information content of a string as approximated by data compression techniques.Previous research on this problem has focused on empirical entropy.Chakrabarti et al. 
<cit.> showed that the zero-order empirical entropy H_0 of a data stream can be efficiently approximated up to a multiplicative (1 + ϵ)-factor in poly-logarithmic space, but any multiplicative approximation of the k-th order entropy H_k requires nearly-linear space for k ≥ 1.In addition to this fact, it is well-known that H_k is a weak measure when the dataset is highly repetitive <cit.>. As extensively shown in the literature (see, for example, the survey by Navarro <cit.>), dictionary compression measures such as the number z of phrases of the Lempel-Ziv'77 factorization (used by , , , ), the number r of equal-letter runs in the Burrows-Wheeler transform (used by ), and the size g of a smallest context-free grammar generating (only) the text, are exempt from such a limitation. The information-theoretic quality of these measures is strengthened by the fact that Normalized Compression Distances based on dictionary compressors yield very precise notions of string similarity <cit.>. Sketching and streaming techniques for such measures would thus speed up tasks such as the computation of all-pairs similarities when the underlying metric is based on data compression (useful, for example, in the computation of phylogenetic trees <cit.>). Motivated by the above considerations, in this paper we present the first sub-linear-space sketching and streaming techniques for estimating the output sizes of dictionary compressors. This result is obtained by describing a data sketch yielding a(1±ϵ)-approximation of the normalized substring complexity δ = max_k≥ 1{d_k/k}, where d_k is the number of distinct length-k substrings of the string, a measure introduced by Raskhodnikova et al. in <cit.>. As shown by Kociumaka et al. <cit.> and Kempa and Kociumaka <cit.>, any of the above dictionary compression measures is lower-bounded by δ and upper-bounded by δ(log n)^c,wheren is the string's length and c is an opportune constant depending on the compressor. Even better, Bonnie et al. in <cit.> experimentally showed that δ, z, and r (normalized to the interval [0,1]) are almost indistinguishable on collections of genomic data. As a matter of fact, δ is known to be an even moreaccurate information measure than z, r, and g: it is monotone under string concatenation, invariant under reversals and alphabet permutations, sub-additive, and asymptotically tight (in terms of worst-case entropy) for representing strings, up to polylogarithmic factors <cit.>. None of the measures z,r,g possesses simultaneously all of these properties. Overview of the paper.After providing all necessary definitions in Section <ref>, in Section <ref> we prove new properties of the normalized substring complexity δ and of the Normalized Compression Distance <cit.> _δ based on δ. In particular, we show that δ is perfectly sub-additive, that _δ(x,y) always lies in [0,1] (according to <cit.>, this is an indicator that δ is a compressibility measure of good quality), and that _δ̃(x,y) is an additive Θ(ϵ)-approximation of _δ(x,y) if δ̃ is a multiplicative (1±ϵ)-approximation of δ. This motivates designing data sketches for δ, a problem that we solve in Section <ref>.Our sketch is based on the observation (already noted in <cit.> for the particular case ϵ=1) that max_i≥ 0{d_⌈(1+ϵ)^i⌉/⌈(1+ϵ)^i⌉} is a (1-Θ(ϵ))-approximation of δ. We approximate d_k, for each sampled length k = ⌈ (1+ϵ)^i ⌉, by keeping a count-distinct sketch <cit.> for the subset of distinct (Rabin's fingerprints <cit.> of the) length-k substrings. 
Our sketch uses space polynomial in ϵ^-1log n and supports updates and queries (returning a (1±ϵ) approximation of δ), in O(ϵ^-1log^2 n) time. The sketches of two strings S_1 and S_2 can moreover be merged in O(ϵ^-1log^2 n) time to obtain the sketch of {S_1, S_2}, from which one can compute an additive ϵ-approximation of _δ(S_1, S_2). In Section <ref> we show how to compute our sketch in sub-linear space on an input stream of length n. The main difficulty in achieving sub-linear space is that, in order to compute the Rabin's fingerprints of the stream's length-k substrings, we need random access to the k-th most recent stream's character. Since the largest k for which we need to compute d_k is linear in n,storing the most recent k characters would require Θ(n) working space.Our solution relies on the observation that, if k̂ = _k≥ 1{d_k/k} is small, then we can afford keeping a sliding window of the last k̂ stream's characters. If, on the other hand, k̂ is large, then the stream is highly repetitive so we can compress it in small space while supporting bookmarked access to its characters. We conclude in Section <ref> with experimental results. Complete proofs can be found in the full version <cit.>. Related work. Bonnie et al. <cit.> have already observed that d_k can be efficiently estimated by employing count-distinct sketches, and that this can yield an heuristic algorithm for estimating δ. Their strategy relies on estimating d_k/k for increasing values of k, until a local maximum is found. While this strategy works well in practice because, as they showed, k̂ = _k≥ 1{d_k/k} tends to be a very small number, on particular strings (for example, Thue-Morse) k̂ is of the order of Θ(n) and, as a result, computing all the sketches for d_k requires linear space and quadratic processing time in the worst case. Moreover, local maxima of d_k/k do not always coincide with the global maximum, so this strategy does not yield any provable approximation of δ. We are not aware of other works in the literature describing data sketches for estimating the output sizes of dictionary compressors (the literature on estimating empirical entropy is, on the other end, much richer: see <cit.> and references therein).Our results can be viewed also as a space-efficient way to approximate measure δ. Christiansen et al. <cit.> showed how to compute δ for a given string T in linear time and space. Recently, Bernardini et al. <cit.>provided space-time trade-offs for computing/approximating δ in sub-linear working space on top of the input string. IfO(n n) time is allowed, their algorithms require Θ(n/ n) working space, which they proved to be optimal for computing δ exactly. Our algorithm, on the other hand, computes a multiplicative (1±ϵ)-approximation of δ using working spacepolynomial in ϵ^-1log n. § PRELIMINARIESWe denote [n]:={1, …, n} for any integer n ([n]=∅ for n≤ 0). For a ∈ℝ^+ and a real number ∈ [0, 1], we write [(1±)a] for the interval [(1-)a, (1+)a]. Similarly, we write [a±] for the interval [a-, a+]. We assume to be given a string S of length n > 1 over an alphabet Σ of cardinality σ > 1. For k≥ 1, we define D_k(S):={S[i..i + k - 1]: i∈ [n - k + 1]}, i.e., the set of all distinct substrings of length k of S. Notice that D_k(S)=∅ if k>n. 
The k-substring complexity d_k(S) of S is the cardinality of this set, i.e., d_k(S) := |D_k(S)|.The normalized substring complexity δ is defined as follows:δ(S):= max_k≥ 1{ |D_k(S)|/k } = max_k≥ 1{ d_k(S)/k }.We omit the argument from D_k, d_k, and δ in case it is clear from the context. Here, we also extend this measure to pairs of strings S and T. Rather than using δ(ST), we propose the following natural definition that does not take into account artificial length-k substrings crossing the border between S and T: δ(S, T) := max_k ≥ 1{ |D_k(S) ∪ D_k(T) | / k }. This version also gives mathematically cleaner results (e.g., perfect sub-additivity) and, in any case, differs from δ(ST) by at most 1. As a consequence, most of our results (read also below) hold also by replacingδ(S, T) with δ(ST). The Normalized Compression Distance has been defined by Cilibrasi and Vitányi <cit.> as a proxy for the non-computable Normalized Information Distance <cit.>. For two strings S and T and an arbitrary compressibility measure Z (for example, the output size of compression software such asand ), it is defined as _Z(S, T) := Z(S, T) - min{Z(S), Z(T)}/max{Z(S), Z(T)}. Given a uniform prime q = n^Θ(1), the Rabin's fingerprint <cit.> of S is defined as ρ(S) = ∑_i=1^n S[i]·σ^n-i q.Collisions between substrings of S through ρ happen with low probability, so the results of our paper hold with high probability.We extensively use the fact that the fingerprint of the concatenation of two strings S_1,S_2 can be computed in constant time from (i) the fingerprints of S_1 and S_2 and (ii) σ^|S_2| q (see <cit.>). Given a set B ⊆Σ^* of strings, we define ρ(B) = {ρ(s): s∈ B}. Given a set U, a count-distinct sketch CD(U) is a sub-linear-space data structure supporting three main operations: CD(U).add(x), which turns the sketch into CD(U∪{x}), CD(U_1).merge(CD(U_2)), which turns the sketch into CD(U_1 ∪ U_2), andCD(U).estimate(), which returns a (1±ϵ) approximation of |U|. In our work, we use the optimal count-distinct sketch of Kane et al. <cit.>. Letting U ⊆ [u], this sketch uses O(ϵ^-2 + log u) words of space and computes a (1±ϵ) approximation of |U| with high probability of success. All operations are supported in O(log u) time[The authors claim O(ϵ^-2 + log u) bits of space and 2/3 success probability, which can be amplified by taking the median of Θ(log u) sketches (thus yielding the bounds we claim above). In our paper, the universe is composed by Rabin's fingerprints and has therefore size u = n^Θ(1).]. Assume S[1]=$, where $ is lexicographically smaller than all other alphabet's characters and does not appear anywhere else in S. The Burrows-Wheeler transform (BWT) of the reverse S^R of S is obtained by sorting lexicographically all suffixes of S^Rand then taking, in this order, the character preceding each suffix. For example, if S =$babba, then the sorted suffixes and the BWT of S^R are shown in Table <ref>.The LF property of the BWT states that the i-th occurrence of c∈Σ in the BWT corresponds to the position of the i-th suffix starting with c∈Σ in Table <ref>.The LF function is the permutation of [1,n] implementing this observation: for instance, in the above example BWT.LF(2) = 5 because character BWT[2] corresponds to the first character (b) of the fifth (in lexicographic order) suffix bab$. We denote with r the number of equal-letter runs of the BWT; in the above example, r=5 (runs are highlighted in alternating bold/italic). 
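The worked example above can be checked with a few lines of Python (purely illustrative, not part of the original paper): the code builds BWT(S^R) by sorting cyclic rotations, which coincides with sorting suffixes here because $ is a unique smallest terminator, then counts equal-letter runs and evaluates the LF function. On S = $babba it outputs the BWT bb$aba with r = 5 runs and LF(2) = 5, matching the example.

```python
def bwt(s):
    # BWT via sorted cyclic rotations; since s ends with the unique smallest
    # symbol $, this is equivalent to sorting the suffixes of s.
    rot = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rot)

def num_runs(b):
    # number of maximal equal-letter runs
    return 1 + sum(b[i] != b[i - 1] for i in range(1, len(b)))

def lf(b, i):
    # LF mapping (1-indexed): the occ-th occurrence of c in the BWT maps to
    # the position of the occ-th suffix starting with c.
    c = b[i - 1]
    occ = b[:i].count(c)
    smaller = sum(ch < c for ch in b)
    return smaller + occ

S = "$babba"                       # the running example above
B = bwt(S[::-1])                   # BWT of the reversed string S^R = abbab$
print(B, num_runs(B), lf(B, 2))    # bb$aba 5 5
```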
We moreover use the following result: Letting S be a string and r be the number of equal-letter runs in BWT(S^R), there exists a data structure of O(r) words storing BWT(S^R) supporting right-extensions of S (i.e. BWT(S^R) → BWT((Sa)^R), for any a∈Σ) in O(log|S|) time. Within the same time, the structure supports computing the LF function and retrieving any character of BWT(S^R). § PROPERTIES OF Δ AND _ΔWe start by proving some properties of δ.Proofs of some statements are omitted due to space limitations and can be found in the full version <cit.>. The first main property that we show is that δ is both sub-additive and monotone in the following sense.lemmasubadditivity For any strings S and T, max{δ(S), δ(T)}≤δ(S, T) ≤δ(S) + δ(T).The proof of the lemma uses the properties of the corresponding maximizers together with the fact that the union is a superset of both its arguments (left inequality) and that the union is of smaller cardinality than the sum of the cardinalities of its arguments (right inequality). We remark that it is a well-known fact that the monotonicity property holds for the case of concatenation of the two strings <cit.>. Using the sub-additivity of δ, weobtain:For any strings S and T it holds that 0 ≤_δ(S, T) ≤ 1.To see why this holds, assume, w.l.o.g., that max{δ(S), δ(T)}=δ(S). Then,_δ(S, T) = δ(S,T) - δ(T)/δ(S)≥δ(T) - δ(T)/δ(S) = 0 and _δ(S, T) ≤δ(S) + δ(T) - δ(T)/δ(S) = 1. Ming et al. <cit.> state that common compressors yield a normalized compression distance between 0 and 1+ϵ, where the ϵ is due to “imperfections” of the compression algorithm. Above we proved that in the case of the normalized substring complexity δ, the corresponding ϵ is equal to 0. We conclude by showing that a multiplicative approximation of δ can be used to obtain an additive approximation of the Normalized Compression Distance _δ. lemmaNCDapprox Let ∈ (0,1), ':=/5 and let S and T be two strings. Assume that δ̃(S), δ̃(T), and δ̃(S, T) are approximations of δ in the sense that δ̃(S)∈ [(1±')δ(S)], δ̃(T)∈ [(1 ±')δ(T)], as well as δ̃(S, T)∈ [(1±')δ(S, T)]. Then _δ̃(S, T) ∈ [_δ(S, T) ±ϵ]. We prove this lemma by using the facts that δ̃ is a multiplicative approximation of δ, that δ is sub-additive (see Lemma <ref>), and that _δ∈ [0, 1] (see Corollary <ref>). § A DATA SKETCH FOR ESTIMATING Δ We introduce our data sketch, then prove that it yields a good approximation of δ. Let S be a string, A:={⌈α^i ⌉ : i∈ [⌊log_α n ⌋ ]} be a set of sampled lengths for some real number (sample rate) α > 1, and CD_k = CD(ρ(D_k(S))), where CD is the count-distinct sketch described in Section <ref> and ρ is Rabin's hash function. Our data sketch is defined as κ(S) = ⟨ CD_k : k ∈ A ⟩. We defineκ(S).estimate() = max{ CD_k.estimate()/k : k ∈ A }. When extending the stream S with a new character a, yielding string Sa, the sketch is updated by calling CD_k.add(ρ(S[|S|-k+2,|S|]a)) for all k∈ A.We denote this operation by κ(S).extend(a). Note that, if constant-time random access is available on S and if σ^k-1 q has been pre-computed for all k∈ A (in O(ϵ^-1log^2 n) time), ρ(S[|S|-k+2,|S|]a) can be computed in constant time from ρ(S[|S|-k+1,|S|]); see <cit.>. 
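As a purely illustrative aside before turning to how two sketches are merged, the sketch of Definition <ref> can be mocked up in a few lines of Python. This is not the paper's implementation: plain Python sets stand in for the count-distinct sketches of Kane et al. (so this toy version offers no space saving), the parameters ε, σ, and the prime q are arbitrary assumptions, and random access to S is assumed, as discussed above.

```python
import math

class DeltaSketch:
    """Toy version of the sketch kappa(S): for each sampled length k in
    A = {ceil(alpha^i)} keep a rolling Rabin fingerprint of the last k
    characters and a distinct counter of all fingerprints seen so far."""

    def __init__(self, n, eps=0.2, sigma=256, q=(1 << 61) - 1):
        alpha = 1 + eps
        self.A = sorted({math.ceil(alpha ** i)
                         for i in range(1, int(math.log(n, alpha)) + 1)})
        self.sigma, self.q = sigma, q
        self.pw = {k: pow(sigma, k - 1, q) for k in self.A}  # sigma^(k-1) mod q
        self.fp = {k: 0 for k in self.A}      # fingerprint of the current window
        self.cd = {k: set() for k in self.A}  # stand-in for CD_k
        self.S = []                           # the stream read so far

    def extend(self, a):
        # kappa(S).extend(a): a is the next character, given as an integer.
        self.S.append(a)
        m = len(self.S)
        for k in self.A:
            if m <= k:   # still filling the length-k window
                self.fp[k] = (self.fp[k] * self.sigma + a) % self.q
            else:        # roll the window: drop S[m-k], append a
                out = self.S[m - k - 1]
                self.fp[k] = ((self.fp[k] - out * self.pw[k]) * self.sigma + a) % self.q
            if m >= k:
                self.cd[k].add(self.fp[k])

    def estimate(self):
        # approximation of delta: max over sampled k of (#distinct fingerprints)/k
        return max(len(self.cd[k]) / k for k in self.A if self.cd[k])

# Example: feed a small string and read off the estimate.
sk = DeltaSketch(n=1000, eps=0.2)
for c in b"abracadabra" * 4:
    sk.extend(c)
print(sk.estimate())
```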
Finally, κ(S_1).merge(κ(S_2)) returns the sketch κ({S_1,S_2}) = ⟨ CD'_k : k ∈ A ⟩, where CD'_k = CD^1_k.merge(CD^2_k) and CD^i_k is the count-distinct sketch for the (fingerprints of the) length-k substrings of S_i, i∈{1,2}.Operation extend(a) is not defined when the sketch represents a set of strings; this is not an issue, since we will call merge only to estimate δ(S_1,S_2) and NCD_δ(S_1,S_2).With the next theorem we show that κ(S).estimate() returns a multiplicative (1±ϵ)-approximation of δ(S) (analogous for κ({S_1, S_2}).estimate()).lemmaapproxdelta Let S be a string of length n. Let > 0, ' = /4, and α = 1 + '. Assume that d̃_k∈ [(1±') d_k(S)] for all k∈ A:={⌈α^i ⌉ : i∈ [⌊log_α n ⌋ ]}, thenδ̃:= max{d̃_k/k : k∈ A}, satisfiesδ̃∈ [(1 ±)δ].We show this theorem by quantifying the impact of two types of errors on the quantity δ. These two types are (1) the error obtained when approximating the values d_k by d̃_k and (2) the error due to the restriction of the string's offsets [n] to the set A. The error of type (1) directly implies an error of the same magnitude (1±') on δ. We note that this error actually itself has two sources, namely (1.1) errors due to collisions when computing Rabin's fingerprints and (1.2) errors due to the count-distinct sketch when applied to the fingerprints. Both of these errors are accounted for in the assumption d̃_k∈ [(1±') d_k(S)]. The error of type (2) instead is more subtle to analyse – the main observation here is that d_j + 1≥ d_j - 1 for every j, as every distinct length-j substring other than possibly S[n - j + 1..n] gives at least one distinct length-(j + 1) substring. Now assume that i∈ [n]∖ A and that a∈ A is the minimum element of A larger than i. Then applying the previous observation iteratively yields d_a≥ d_i - β, where β = a - i. Hence, we can quantify how much δ gets “perturbed” by restricting to the subset A of the string's offsets [n].From Lemmas <ref> and <ref>, the sketch of Definition <ref> yields a multiplicative (1±ϵ)-approximation of δ and an additive ϵ-approximation of _δ if CD is the count-distinct sketch of <cit.> with error rate ϵ/20, and the set A is built with sample rate α = 1 + ϵ/20.From <cit.> and since |A| ∈Θ(ϵ^-1log n), our data sketch uses Θ(ϵ^-3log n + ϵ^-1log^2 n) words of space and supports all operations in time O(ϵ^-1log^2 n). Using repeatedly operation extend on our data sketch we immediately obtain: For any string S of length n supporting random access in time at most O(log n), and any approximation rate ϵ>0,we can compute amultiplicative (1±ϵ)-approximation of δ(S) in O(ϵ^-1nlog^2 n) time usingΘ(ϵ^-3log n + ϵ^-1log^2 n) words of working space on top of the string. The result is correct with high probability.§ STREAMING ALGORITHM We now show how to compute the sketch of Definition <ref> in O(√(n)log n) words of working space (on top of the sketch) with one pass over the streamed input string.Let S denote the current stream, and S^R be the reversed stream. We assume that an upper-bound n to the maximum stream length is known before the algorithm starts. Let r be the number of equal-letter runs in the Burrows-Wheeler transform of S^R. By <cit.> and by the fact that δ is invariant under string reversals, it holds r ≤ 8δlog^2n. Our streaming algorithm works as follows. We keep a sliding window S[|S|-K+1,|S|] of the last K stream characters, for some parameter K to be determined later, and at the same time we keep a dynamic run-length BWT (RLBWT) of S^R that we update by appending the stream's characters using Lemma <ref>. 
Before the algorithm starts, in O(ϵ^-1log^2 n) time we compute σ^k-1 q for all the |A| ∈ O(ϵ^-1log n) sampled substring lengths k∈ A in our sketch, using fast exponentiation.Let k∈ A be one of the sampled string lengths in our sketch, and let a be a new character arriving on the stream (so that the new stream is Sa). In order to update our sketch, we need to compute the fingerprint of the last k stream's characters: ρ(S[|S|-k+2,|S|]a).At any stage of the algorithm, we keep the Rabin's fingerprint ρ(S[2,|S|]) of the whole stream, excluding character S[1]=$.If |Sa|=k+1, then ρ(S[|S|-k+2,|S|]a) is equal to the Rabin's fingerprint of the whole stream.Otherwise, if |Sa|>k+1 then ρ(S[|S|-k+1,|S|]) has already been computed in the previous steps andwe can use the formula ρ(S[|S|-k+2,|S|]a) = (ρ(S[|S|-k+1,|S|]) - S[|S|-k+1]·σ^k-1)·σ + aq. As a result, updating the fingerprint reduces to extracting character S[|S|-k+1].We use the window S[|S|-K+1,|S|] to extract S[|S|-k+1] for any k≤ K, and the RLBWT toextract S[|S|-k+1] for any k > Kusing a bookmarking technique that we sketch in Figure <ref> and we describe in full detail in the full version <cit.>. This allows us to update the Rabin's fingerprints for all sampled substring lengths k and thus to implement operation extend(a).We now describe the policy we employ to keep space usage under control. Let r' be the number of equal-letter runs in the BWT obtained by ignoring (removing) character $. It is easy to see that (i) r-2 ≤ r' ≤ r and (ii) r' is non-decreasing upon appending characters at the end of the stream. As soon as r' ≥ 8n(log^2n)/K, we discard the RLBWT and keep only the sliding window for the rest of the stream. As a consequence, from this point on we are only able to extract (fingerprints of) length-k substrings with k≤ K. However, we show that this is enough: if we discard the RLBWT, then it means that δ≥r/8log^2n≥r'/8log^2n≥ n/K. Let k̂ = _k≥ 1{d_k/k}. Then, k̂ = d_k̂/δ≤ n/δ≤ K so to compute δ on the rest of the stream we can focus only on the length-k substrings with k≤ K.The sliding window S[|S|-K+1,|S|] uses K words of space. We discard the RLBWT when r' ≥ 8n(log^2n)/K, so (since r≤ r'+2) this structure always uses at most O(r) ⊆ O(n(log^2n)/K) words. As a consequence, in total we use O(K + n(log^2n)/K) words of space, which is optimized asymptotically when K=√(n)log n; then, our algorithm uses at most O(√(n)log n) words of space.We keep one bookmark (a position in the BWT) for every sampled length k∈ A, so our bookmarking technique does not affect the asymptotic working space if ϵ≥ n^-1/2 (i.e. |A| ≤√(n)log n). Updating each bookmark and extracting S[|S|-k+1] from the RLBWT take O(log n) time by Lemma <ref>. This running time is absorbed by operation merge() on the count-distinct sketches, see Section <ref>. We obtain: Given an upper-bound n to the stream's length, we can compute the sketch of Definition <ref> in O(√(n)log n) words of working space and O(ϵ^-1log^2 n) worst-case delay per stream character, for any approximation factorϵ≥ n^-1/2. § IMPLEMENTATION AND EXPERIMENTS We implemented a parallel version of our streaming algorithm in C++.[ <https://github.com/regindex/substring-complexity>]We ran experiments on a server with Intel(R) Xeon(R) W-2245 CPU @ 3.90GHz with 16 threads and 128GB of RAM running Ubuntu 18.04 LTS 64-bit. 
Our complete experimental results are reported in <cit.>.We used the repetitive real Pizza&Chilli dataset (P&C)[<https://pizzachili.dcc.uchile.cl/repcorpus/real/>], large Canterbury corpus[<http://corpus.canterbury.ac.nz/resources/large.tar.gz>], and datasets from AF Project[<https://afproject.org>]. We computed the relative error of our approximation δ̃ with respect to δ for different sampling densities (i.e. parameter α of Definition <ref>). With the sparsest (less precise) sampling scheme (option -p 1), δ̃ always differed from δ by up to 5% and the average throughput was of 174 MB per minute using up to 16 threads (option -t 0). For efficiency reasons, the RLBWT is disabled by default: in practice this does not affect precision, since k̂=_k d_k/k was always extremely small (k̂≤ 100 in all datasets), meaning that the RLBWT is never required. We also computed δ̃ on a big dataset of 189GB long reads of Rana Muscosa[<https://trace.ncbi.nlm.nih.gov/Traces/?view=run_browser acc=SRR11606868>]. Our software finished the computation in 15:31 hours with a throughput of 203MB per minute using only about 5MB of internal memory.Experiments on repetitiveness measures. We studied the effectiveness of δ̃ as a repetitive measure. We compared it to exact δ, to the number of runs of the BWT r, to the number of phrases of the LZ77 parse z, and to the output of two popular compressors, xz and 7z. For each dataset in the repetitive P&C corpus, we computed these five measures for prefixes of increasing length. We observe that δ̃ not only follows closely the values of δ, but it also mirrors the trend of the other four measures. Thissuggests experimentally that δ̃ computed by our streaming algorithm is a good indicator of repetitiveness and compressibility.Experiments on phylogeny reconstruction. We verified that NCD based on the compression software 𝚡𝚣, on δ, and on δ̃ yield similar phylogenetic trees with thedataset from AF Project; the average normalized Robinson-Foulds distances ranged from 0.1 to 0.3, indicating that the reconstructed trees were very similar. We also measured the running time to compute all-pair NCDs on 29 sequences of average length ∼81k. This process took only 3 minutes for δ̃ and 24 minutes for the exact δ, while for 𝚡𝚣 it required 42 minutes.ReferencesIEEEbib§ MISSING PROOFSWe start with the following two easy observations that we use in our proofs later on.If S≠ a^n for a∈Σ, then δ(S) ≥ d_1/1 ≥ 2/1 = 2.Note that if at least two distinct letters appear, it follows that d_1 > 1 and thus δ≥d_1/1 ≥ 2. Since it is easy to recognize the case S=a^n for some a∈Σ in constant space and constant delay per character, from now on we assume w.l.o.g. that δ≥ 2. We continue with the following simple observation that we will use in the proof of Lemma <ref>.It holds that k̂ = _k≥ 1{d_k(S)/k}≤ n/2.Assume that k̂ > n/2. Then it is immediate that d_k̂≤ n - k̂ + 1 < n/2 (as the right most character in the substring can be at index at most n). We now obtain that d_k̂ / k̂ < 1, contradicting Observation <ref>. * Let k_S, T, k_S, and k_T be such that δ(S, T) = |D_k_S, T(S) ∪ D_k_S, T(T) | / k_S, T, δ(S) = |D_k_S(S)|/k_S, and δ(T) = |D_k_T(T)|/k_T. Let, w.l.o.g., δ(S)=max{δ(S), δ(T)}. Then, δ(S, T) = |D_k_S, T(S) ∪ D_k_S, T(T) |/k_S, T≥|D_k_S(S) ∪ D_k_S(T) |/k_S≥|D_k_S(S)|/k_S = δ(S),where the second inequality uses the fact that k_S is the maximizer for S. 
For the second claim, δ(S, T) = |D_k_S, T(S) ∪ D_k_S, T(T) |/k_S, T≤|D_k_S, T(S)|/k_S, T + |D_k_S, T(T) |/k_S, T≤|D_k_S(S)|/k_S + |D_k_T(T) |/k_T = δ(S) + δ(T),where the second inequality uses the fact that k_S and k_T are the respective maximizers for S and T. *We start with the lower bound. Using the definition of _δ̃(S, T), we obtain_δ̃(S, T) ≥(1 - ')·δ(S, T) - (1 + ') min{δ(S), δ(T)}/(1 + ')max{δ(S), δ(T)} = 1/1 + '·_δ(S, T) - '/1 + '·δ(S, T) + min{δ(S), δ(T)}/max{δ(S), δ(T)} = _δ(S, T) - '/1 + '·(δ(S, T) + min{δ(S), δ(T)}/max{δ(S), δ(T)} + _δ(S, T))≥_δ(S, T) - 4'/1 + '≥_δ(S, T) -using the sub-additivity of δ from Lemma <ref>, the fact that _δ(S, T)≤ 1 according to Corollary <ref>, and the definition of '. Similarly, now for the upper bound, we obtain_δ̃(S, T) ≤(1 + ')·δ(S, T) - (1 - ') min{δ(S), δ(T)}/(1 - ')max{δ(S), δ(T)} = 1/1 - '·_δ(S, T) + '/1 - '·δ(S, T) + min{δ(S), δ(T)}/max{δ(S), δ(T)} = _δ(S, T) + '/1 - '·(δ(S, T) + min{δ(S), δ(T)}/max{δ(S), δ(T)} + _δ(S, T))≤_δ(S, T) + 4'/1 - '≤_δ(S, T) + again using the sub-additivity of δ from Lemma <ref>, the fact that _δ(S, T)≤ 1 according to Corollary <ref>, the definition of ' and the assumption that <1. * We first observe that ⌈α^⌊log_α n⌋⌉≤ n. To see this, assume otherwise, i.e., that α^⌊log_α n⌋ = n + x for some x>0. Then x =α^⌊log_α n⌋ - n ≤α^log_α n - n = 0, contradicting the assumption that x>0. It follows that A⊆ [n]. Now, for the upper bound notice that δ̃≤max{ (1+') d_k / k : k∈ A}≤ (1 + ') ·δ≤ (1 + ) ·δ.For the lower bound, let k̂∈ [n] be such that δ=d_k̂/k̂ and let i be such that ⌈α^i-1⌉≤k̂≤⌈α^i ⌉. Notice that obviously ⌈α^i-1⌉∈ A, but also ⌈α^i ⌉∈ A as α^i ≤αk̂≤α n/2≤ n by Observation <ref>. We now distinguish two cases: (1) ⌈α^i ⌉ = ⌈α^i - 1⌉ + 1 and (2) ⌈α^i ⌉≥⌈α^i-1⌉ + 2. In case (1), we get that k̂∈{⌈α^i-1⌉, ⌈α^i⌉}⊆ A and consequently δ̃≥max{ (1 - ') d_k / k : k∈ A} = (1 - ') ·δ≥ (1 - ) ·δ. In case (2), it holds that α^i - 1·' = α^i- α^i - 1≥⌈α^i ⌉ - 1 - α^i - 1≥⌈α^i - 1⌉ + 1 - α^i - 1≥ 1.Now let β := ⌈α^i ⌉ - k̂. We note that d_j + 1≥ d_j - 1 for every j, as every distinct length-j substring other than possibly S[n - j + 1..n] gives at least one distinct length-j + 1 substring. Applying the same observation iteratively yields d_⌈α^i ⌉≥ d_k̂ - β. Henceδ̃≥ (1 - ') ·d_⌈α^i ⌉/⌈α^i ⌉≥ (1 - ') ·d_k̂ - β/k̂ + β = (1 - ') δ·1 - β/d_k̂/1 + β/k̂≥ (1 - ') δ·1 - β/2k̂/1 + β/k̂,where we used that δ(S) = d_k̂/ k̂≥ 2 in the last step. We can now upper bound β by ⌈α^i ⌉ - ⌈α^i-1⌉≤α^i + 1 - α^i-1 = α^i-1·' + 1 ≤k̂' + 1. This yields δ̂≥ (1 - ')δ·1 - '/2- 1/2 k̂/1 + ' + 1/k̂≥ (1 - ')δ·1 - '/1 + 2 '≥ (1 - ) ·δ,where the second inequality uses that k̂≥ 1/' following from (<ref>) and the last inequality uses the definition of '.§ DETAILS ON BOOKMARKING THE RLBWT We show how to extract S[|S|-k+1] from the RLBWT, for any of the sampled lengths k. See also the example in Figure <ref>. We show how to initialize and update (upon character extensions of the stream) an index (bookmark) j such that BWT[j] = S[|S|-k+1]. This allows us retrieving S[|S|-k+1] in O(log |S|) ⊆ O(log n) time with a random access operation BWT[j] on the RLBWT data structure.We first discuss how to initialize the bookmark j as soon as the stream's length becomes S = k+1(before that, the window of the last k characters is not completely filled). The initialization works by setting j = BWT.LF(i), where i is the position such that BWT[i] = $. 
Since the LF mapping on the BWT of the reversed stream corresponds to advancing one position in the stream, it is easy to see that, after this operation, it holds BWT[j] = S[|S|-k+1]. See Figure <ref> for an example. Suppose we are storing the bookmark j such that BWT[j] = S[|S|-k+1]. We now show how to update j when a new character a arrives; letS' = Sa be the updated stream. Our goal is to modify j so that BWT[j] = S'[|S'|-k+1] holds. The observation is that, upon the extension of the stream by one character a, the algorithm of <cit.> modifies the BWT as follows: letting i being the index such that BWT[i]=$, the algorithm (1) replaces BWT[i] ← a, and (2) inserts $ in the position i' corresponding to the lexicographic rank of the new reversed stream (Sa)^R (position i' is computed in O(log|S|) time using basic operations on the RLBWT, see <cit.> and Example <ref>): the new BWT becomes BWT ← BWT[1,i'-1]·$· BWT[i'+1,|S|]. If j < i' (i.e. $ is inserted after position j), then after these modification we have that BWT[j] = S'[|S'|-k]; if, on the other hand, j ≥ i' (i.e. $ is inserted before position j), then we increment j as j← j+1, and BWT[j] = S'[|S'|-k] holds also in this case.Finally, we need to “advance” j by one position on the stream; this operation corresponds to one LF mapping step on the BWT: j ← BWT.LF(j) (O(log |S|) time). After these operations, we finally have that BWT[j] = S'[|S'|-k+1]. § DETAILED EXPERIMENTAL RESULTS§.§ Estimation of d_k.As mentioned above, there are two types of errors in the computation of the approximation δ̃ of δ:(1) the error obtained when approximating the values d_k by d̃_k and (2) the error due to the restriction of the string's offsets [n] to the “sampled set” A. The error of type (1) itself has two sources, namely (1.1) errors due to collisions when computing fingerprints with Rabin's hash function and (1.2) errors due to the count-distinct sketch when applied to the fingerprints. We experimentally evaluated the error of type (1.1) and (1.2) as follows.For the Pizza&Chili repetitive corpus, we compute the exact values of d_k and their estimated values d̃_k for k∈{2^i:0≤ i ≤ 7}.We observe that the error (1.1) caused by collisions in Rabin's fingerprint were negligible; the error in the ratio of the distinct number of fingerprints and the actual number of distinct substrings was less than 0.01%. The error due to the count-distinct sketch (1.2) was dependent on its parameter: the number of registers used for estimation. It is worth noting that the number of registers does not affect the time complexity when updating sketches, but only affects the space usage by a constant factor (and the time to compute the actual estimation at the end, which is negligible). When more than 2^14 registers were used for count-distinct sketches, the maximum relative error was observed to be below 2%, and the average error on d_k was below 0.5%; see Table <ref>. §.§ Experiments on phylogenetic tree reconstruction.To show similar behavior of _𝚡𝚣, _δ, and _δ̃, we conducted experiments on phylogenetic tree reconstruction usingdataset from AF project[<https://afproject.org/>]. It contains 11 groups of sequences (651 sequences in total) where each group yields a tree. We constructed 11 phylogenetic trees (i.e. one tree for each group) for each of the NCD measures, and compare the constructed trees by measuring the normalized Robinson-Foulds (nRF) distance, a widely-used distance measure for this purpose. 
The distance tends to 0 as the trees become similar, and tends to 1 when comparing with a random tree. The average nRF between _𝚡𝚣 and _δ was measured as 0.2, indicating that similar trees were reconstructed. The average nRF between _δ and _δ̃ ranges from 0.115 to 0.250, depending on the parameters. For ease of interpretation of these values, we depict two similar phylogenetic trees with nRF=0.194 in Figure <ref>, which is an actual example of trees reconstructed using NCD with δ̃ and 𝚡𝚣. Running Time. To construct a phylogenetic tree from a sequence set, we usually need to compute all-pair distances. When sequences are long, computing NCDs can be quite costly because we need to compress the concatenated sequences for all pairs of sequences in the input set. Our sketching approach, on the other hand, can be more efficient because we only need to compute one sketch per sequence; merging sketches is then very fast compared to processing the entire sequences all over again. To demonstrate this, we measured the running time for computing all-pair NCDs on a dataset from the AF project consisting of 29 sequences of average length 81,588. Computing all-pair NCDs with 𝚡𝚣 and with the exact δ took about 42 and 24 minutes, respectively. Our sketching method, in contrast, took only 3 minutes.
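To make the sketch-then-merge workflow above concrete, the following is a minimal illustrative sketch. It is not the paper's implementation: exact sets of hashed k-mers stand in for the mergeable count-distinct sketches, Python's built-in hash stands in for Rabin fingerprints, k is restricted to a small sampled grid, and all function names and toy sequences are our own assumptions.

```python
def kmer_sketch(s: str, ks):
    """One 'sketch' per sampled length k: here an exact set of hashed k-mers."""
    return {k: {hash(s[i:i + k]) for i in range(len(s) - k + 1)} for k in ks}

def delta_estimate(sketches, ks):
    """delta ~ max_k d_k / k, with d_k estimated from the (possibly merged) sketches."""
    return max(len(sketches[k]) / k for k in ks)

def ncd_delta(sk_s, sk_t, ks):
    """NCD based on delta: merging precomputed sketches replaces compressing s+t."""
    d_s, d_t = delta_estimate(sk_s, ks), delta_estimate(sk_t, ks)
    merged = {k: sk_s[k] | sk_t[k] for k in ks}   # set union plays the role of a sketch merge
    d_st = delta_estimate(merged, ks)
    return (d_st - min(d_s, d_t)) / max(d_s, d_t)

sequences = {"seq1": "ACGTACGTACGA", "seq2": "ACGTTGCAACGT", "seq3": "TTTTACGTACGT"}
ks = [1, 2, 4, 8]                                  # sampled lengths (the set A)
sketches = {name: kmer_sketch(s, ks) for name, s in sequences.items()}  # once per sequence
names = list(sequences)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(names[i], names[j], ncd_delta(sketches[names[i]], sketches[names[j]], ks))
```

The point of the sketch is that each sequence is processed only once; every pairwise distance then requires merging two precomputed summaries rather than re-reading the sequences.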
In this paper, we introduce Randomized Q-learning (RandQL), a novel randomized model-free algorithm for regret minimization in episodic Markov Decision Processes (MDPs). To the best of our knowledge, RandQL is the first tractable model-free posterior sampling-based algorithm. We analyze the performance of RandQL in both tabular and non-tabular metric space settings. In tabular MDPs, RandQL achieves a regret bound of order (√(H^5SAT)), where H is the planning horizon, S is the number of states, A is the number of actions, and T is the number of episodes. For a metric state-action space, RandQL enjoys a regret bound of order (H^5/2 T^(d_z+1)/(d_z+2)), where d_z denotes the zooming dimension. Notably, RandQL achieves optimistic exploration without using bonuses, relying instead on a novel idea of learning rate randomization. Our empirical study shows that RandQL outperforms existing approaches on baseline exploration environments.

§ INTRODUCTION In reinforcement learning (RL), an agent learns to interact with an unknown environment by acting, observing the next state, and receiving a reward. The agent's goal is to maximize the sum of the collected rewards. To achieve this, the agent can use either model-based or model-free algorithms. In model-based algorithms, the agent builds a model of the environment by inferring the reward function and the transition kernel that produces the next state. The agent then plans in this model to find the optimal policy. In contrast, model-free algorithms directly learn the optimal policy, which is the mapping from a state to an optimal action, or equivalently, the optimal Q-values, which map a state-action pair to the expected return of an optimal policy that starts by taking the given action in the given state. Although empirical evidence suggests that model-based algorithms are more sample-efficient than model-free algorithms <cit.>, model-free approaches offer several advantages. These include smaller time and space complexity, the absence of a need to learn an explicit model, and often simpler algorithms. As a result, most of the recent breakthroughs in deep RL, such as those reported by <cit.>, have been based on model-free algorithms, with a few notable exceptions, such as <cit.>. Many of these model-free algorithms <cit.> are rooted in the well-known Q-learning algorithm of <cit.>.
Q-learning is an off-policy learning technique where the agent follows a behavioral policy while simultaneously incrementally learning the optimal Q-values by combining asynchronous dynamic programming and stochastic approximation. Until recently, little was known about the sample complexity of Q-learning in the setting where the agent has no access to a simulator allowing to sample an arbitrary state-action pair. In this work, we consider such challenging setting where the environment is modelled by an episodic Markov Decision Process (MDP) of horizon H. After T episodes, the performance of an agent is measured through regret which is the difference between the cumulative reward the agent could have obtained by acting optimally and what the agent really obtained during the interaction with the MDP. This framework poses the famous exploration-exploitation dilemma where the agent must balance the need to try new state-action pairs to learn an optimal policy against exploiting the current observations to collect the rewards. One effective approach to resolving this dilemma is to adopt the principle of optimism in the face of uncertainty. In finite MDPs, this principle has been successfully implemented in the model-based algorithm using bonuses <cit.>. Specifically, the upper confidence bounds (UCBs) on the optimal Q-value are built by adding bonuses and then used for planning. Building on this approach, <cit.> proposed the algorithm, which applies a similar bonus-based technique to Q-learning, achieving efficient exploration. Recently, <cit.> introduced a simple modification to that achieves optimal sample complexity, making it competitive with model-based algorithms. Another class of methods for optimistic exploration is Bayesian-based approaches.An iconic example among this class is the posterior sampling for reinforcement learning (,) algorithm. This model-based algorithm maintains a surrogate Bayesian model of the MDP, for instance, a Dirichlet posterior on the transition probability distribution if the rewards are known. At each episode, a new MDP is sampled (i.e., a transition probability for each state-action pair) according to the posterior distribution of the Bayesian model. Then, the agent plans in this sampled MDP and uses the resulting policy to interact with the environment. Notably, an optimistic variant of , named optimistic posterior sampling for reinforcement learning (,) also enjoys an optimal sample complexity <cit.>. The random least square value iteration (, <cit.>) is another well-known model-based algorithm that leverages a Bayesian-based technique for exploration. Precisely, directly sets a Gaussian prior on the optimal Q-values and then updates the associated posterior trough value iteration in a model <cit.>. A close variant of proposed by <cit.>, using a more sophisticated prior/posterior couple, is also proven to be near-optimal.It is noteworthy that Bayesian-based exploration techniques have shown superior empirical performance compared to bonus-based exploration, at least in the tabular setting <cit.>. Furthermore, these techniques have also been successfully applied to the deep RL setting <cit.>. Finally, Bayesian methods allow for the incorporation of apriori information into exploration (e.g. 
by giving more weight to important states).However, most of the theoretical studies on Bayesian-based exploration have focused on model-based algorithms, raising the natural question of whether the approach can be extended to a provably efficient model-free algorithm that matches the good empirical performance of its model-based counterparts. Recently, <cit.> proposed a model-free posterior sampling algorithm for structured MDPs, however, it is not computationally tractable. Therefore, a provably tractable model-free posterior sampling algorithm has remained a challenge. In this paper, we aim to resolve this challenge. We propose the randomized Q-learning () algorithm that achieves exploration without bonuses, relying instead on a novel idea of learning rate randomization. is a tractable model-free algorithm that updates an ensemble of Q-values via Q-learning with Beta distributed step-sizes. If tuned appropriately, the noise introduced by the random learning rates is similar to the one obtained by sampling from the posterior of the algorithm. Thus, one can see the ensemble of Q-values as posterior samples from the same induced posterior on the optimal Q-values as in . Then, chooses among these samples in the same optimistic fashion as . We prove that for tabular MDPs, a staged version <cit.> of , named enjoys the same regret bound as the algorithm, that is, (√(H^5SAT)) where S is the number of states and A the number of actions. Furthermore, we extend beyond the tabular setting into the algorithm to deal with metric state-action spaces <cit.>. operates similarly to but over a fixed discretization of the state-action space and uses a specific prior tuning to handle the effect of discretization. We prove that enjoys a regret bound of order (H^5/2 T^(d_c+1)/(d_c+2)), where d_c denotes the covering dimension. This rate is of the same order as the one of by <cit.>, an adaptation of to metric state-action space and has a better dependence on the budget T than one of the model-based kernel algorithms such that by <cit.>. We also explain how to adapt and its analysis to work with an adaptive discretization as by <cit.>. Finally, we provide preliminary experiments to illustrate the good performance of against several baselines in finite and continuous environments.We highlight our main contributions: * The algorithm, a new tractable (provably efficient) model-free Q-learning adaptation of the algorithm that explores through randomization of the learning rates. * A regret bound of order (√(H^5SAT)) for a staged version of the algorithm in finite MDPs where S is the number of states and A the number of actions, H the horizon and T the budget.* A regret bound of order(H^5/2 T^(d_c+1)/(d_c+2)) for an adaptation of to metric spaces where d_c denotes the covering dimension.* Adaptive version of metric space extension of algorithm that achieves a regret bound of order (H^5/2 T^(d_z+1)/(d_z+2)), where d_z is a zooming dimension.* Experiments in finite and continuous MDPs that show that is competitive with model-based and model-free baselines while keeping a low time-complexity. § SETTING We consider an episodic MDP (, , H, {p_h}_h∈[H],{r_h}_h∈[H]),whereis the set of states,is the set of actions,H is the number of steps in one episode, p_h(s'|s,a) is the probability transition from state s to state s' upon taking action a at step h, and r_h(s,a)∈[0,1] is the bounded deterministic reward received after taking the action a in state s at step h. 
Note that we consider the general case of rewards and transition functions that are possibly non-stationary, i.e., that are allowed to depend on the decision step h in the episode. Policy & value functions A deterministic policy π is a collection of functions π_h : → for all h∈ [H], where every π_hmaps each state to a single action. The value functions of π, denoted by V_h^π, as well as the optimal value functions, denoted by _h are given by the Bellman and the optimal Bellman equations, Q_h^π(s,a)= r_h(s,a) + p_h V_h+1^π(s,a) V_h^π(s)= π_h Q_h^π (s)Q_h^⋆(s,a)=r_h(s,a) + p_h V_h+1^⋆(s,a) V_h^⋆(s)= max_a Q_h^⋆ (s, a), where by definition, V_H+1^⋆≜ V_H+1^π≜ 0. Furthermore, p_h f>(s, a) ≜_s' ∼ p_h(· | s, a)[f(s')] denotes the expectation operator with respect to the transition probabilities p_h and π_h g(s) ≜g(s,π_h(s)) denotes the composition with the policy π at step h. Learning problem The agent, to which the transitions are unknown (the rewards are assumed to be known[Our work can be extended without too much difficulty to the case of random rewards.] for simplicity), interacts with the environment during T episodes of length H, with a fixed initial state s_1.[As explained by <cit.> if the first state is sampled randomly as s_1∼ p, we can simply add an artificial first state s_1' such that forany action a, the transition probability is defined as the distribution p_1'(s_1',a) ≜ p.] Before each episode t the agent selects a policy π^t based only on the past observed transitions up to episode t-1. At each step h∈[H] in episode t, the agent observes a state s_h^t∈, takes an action π_h^t(s_h^t) = a_h^t∈ andmakes a transition to a new state s_h+1^t according to the probability distribution p_h(s_h^t,a_h^t) and receives a deterministic reward r_h(s_h^t,a_h^t). Regret The quality of an agent is measured through its regret, that is the difference between what it could obtain (in expectation) by acting optimally and what it really gets, ^T ≜∑_t=1^T _1(s_1)- V_1^π^t(s_1) . Additional notation For N∈_++, we define the set [N]≜{1,…,N}. We denote the uniform distribution over this set by [N]. We define the beta distribution with parameters α,β as Β(α,β). Appendix <ref> references all the notation used. § RANDOMIZED Q-LEARNING FOR TABULAR ENVIRONMENTS In this section we assume that the state spaceis finite of size S as well as the action spaceof size A. We first provide some intuitions for algorithm.§.§ Concept The main idea of is to perform the usual Q-learning updates but instead of adding bonuses to the targets as to drive exploration, injects noise into the updates of the Q-values through noisy learning rates. Precisely, for J∈, we maintain an ensemble of size J of Q-values[We index the quantities by n in this section where n is the number of times the state-action pair (s, a) is visited. In particular this is different from the global time t since, in our setting, all the state- action pair are not visited at each episode. See Section <ref> and Appendix <ref> precise notations.] (^n,j)_j∈[J] updated with random independent Beta-distributed step-sizes (w_n,j)_j∈[J] where w_n,j∼Β(H, n). Then, policy Q-values ^n areobtained by taking the maximum among the Q-values of the ensemble^n+1,j_h(s,a)=(1 - w_n,j) ^n,j_h(s,a) + w_n,j [r_h(s,a) + ^n_h+1(s^n_h+1)]^n+1_h(s,a)= max_j ∈ [J]^n+1,j_h(s,a), ^n+1_h(s) = max_a ∈^n+1_h(s,a),where s^n_h+1 stands for the next (in time) state after n-th visitation of(s,a) at step h. 
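As an illustration of the update rule above, the following is a minimal sketch of the ensemble update on a toy tabular MDP. It is a sketch only: the optimistic initialization is simplified, there is no prior re-injection or staging, the first visit uses n=1 in place of the prior count, and the toy environment, variable names, and episode loop are our own assumptions rather than the paper's implementation; only the Beta(H, n) step-sizes, the maximum over the J ensemble members, and the greedy policy follow the text.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, H, J = 5, 2, 4, 10
Q = np.full((J, H + 1, S, A), float(H))       # simplified optimistic initialization
Q[:, H] = 0.0                                  # terminal step has zero value
Qbar = Q.max(axis=0)                           # policy Q-values: max over the ensemble
counts = np.zeros((H, S, A), dtype=int)

def reward(h, s, a):                           # illustrative reward function
    return float(s == S - 1)

def step(h, s, a):                             # illustrative random transition
    return int(rng.integers(S))

for episode in range(200):
    s = 0
    for h in range(H):
        a = int(Qbar[h, s].argmax())           # greedy w.r.t. the policy Q-values
        s_next = step(h, s, a)
        n = counts[h, s, a]
        target = reward(h, s, a) + Qbar[h + 1, s_next].max()
        w = rng.beta(H, max(n, 1), size=J)     # one Beta(H, n) learning rate per member
        Q[:, h, s, a] = (1 - w) * Q[:, h, s, a] + w * target
        Qbar[h, s, a] = Q[:, h, s, a].max()
        counts[h, s, a] += 1
        s = s_next
```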
Note that the policy Q-values ^n are designed to be upper confidence bound on the optimal Q-values. The policy used to interact with the environment is greedy with respect to the policy Q-values π_h^n(s) ∈_a _h^n(s,a). We provide a formal description of in Appendix <ref>.Connection withWe observe that the learning rates of are in expectation of the same order [w_n,j ]= H/(n+H) as the ones used by the algorithm. Thus, we can view our randomized Q-learning as a noisy version of the algorithm that doesn't use bonuses. Connection withIf we unfold the recursive formula above we can express the Q-values ^n+1,j as a weighted sum ^n+1,j_h(s,a) = W^0_n,j^1,j_h(s,a) + ∑_k=1^n W^k_n,j [r_h(s,a) + ^k_h+1(s^k_h+1)], where we defineW^0_n,j = ∏_ℓ=0^n-1 (1 - w_ℓ,j) and W^k_n,j = w_k-1,j∏_ℓ=k^n-1 (1 - w_ℓ,j).To compare, we can unfold the corresponding formula for algorithm using the aggregation properties of the Dirichlet distribution (see e.g. Section 4 of <cit.> or Appendix <ref>)^n+1_h(s,a) = ^0_n^1_h(s,a) + ∑_k=1^n ^k_n [r_h(s,a) + ^n+1_h+1(s^k_h+1)],where weights (^0_n,…, ^n_n) follows Dirichlet distribution (n_0, 1, …, 1) and n_0 is a weight for the prior distribution. In particular, one can represent these weights as partial products of other weights w_n∼Β(1, n+n_0). If we use (<ref>) to construct a model-free algorithm, this would require recomputing the targets r_h(s,a) + ^n+1(s^k_h+1) in each iteration. To make algorithm more efficient and model-free, weapproximate ^n+1 by ^k, and, as a result, obtain algorithm with weight distribution w_n,j∼Β(1, n+n_0).Note thatin expectation this algorithm isequivalent to with the uniform step-sizes which are known to be sub-optimal due to a high bias (see discussion in Section 3 of <cit.>). There are two known ways to overcome this sub-optimality for Q-learning: to introduce more aggressive learning rates w_n,j∼Β(H, n+n_0) leading to algorithm, or to usestage-dependent framework by <cit.> resulting in algorithm. The aforementionedtransition from to is similar to the transition from <cit.> to Q-learning. To make model-free, one has to to keep old targets in Q-values. This, however,introduces a bias that could be eliminated either by more aggressive step-size <cit.> or by splitting on stages <cit.>. Our algorithms (and ) perform the similar tricks for and thus could be viewed as model-free versions of it. Additionally, shares some similarities with the algorithm <cit.> in the way of introducing optimism (taking maximum over J independent ensembles of Q-values).Let us also mention a close connection to the theory of Dirichlet processes in the proof of optimism for thecase of metric spaces (see Remark <ref> in Appendix <ref>).Prior As remarked above, in expectation, has a learning rate of the same order as . In particular, it implies that the first (1-1/H) fraction of the the target will be forgotten exponentially fast in the estimation of the Q-values, see <cit.>. Thus we need to re-inject prior targets, as explained in Appendix <ref>, in order to not forget too quickly the prior and thus replicate the same exploration mechanism as in the algorithm.§.§ AlgorithmIn this section, following <cit.>, we present the algorithm a scheduled version of that is simpler to analyse. The main idea is that instead of using a carefully tuned learning rate to keep only the last 1/H fraction of the targets we split the learning of the Q-values in stages ofexponentially increasing size with growth rate of order 1+1/H. 
At a given stage, the estimate of the Q-value relies only on the targets within this stage and resets at the beginning of the next stage. Notice that the two procedures are almost equivalent. A detail description of is provided in Algorithm <ref>. Counts and stages Let n^t_h(s,a) ≜∑_i=1^t-1{ (s^i_h, a^i_h) = (s,a) } be the number of visits of state-action pair (s,a) at step h before episode t.We say that a triple (s,a,h)belongs to the k-th stage at the beginning of episode t if n^t_h(s,a) ∈ [∑_i=0^k-1 e_i, ∑_i=0^k e_i ). Heree_k = ⌊ (1 + 1/H)^k · H ⌋ is the length of the stage k ≥ 0 and, by convention, e_-1 = 0. Let ^t_h(s,a)≜ n^t_h(s,a) - ∑_i=0^k-1 e_i be the number of visits of state-action pair (s,a) at step h during the current stage k. Temporary Q-values At the beginning of a stage, let say time t, we initialize J temporary Q-values as ^t,j_h(s,a) = r_h(s,a) + (H-h-1) for j∈[J] and r_0 some pseudo-reward. Then as long as (s^t_h, a^t_h,h) remainswithin a stage we update recursively the temporary Q-values^t+1,j_h(s,a) =(1- w_j, ) ^t,j_h(s,a) + w_j,[r_h(s,a) + ^t_h+1(s^t_h+1)], (s,a) = (s^t_h, a^t_h)^t,j_h(s,a)otherwise,where =^t_h(s,a) is the number of visits, w_j, is a sequence of i.i.d. random variables w_j,∼Β(1/κ, ( + n_0) / κ) with κ >0 being some posterior inflation coefficient and n_0 a number of pseudo-transitions. Policy Q-values Next we define the policy Q-values that is updated at the end of a stage. Let say for state-action pair (s,a) at step h an stage ends at time t. This policy Q-values is then given by the maximum of temporary Q-values _h^t+1=max_j∈[J]^t+1,j_h(s,a). Then the policy Q-values is constant within a stage. The value used to defined the targets is ^t+1_h(s) = max_a ∈^t+1_h(s,a). The policy used to interact with the environment is greedy with respect to the policy Q-values π^t+1_h(s) ∈_a ∈^t+1_h(s,a) (we break ties arbitrarily). §.§ Regret boundWe fix δ∈(0,1) and the number of posterior samples J ≜⌈ c_J ·log(2SAHT/δ) ⌉, where c_J = 1/log(2/(1 + Φ(1))) and Φ(·) is thecumulative distribution function (CDF) of a normal distribution. Note that J has a logarithmic dependence on S,A,H,T, and 1/δ.We now state the regret bound of with a full proof in Appendix <ref>.Consider a parameter δ∈ (0,1). Let κ≜ 2(log(8 SAH/δ) + 3log(π(2T+1))), n_0 ≜⌈κ(c_0 + log_17/16(T)) ⌉, ≜ 2, wherec_0 is an absolute constant defined in (<ref>); see Appendix <ref>. Then for , with probability at least 1-δ, ^T = ( √(H^5 SAT)+ H^3 S A ).Discussion The regret bound of Theorem <ref>coincides (up to a logarithmic factor) with the bound of the algorithm with Hoeffding-type bonuses from <cit.>.Up to a H factor, our regret matches the information-theoretic lower bound Ω(√(H^3SAT)) <cit.>. This bound could be achieved (up to logarithmic terms) in model-free algorithms by using Bernstein-type bonuses and variance reduction <cit.>. We keep these refinements for future research as the main focus of our paper is onthe novel randomization technique and its use to construct computationally tractable model-free algorithms.Computational complexity is a model-free algorithm, and thus gets the (HSA) space complexity as , recall that we set J=(1). The per-episode time-complexity is also similar and of order (H) .§ RANDOMIZED Q-LEARNING FOR METRIC SPACES In this section we present a way to extend to general state-action spaces. 
We start from the simplest approach with predefined ε-net type discretization of the state-action space × (see ), and then discuss an adaptive version of the algorithm, similar to one presented by <cit.>. §.§ Assumptions To pose the first assumption, we start from a general definition of covering numbers.Let (M, ρ) be a metric space. A setof open balls of radius ε is called an ε-coverof M if M ⊆⋃_B ∈ B. The cardinality of the minimal ε-cover is called covering number N_ε of (M,ρ). We denote the corresponding minimal ε-covering by _ε. A metric space (M, ρ) has a covering dimension d_c if ∀ε > 0 : N_ε≤ C_N ε^-d_c, where C_N is a constant. The last definition extends the definition of dimension beyond vector spaces. For example, is case of M = [0,1]^d the covering dimension of M is equal to d. For more details and examples see e.g. <cit.>.Next we are ready to introduce the first assumption. [Metric Assumption] Spacesandare separable compact metric spaces with the corresponding metrics ρ_ and ρ_. The joint space × endowed with a product metric ρ that satisfies ρ((s,a),(s',a')) ≤ρ_(s,s') + ρ_(a,a'). Moreover, the diameter of × is bounded by d_max, and × hascovering dimension d_c with a constant C_N. This assumption is, for example, satisfied for the finite state and action spaces endowed with discrete metrics ρ_(s,s') = {s ≠ s'}, ρ_(a,a') = {a ≠ a'} with d_c = 0, C_N = SA and S and A being the cardinalities of the state and action spaces respectively. The above assumption also holds in the case ⊆ [0,1]^d_ and ⊆ [0,1]^d_ with d_c = d_ + d_. The next two assumptions describe the regularity conditions of transition kernel and rewards. [Reparametrization Assumption] The Markov transition kernel could be represented as an iterated random function. In other words, there exists a measurable space (Ξ, _Ξ) and a measurable function F_h(×) ×Ξ→, such that s_h+1∼ p_h(s_h,a_h)s_h+1 = F_h(s_h,a_h, ξ_h) for a sequence of independent random variables {ξ_h}_h∈[H]. This assumption is naturally satisfied for a large family of probabilistic model, see <cit.>. Moreover, it has been utilized by the RL community both in theory <cit.> and practice <cit.>. Essentially, this assumption holds for Markov transition kernels over a separable metric space, see Theorem 1.3.6 by <cit.>. However, the function F_h could be ill-behaved. To avoid this behaviour, we need the following assumption. [Lipschitz Assumption] The function F_h(·, ξ_h) is L_F-Lipschitz in the first argument for almost every value of ξ_h. Additionally, the reward function r_h ×→ [0,1] is L_r-Lipschitz.This assumption is commonly used in studies of the Markov processes corresponding to iterated random functions, see <cit.>. Moreover, this assumption holds for manycases of interest. As main example, it trivially holds in tabular and Lipschitz continuous deterministic MDPs <cit.>. Notably, this observation demonstrates that Assumption <ref> does not necessitate Lipschitz continuity of the transition kernels in total variation distance, since deterministic Lipschitz MDPs are not continuous in that sense. Additionally, incorporation of an additive noise to deterministic Lipschitz MDPs will lead toAssumption <ref> withL_F = 1.Furthermore, it is possible to show that Assumption <ref> implies other assumptions stated in the literature. For example, it implies that the transition kernel is Lipschitz continuous in 1-Wasserstein metric, and thatandare both Lipschitz continuous. Let Assumption <ref>,<ref>,<ref> hold. 
Then the transition kernels p_h(s,a) are L_F-Lipschitz continuous in 1-Wasserstein distance_1(p_h(s,a), p_h(s',a')) ≤ L_F ·ρ((s,a), (s',a')),where 1-Wasserstein distance between two probability measures on the metric space (M,ρ) is defined as _1(ν, η) = sup_fis1-Lipschitz∫_M f ν - ∫_M f η.Let Assumption <ref>,<ref>,<ref> hold. Then _h and _h are Lipschitz continuous with Lipschitz constant L_V,h≤∑_h'=h^H L_F^h'-h L_r.The proof of these lemmas is postponed to Appendix <ref>.For a more detailed exposition on 1-Wasserstein distance we refer to the book by <cit.>. The first assumption was studied by <cit.> in the settingof model-based algorithms in metric spaces. We are not aware of any natural examples of MDPs with a compact state-action space where the transition kernels are Lipschitz in _1 but fail to satisfyAssumption <ref>. §.§ Algorithms In this section, following <cit.>, we present algorithm that combines a simple non-adaptive discretization and an idea of stages by <cit.>.We assume that we have an access to all Lipschitz constants L_r, L_F, L_V≜ L_V,1. Additionally,we haveaccess to the oracle that computes ε-cover _ε of the space × for any predefined ε > 0[Remark that the simple greedy algorithm can generate ε-cover of size N_ε/2, that will not affect the asymptotic behavior of our regret bounds, see <cit.>.]. Counts and stages Let n^t_h(B) ≜∑_i=1^t-1{ (s^i_h, a^i_h) ∈ B } be the number of visits of the ball B ∈_ε at step h before episode t. Let e_k = ⌊ (1 + 1/H)^k · H ⌋ be length of the stage k ≥ 0 and, by convention, e_-1 = 0. We say that (B,h)belongs to the k-th stage at the beginning of episode t if n^t_h(B) ∈ [∑_i=0^k-1 e_i, ∑_i=0^k e_i ). Let ^t_h(B)≜ n^t_h(s,a) - ∑_i=0^k-1 e_i be the number of visits of the ball B at step h during the current stage k.Temporary Q-values At the beginning of a stage, let say time t, we initialize J temporary Q-values as ^t,j_h(B) =H for j∈[J] andsome pseudo-reward. Then within a stage k we update recursively the temporary Q-values^t+1,j_h(B) =(1- w_j, ) ^t,j_h(B) + w_j,[r_h(s^t_h,a^t_h) + ^t_h+1(s^t_h+1)], (s,a) = (s^t_h, a^t_h)^1,j_h(B)otherwise,where =^t_h(B) is the number of visits, w_j, is a sequence of i.i.d random variables w_j,∼Β(1/κ, ( + n_0(k)) / κ) with κ >0 some posterior inflation coefficient and n_0(k) a number of pseudo-transitions. The important difference between tabular and metric settingsis the dependence on the pseudo-count n_0(k) on k in the latter case, since here thepriorisused to eliminatethe approximation error. Policy Q-values Next, we define the policy Q-values that are updated at the end of a stage. Let us fix a ball B at step h and suppose that the currents stage ends at time t. Then the policy Q-values are given by the maximum of the temporary Q-values _h^t+1(B) =max_j∈[J]^t+1,j_h(B). The policy Q-values are constant within a stage. The value used to define the targets is computed on-flight using the formula ^t_h(s) = max_a ∈^t_h(ψ_ε(s,a)), where ψ_ε×→_ε is a quantization map, that assigns each state-action pair (s,a) to a ball B ∋ (s,a). 
The policy used to interact with the environment is greedy with respect to the policy Q-values and also computed on-flight π^t_h(s) ∈_a ∈^t_h(ψ_ε(s,a)) (we break ties arbitrarily).A detail description of is provided in Algorithm <ref> in Appendix <ref>.§.§ Regret Bound We fix δ∈(0,1), the discretization level ε > 0 and the number of posterior samples J ≜⌈c̃_J · ( log(2C_NHT/δ) + d_c log(1/ε) ) ⌉,where c̃_J = 1/log(4/(3 + Φ(1))) and Φ(·) is thecumulative distribution function (CDF) of a normal distribution. Note that J has a logarithmic dependence on H,T,1/ε and 1/δ. For the regret-optimal discretization level ε = T^-1/(d_c + 2), the number J is almostindependent of d_c . Let us note that the role of prior in metric spaces is much higher than in the tabular setting. Another importantdifference is dependence of the prior count on the stage index. In particular, we haven_0(k) = ⌈_0 + κ + ε L/H-1· (e_k + _0 + κ) ⌉, _0 = (c_0 + 1 + log_17/16(T)) ·κwherec_0 is an absolute constant defined in (<ref>) ( see Appendix <ref>), κ is the posterior inflation coefficient and L = L_r + (1+L_F)L_V is aconstant. We now state the regret bound of with a full proof being postponedto Appendix <ref>.Suppose that N_ε≤ C_N ε^-d_c for all ε>0 and some constant C_N>0. Consider a parameter δ∈ (0,1) and take an optimal level of discretization ε = T^-1/(d_c+ 2). Let κ≜ 2(log(8HC_N/δ) + d_c log(1/ε) + 3log(π(2T+1))), ≜ 2. Then it holds for , with probability at least 1-δ, ^T = ( H^5/2 C_N^1/2 T^d_c+1/d_c+2 + H^3 C_N T^d_c/d_c+2 + L T^d_c+1/d_c+2). We can restore the regret bound in the tabular setting by letting d_c = 0 and C_N = SA, where S is the cardinality of the state-space, and A is the cardinality of the action-space. Discussion From the point of view of instance-independent bounds, our algorithm achieves the same result as <cit.> and <cit.>, that matches the lower bound Ω(H T^d_c+1/d_c+2) by <cit.> in dependence on budget T and covering dimension d_c. Notably, as discussed by <cit.>, the model-based algorithm such as <cit.> does not achieves optimal dependence in T due to hardness of the transition estimation problem.Computational complexity For a fixed level of discretization ε, our algorithm has a space complexity of order (H_ε). Assuming that the computation of a quantization map ψ_ε has (1) time complexity, we achieve a per-episode time complexity of (HA) for a finite action space and (H N_ε) for an infinite action space in the worst case due to computation of _a ∈_h(ψ_ε(s,a)). However, this can be improved to (H) if we consider adaptive discretization <cit.>. Adaptive discretization Additionally, we propose a way to combine with adaptive discretization by<cit.>.This combination results in two algorithms: and . The second one could achieve the instance-dependent regret bound that scales with a zooming dimension, the instance-dependent measure of dimension. We will follow <cit.> in the exposition of the required notation. For any (s,a) ∈×, the stage-dependent sub-optimality gap is defined as _h(s,a) = _h(s) - _h(s,a). This quantity is widely used in the theoretical instance-dependent analysis of reinforcement learningand contextual bandit algorithms. The near-optimal set of × for a given value ε defined as Z^ε_h = { (s,a) ∈×|_h(s,a) ≤ (H+1) ε}. 
The main insight of this definition is that essentially we are interested in a detailed discretizationof the near-optimal set Z^ε_h for small ε, whereas all other state-action pairs could be discretized in a more rough manner.Interestingly enough, Z^ε_h could be a lower dimensional manifold, leading to the following definition.The step-h zooming dimension d_z,h with a constant C_N,h and a scaling factor ρ > 0 is given byd_z,h = inf{ d>0 : ∀ε > 0 N_ε(Z^ρ·ε_h) ≤ C_N,hε^-d}.Under some additional structural assumptions on _h, it is possible to show that the zooming dimension could be significantly smallerthan the covering dimension, see, e.g., Lemma 2.8 in <cit.>. However, at the same time, it has been shown that d_z,h≥ d_ - 1, where d_ is a covering dimension of the state space. Thus, the zooming dimension allows adaptation to a rich action space but not a rich state space.Given this definition, it is possible to define define an adaptive algorithm that attains the following regret guaranteesConsider a parameter δ∈ (0,1). For a value κ that depends on T, d_c ad δ, for the following holds with probability at least 1-δ, ^T = (H^3 +H^3/2∑_h=1^HT^d_z,h+1/d_z,h+2),where d_z,h is the step-h zooming dimension and we ignore all multiplicative factors in the covering dimension d_c, log(C_N), and Lipschitz constants. We refer to Appendix <ref> to a formal statement and a proof.§ EXPERIMENTS In this section we present the experiments we conducted for tabular environments usinglibrary <cit.>. We also provide experiments in non-tabular environment inAppendix <ref>. Environment We use a grid-world environment with 100 states (i, j) ∈ [10]×[10] and 4 actions (left, right, up and down). The horizon is set to H=50. When taking an action, the agent moves in the corresponding direction with probability 1-ϵ, and moves to a neighbor state at random with probability ϵ=0.2. The agent starts at position (1, 1). The reward equals to 1 at the state (10, 10) and is zero elsewhere. r0.5< g r a p h i c s > Regret curves of and baselines in a grid-world environment for H=50 and transition noise ϵ = 0.2. The average is over 4 seeds.Variations of randomized Q-learning For the tabular experiment we use the algorithm, described in Appendix <ref> as it is the version of randomized Q-learning that is the closest to the baseline . Note that, we compare the different versions of randomized Q-learning in Appendix <ref>. Baselines We compare algorithm to the following baselines: (i) <cit.> (ii) <cit.>(iii) , a version of using real– time dynamic programming <cit.> (iv) <cit.> and (v) <cit.>. For the hyper-parameters used for these baselines refer to Appendix <ref>. Results Figure <ref> shows the result of the experiments. Overall, we see that outperforms algorithm on tabular environment, but still degrades in comparison to model-based approaches, that is usual for model-free algorithms in tabular environments. Indeed, using a model and backward induction allows new information to be more quickly propagated. But as counterpart, has a better time-complexity and space-complexity than model-based algorithm, see Table <ref> in Appendix <ref>. § CONCLUSION This paper introduced the algorithm, a new model-free algorithm that achieves exploration without bonuses. It utilizes a novel idea of learning rate randomization, resulting in provable sample efficiency with regret of order (√(H^5SAT)) in the tabular case. We also extend to the case of metric state-action space by using proper discretization techniques. 
The proposed algorithms inherit the good empirical performance of model-based Bayesian algorithm such that while keeping the small space and time complexity of model-free algorithm. Our result rises following interesting open questions for a further research. Optimal rate forWe conjecture that could get optimal regret in the tabular setting if coupled with variance reductions techniques as used by <cit.>.However, obtaining such improvements is not straightforward due to the intricate statistical dependencies involved in the analysis of . Beyond one-step learning We observe a large gap in the experiments between Q-learning type algorithm that do one-step planning and e.g. algorithm that does full planning or that does one-step planning with full back-up (expectation under transition of the model)for all actions. Therefore, it would interesting to study also algorithms that range between these two extremes <cit.>.§ ACKNOWLEDGMENTS The work of D. Tiapkin, A. Naumov, and D. Belomestny were supported by the grant for research centers in the field of AI provided by the Analytical Center for the Government of the Russian Federation (ACRF) in accordance with the agreement on the provision of subsidies (identifier of the agreement 000000D730321P5Q0002) and the agreement with HSE University No. 70-2021-00139. E. Moulines received support from the grant ANR-19-CHIA-002 SCAI and parts of his work has been done under the auspices of Lagrange Center for maths and computing. P. Ménard acknowledges the Chaire SeqALO (ANR-20-CHIA-0020-01). This research was supported in part through computational resources of HPC facilities at HSE University. plainnatPART:Appendix § NOTATION Let (,) be a measurable space and () be the set of all probability measures on this space. For p ∈() we denote by _p the expectation w.r.t. p. For random variable ξ: → notation ξ∼ p means Law(ξ) = p. We also write _ξ∼ p instead of _p. For independent (resp. i.i.d.) random variables ξ_ℓ p_ℓ (resp. ξ_ℓ p), ℓ = 1, …, d, we will write _ξ_ℓ p_ℓ (resp. _ξ_ℓ p), to denote expectation w.r.t. product measure on (^d, ^⊗ d). For any x ∈ we denote δ_x a Dirac measure supported at point x.For any p, q ∈() the Kullback-Leibler divergence (p, q) is given by(p, q) ≜_p[log p/ q], p ≪ q, + ∞,otherwise.For any p ∈() and f: →, p f = _p[f]. In particular, for any p ∈_d and f: {0, …, d}→, pf =∑_ℓ = 0^d f(ℓ) p(ℓ). Define _p(f) = _s' ∼ p[(f(s')-p f)^2] = p[f^2] - (pf)^2. For any (s,a) ∈, transition kernel p(s,a) ∈() and f → define pf(s,a) = _p(s,a)[f] and _p[f](s,a) = _p(s,a)[f].Let (, ρ) be a metric space, then the 1-Wasserstein distance between p,q ∈() is defined as _1(p,q) = sup_fis1 -Lipschitz_p[f] - _q[f].We write f(S,A,H,T) = (g(S,A,H,T,δ)) if there exist S_0, A_0, H_0, T_0, δ_0 and constant C_f,g such that for any S ≥ S_0, A ≥ A_0, H ≥ H_0, T ≥ T_0, δ < δ_0, f(S,A,H,T,δ) ≤ C_f,g· g(S,A,H,T,δ). We write f(S,A,H,T,δ) = (g(S,A,H,T,δ)) if C_f,g in the previous definition is poly-logarithmic in S,A,H,T,1/δ.For α, β > 0, we define Β(α, β) as a beta distribution with parameters α, β. For setsuch that || < ∞ define () as a uniform distribution over this set. 
In particular, [N] is a uniform distribution over a set [N].For a measure p ∈([0,b]) supported on a segment [0,b] (equipped with a Borel σ-algebra) and a number μ∈ [0,b] we define (p, μ) ≜inf{(p,q): q ∈([0,b]), p ≪ q, _X ∼ q[X] ≥μ} .As the Kullback-Leibler divergence this quantity admits a variational formula by Lemma 18 of <cit.> up to rescaling for any u ∈ (0, b)(p, μ) = max_λ∈[0,1/(b-μ)]_X∼ p[ log( 1-λ (X-μ))].§ DESCRIPTION OF RANDQLIn this appendix we describe and algorithms. §.§ algorithm We recall that n^t_h(s,a) = ∑_i=1^t-1{ (s^i_h, a^i_h) = (s,a) } is the number of visits of state-action pair (s,a) at step h before episode t.We start by initializing the ensemble of Q-values, the policy Q-values, and values to an optimistic value _h^t,j(s,a) = _h^1(s,a) = ^1_h(s,a) = r_h(s,a)+r_0 (H-h) for all (j,h,s,a)∈[J]×[H]×× and r_0>0 some pseudo-rewards.At episode t we update the ensemble of Q-values as follows, denoting by n=n^t_h(s,a) the count, w_j,n∼Β(H, n) the independent learning rates, ^t+1,j_h(s,a) =(1- w_j,n) ^t,j_h(s,a) + w_j,n_h^t,j(s,a), (s,a) = (s^t_h, a^t_h)^t,j_h(s,a)otherwise,where we defined the target _h^t,j(s,a) as a mixture between the usual target and some prior target with mixture coefficient _n,j∼Β(n, n_0) and n_0 the number of prior samples,_h^t,j(s,a) = _j,n [r_h(s,a) + ^t_h+1(s^t_h+1)] + (1-_j,n) [ r_h(s,a) + r_0 (H-h-1)] .It is important to note that in our approach, we need to re-inject prior targets to avoid forgetting their effects too quickly due to the aggressive learning rate. Indeed, the exponential decay of the prior effect can hurt exploration. We observe that the ensemble Q-value only averages uniformly over the last 1/H fraction of the targets, as the expected value of the learning rate is [w_j,n] = H/(n+H). Since [1-_j,n] = n_0(n+n_0) the weight put on the prior sample in expectation, when we unfold the definition of _h^t+1,j, is of order H/n · n/H · n_0/(n+n_0) =n_0/(n+n_0),which is consistent with the usual prior forgetting in Bayesian learning. In , we avoid forgetting the prior too quickly by resetting the temporary Q-value to a prior value at the beginning of each stage. The policy Q-values are obtained by taking the maximum among the ensemble of Q-values_h^t+1(s,a) = max_j∈[J]_h^t+1,j(s,a) .The policy is then greedy with respect to the policy Q-valuesπ_h^t+1(s) ∈_a∈_h^t+1(s,a) and the value is ^t+1_h(s)=max_a∈_h^t+1(s,a). The complete procedure is detailed in Algorithm <ref>.§.§ algorithm To create an algorithm that is more similar to , it is possible to select a Q-value at random from the ensemble of Q-values, rather than using the maximum Q-value_h^t(s,a) = _h^t,j_t(s,a)withj_t ∼[J].In this case we also need to update each Q-value in the ensemble with its corresponding target, see <cit.>, _h^t,j(s,a) = _j,n [r_h(s,a) + ^t,j_h+1(s^t_h+1)] + (1-_j,n) [ r_h(s,a) + r_0 (H-h-1)]where ^t,j_h(s)=max_a∈_h^t,j(s,a). We name this new procedure and detail it in Algorithm <ref>. § WEIGHT DISTRIBUTION IN In this section we study the joint distribution of weights over all targets in algorithm, described in details in Appendix <ref>. To do it, we describe a very useful distribution, defined by <cit.>. 
We say that a random vector (X_1,…,X_n,X_n+1) has a generalized Dirichlet distribution (α_1,…,α_n;β_1,…,β_n) if X_n+1 = 1 - (X_1 + …+ X_n) and (X_1,…,X_n) it has the following density over the simplex {x_1,…,x_n : x_1 + … + x_n≤ 1}, p(x) = ∏_i=1^n 1/B(α_i, β_i) x_i^α_i-1 (1 - x_1 - … - x_i)^γ_ifor x_1 + … + x_n ≤ 1, x_j ≥ 0 for j=1,…,n, and γ_j = β_j - α_j+1 - β_j+1 for j=1,…,n-1 and γ_n = β_n - 1. If we set x_n+1 = 1-(x_1 + … + x_n) then weobtain a homogeneous formulap(x) = ∏_i=1^n 1/B(α_i, β_i) x_i^α_i-1( ∑_j=i+1^n+1 x_j )^γ_i Alternative characterization of generalized Dirichlet distribution could be given using independent beta-distributed random variables Z_1,…,Z_n with Z_i ∼Β(α_i,β_i) as followsX_1= Z_1, X_j= Z_j(1 - X_1 - … - X_j-1) = Z_j ∏_i=1^j-1(1-Z_i) forj = 2,3,…,n X_n+1 = 1 - X_1 - … - X_n = ∏_i=1^n (1 - Z_i) Therefore, for algorithm without prior re-injection we have the following formula^t,j_h(s,a) =∑_i=0^n^t_h(s,a) W^i_j, n( r_h(s^ℓ^i_h,a^ℓ^i_h) +^ℓ^i_h+1(s^ℓ^i_h+1) ),for n = n^t_h(s,a) andweights are defined as followsW^0_j, n = ∏_q=0^n-1 ( 1 - w_j,q),W^i_j,n = w_j,i-1·∏_q=i^n-1 (1 - w_j,q), i ≥ 1.And, moreover, we have that this vector of weights has the generalized Dirichlet distribution(W^n_n,j, W^n-1_n,j, …, W^1_n,j, W^0_n,j) ∼(H, H, …, H; n+n_0,…, n_0+1, n_0).That is, weights generated by the procedure is an inverted generalized Dirichlet random vector, that induces additional similarities with a usual posterior sampling approaches. Notably, that for H=1 we recover exactly usual Dirichlet distribution, as in the setting of .In the setting of the analysis, the main feature of this distribution is asymmetry in attitude to the order of components. In particular, the expectation of the prior weight W^0_n,j is ∏_i=1^n ( 1 - H/i+H) ∼ n^-H that leads to too rapid forgetting of the prior information. § PROOFS FOR TABULAR ALGORITHM§.§ AlgorithmIn this section we describe in detail the tabular algorithms and the ways we will analyze them. We also provide some notations that will be used in the sequel. Let n^t_h(s,a) be the number of visits of (s,a,h) (i.e., of the state-action pair (s,a) at step h) at the beginning of episode t: n^t_h(s,a) = ∑_i=1^t-1{ (s^i_h, a^i_h) = (s,a) }. In particular, n^T+1_h(s,a) is the number of visits of (s,a,h) after all episodes.Let e_k = ⌊ (1 + 1/H)^k · H ⌋ be the length of each stage for any k ≥ 0 and, by convention, e_-1 = 0.We will say that at the beginning of episode t a triple (s,a,h) is in k-th stage if n^t_h(s,a) ∈ [∑_i=0^k-1 e_i, ∑_i=0^k e_i ).Let ^t_h(s,a) be the number of visits of state-action pair during the current stage at the beginning of episode t. Formally, it holds ^t_h(s,a) = n^t_h(s,a) - ∑_i=0^k-1 e_i, where k is the index of current stage. Let κ > 0 be the posterior inflation coefficient, n_0 be the number of prior transitions, and J be the number of temporary Q-functions. Let ^t,j_h be the j-th temporary Q-function and ^t_h be the policy Q-function at the beginning of episode t. 
We initialize them as follows^1_h(s,a) = r_h(s,a) +(H - h - 1), ^1,j_h(s,a) = r_h(s,a) + (H-h-1),We can treat this initialization as a setting prior over n_0 pseudo-transitions to artificial state s_0 with > 1 reward for each interaction.For each transition we perform the following update of temporary Q-functions^t+1/2,j_h(s,a) =(1- w^k_j, ) ·^t,j_h(s,a) + w^k_j,[r_h(s,a) + ^t_h+1(s^t_h+1)], (s,a) = (s^t_h, a^t_h)^t,j_h(s,a)otherwise,where =^t_h(s,a) is the number of visits of (s,a,h) during the current stage at the beginning of episode t, k is the index of the current stage, and w^k_j, is a sequence of independent beta-distribution random variables w^k_j,∼Β(1/κ, ( + n_0) / κ). Here we slightly abuse the notation by dropping the dependence of weights w^k_j, on the triple (h,s,a) in order to simplify the exposition. In the case that the explicit dependence is required, we will call these weights as w^k,h_j,(s,a).Next we define the stage update as follows^t+1_h(s,a)= max_j∈[J]^t+1/2,j_h(s,a)^t_h(s,a) = ⌊ (1 + 1/H)^k H ⌋ ^t_h(s,a)otherwise ^t+1,j_h(s,a)= r_h(s,a) +(H-h+1) ^t_h(s,a) = ⌊ (1 + 1/H)^k H ⌋ ^t+1/2,j_h(s,a)otherwise ^t+1_h(s)= max_a ∈^t+1_h(s,a)π^t+1_h(s)∈_a ∈^t+1_h(s,a),where k is the current stage. In other words, we update ^t+1 with temporary values of ^t+1/2,j, and then, if the change of stage is triggered, reinitialize ^t+1,j_h(s,a) for all j. For episode t we will call k^t_h(s,a) the index of stage where ^t_h(s,a) was updated (and k^t_h(s,a) = -1 if there was no update). For all t we define τ^t_h(s,a) ≤ t as an episode when the stage update happens. In other words, for any t the following holds^t+1_h(s,a) = max_j∈[J]^τ^t_h(s,a)+1/2,j_h(s,a),where τ^t_h(s,a) = 0 and e_k = 0 if there was no updates. To simplify the notation we will omit dependence on (s,a,h) where it is deducible from the context.To simplify the notation, we can extend the state spaceby an additional state s_0 that will be purely technical and used in the proofs. This state has the prescribed value function _h(s_0) = (H-h) and could be treated as a absorbing pseudo-state with reward .We notice that in this case we use e_k samples to compute ^τ^t_h(s,a)+1/2,j for k = k^t_h(s,a). For this k we define ℓ^i_k,h(s,a) as a time of i-th visit of state-action pair (s,a) during k-th stage. Then we have the following decomposition^τ^t+1/2,j_h(s,a) = r_h(s,a) + ∑_i=0^e_k W^i_j, e_k, k^ℓ^i_h+1(s^ℓ^i_h+1),where we drop dependence on k and (s,a,h) in ℓ^i to simplify notations, and use the convention s^ℓ^0_k,h(s,a)_h+1 = s_0, and the following aggregated weightsW^0_j, n, k = ∏_q=0^n-1 ( 1 - w^k_j,q),W^i_j,n,k = w^k_j,i-1·∏_q=i^n-1 (1 - w^k_j,q), i ≥ 1.We will omit the dependent on the stage index k when it is not needed for the statement. However, we notice that these vectors, for different stage k, will be independent. By the properties of generalized Dirichlet distribution it is possible to show the following resultFor any fixed n > 0, the random vector (W^0_j,n,W^1_j,n, …, W^n_j,n) has a Dirichlet distribution (n_0/κ, 1/κ, …, 1/κ). Using the Dirichlet random variate generation from marginal beta distributions, it is sufficient to prove that for all i∈{0,…,n}, W^n-i_j,n,k = (1-W^n_j,n,k -…- W^n-i+1_j,n,k) w_j,n-i-1^k, with the convention w_j,-1^k = 1. This is trivial for i=0, as W^n_j,n,k = w_j,n-1^k. 
Now, if this is true for some i, then, for i+1∈{0,…,n}, we haveW^n-i-1_j,n,k = w_j,n-i-2^k ∏_q=n-i-1^n-1(1-w_j,q^k)=w_j,n-i-2^k(1-w_j,n-i-1^k) (1-W^n_j,n,k -…- W^n-i+1_j,n,k)= w_j,n-i-2^k (1-W^n_j,n,k -…- W^n-i+1_j,n,k - w_j,n-i-1^k(1-W^n_j,n,k -…- W^n-i+1_j,n,k)_=W^n-i_j,n,k),which finishes the proof. Notably, the expression (<ref>) shows a significant similarity between our method and . It is the reason why we can call this method a model-free posterior sampling, where posterior sampling is performed over the model in a lazy and model-free fashion.§.§ Concentration Let (0,1) ×→_+ and β^B, β^, β (0,1) →_+ be some function defined later on in Lemma <ref>. We define the following favorable events ^⋆(δ)≜{∀ t ∈, ∀ h ∈ [H], ∀ (s,a)∈×, k = k^t_h(s,a): ( 1/e_k∑_i=1^e_kδ__h+1(s^ℓ^i_h+1) ,p_h _h+1(s,a) ) ≤(δ,e_k)/e_k} , ^B(δ)≜{∀ t ∈ [T], ∀ h ∈ [H], ∀ (s,a) ∈×, ∀ j ∈ [J], k = k^t_h(s,a):| ∑_i=0^e_k( W^i_j, e_k, k - [W^i_j, e_k,k] ) ^ℓ^i_h+1(s^ℓ^i_h+1) | ≤ 60 ^2 √(^2 H^2 κβ^B(δ)/e_k + n_0 κ) + 1200H κlog(e_k + n_0 κ) (β^B(δ))^2/e_k + n_0 κ} , ^(δ)≜{∀ t ∈ [T], ∀ h ∈ [H], ∀ (s,a)∈×, k = k^t_h(s,a): |1/e_k∑_i=1^e_k_h+1(s^ℓ^i_k,h(s,a)_h+1)- p_h_h+1(s,a) | ≤√(2^2 H^2 β^(δ)/e_k)} (δ)≜{∑_t=1^T ∑_h=1^H (1+1/H)^H-h| p_h[_h+1 - V^π_t_h+1](s^t_h, a^t_h) - [_h+1 - V^π_t_h+1](s^t_h+1)| ≤ 2 H√(2HT β(δ)). }.We also introduce the intersection of these events, (δ) ≜^⋆(δ) ∩^B(δ) ∩^(δ) ∩(δ). Weprove that for the right choice of the functions ,β^, β^, β, β^ the above events hold with high probability. For any δ∈ (0,1) and for the following choices of functions β,(δ,n)≜log(8SAH/δ) + 3log(π(2n+1)) , β^B(δ)≜log(8SAH/δ) + log(TJ) , β^(δ)≜log(8SAH/δ) + log(2T) , β(δ)≜log(16/δ),it holds that[^⋆(δ)] ≥ 1-δ/8, [^B(δ)]≥ 1-δ/8, [^(δ)]≥ 1-δ/8, [(δ)]≥ 1-δ/8.In particular, [(δ)] ≥ 1-δ/2. From the fact that s^ℓ^i_h+1 are i.i.d. generated from p_h(s,a), Theorem <ref>, and union bound ×× [H] it holds [^⋆(δ)]≥ 1-δ/8. Next we fix all t,h,s,a,j, and denote n = e_k^t_h(s,a). First, we define a filtration of σ-algebras _τ that is sigma-algebra generated by all random variables appeared untill the update (<ref>) in the episode t and step h, before newly generated random weights but after receiving new state s^t_h+1. Formally, we can define it as follows_t,h = σ( { (s^τ_h', a^τ_h', w^k^τ_h'+1, h'_j, ^τ_h'(s^τ_h', a^τ_h') ), ∀τ < t, (h',j) ∈ [H] × [J] }∪{ (s^t_h', a^t_h', s^t_h'+1), ∀ h' ≤ h }∪{ w^k^t_h'+1, h'_j, ^t_h'(s^t_h', a^t_h'), ∀ h' < h, j ∈ [J] }),where we drop dependence on state-action pairs everywhere where it is deducible from the context. Consider a sequence ℓ^1 < … < ℓ^n be an excursion of the state-action pair (s,a) at the step h. Each ℓ^i is a stopping time w.r.t _t,h, so we can consider a stopped filtration (with a shift by 1 in indices) _i-1 = _ℓ^i,h.In other words, this filtration at time-stamp i-1 contains all the information that is available just before generation of random weights for i-th update of temporary Q-functions inside the last stage. 
We notice that under this definition we have[^ℓ_i_h+1(s^ℓ_i_h+1) | _i-1]= ^ℓ_i_h+1(s^ℓ_i_h+1),[W_j,n,k^i | _i-1]= [ w^k_j,i-1∏_ℓ =i^n-1 (1-w^k_j,ℓ) | _i-1] = [W^i_j,n,k],Next, we notice that the joint vector of weights follows the Dirichlet distribution, applying aggregation property and extending the filtration backward by adding fake transitions we can extend sum to n+n_0 summands defining s^ℓ^-i_h+1 = s_0 ∑_i=0^n( W^i_j, n,k - [W^i_j, n,k] ) ^ℓ^i_h+1(s^ℓ^i_h+1) = ∑_q = -n_0+1^n( _q - [_q] ) ^ℓ^q_h+1(s^ℓ^q_h+1).Finally, we notice that marginals of Dirichlet random vector follow Beta distribution, therefore by Proposition <ref> and union bound we conclude [^B(δ)] ≥ 1 -δ/8.To show that ^(δ) > 1-δ/8, it is enough to apply Hoeffding inequality for a fixed number of samples e_k used in empirical mean, and then use union bound of all possible values of (s,a,h) ∈×× [H] and e_k ∈ [T].Next, define the following sequenceZ_t,h ≜ (1+1/H)^H-h([_h+1-V^π^t_h+1](s_h+1^t)-p_h [_h+1- V^π^t_h+1](s^t_h,a^t_h)),t∈ [T], h ∈ [H],It is easy to see that these sequences form a martingale-difference w.r.t filtration _t,h = σ{{ (s^ℓ_h', a^ℓ_h', π^ℓ), ℓ < t, h' ∈ [H] }∪{ (s^t_h', a^t_h', π^t), h' ≤ h }}. Moreover,|Z_t,h|≤ 2 H for all t∈ [T] and h∈ [H]. Hence, the Azuma-Hoeffding inequality implies (|∑_t=1^T ∑_h=1^H Z_t,h|> 2 H√(2 t H ·β(δ))) ≤ 2exp(-β(δ))=δ/8,therefore [(δ)] ≥ 1 - δ/8.§.§ OptimismIn this section we prove that our estimate of Q-function ^ t_h(s,a) is optimistic, that is the event_≜{∀ t ∈ [T], h ∈ [H], (s,a) ∈×:^t_h(s,a) ≥_h(s,a) }.holds with high probability on the event ^⋆(δ).Define constantsc_0 ≜8/π( 4/√(log(17/16)) + 8 + 49· 4√(6)/9)^2 + 1.andc_J ≜1/log( 2/1 + Φ(1)),where Φ(·) is a CDF of a normal distribution.Assume that J = ⌈c_J ·log(2SAHT/δ)⌉, κ = 2β^⋆(δ, T), = 2, and n_0 = ⌈ (c_0 + 1 + log_17/16(T))·κ⌉. Then conditionally on ^⋆(δ) theevent_≜{ ∀ t ∈ [T],∀ h ∈ [H],∀ (s,a) ∈×: max_j ∈ [J]{∑_i=0^e_k W^i_j,e_k,k_h+1(s^ℓ^i_t,h(s,a)_h+1)}≥ p_h _h+1(s,a), k = k^t_h(s,a) }holds with probability at least 1-δ/2. Let us fix t ∈ [T], h ∈ [H], (s,a) ∈×, and j ∈ [J].By Lemma <ref>, we have that the vector (W^i_j,e_k,k)_i=0,…,e_k has Dirichlet distribution.Note that _h+1(s^ℓ^0_h+1) = (H-h-1) is an upper bound on V-function andthe weight of the first atom is α_0 ≜ n_0/κ≥c_0 + log_17/16(T) for c_0 defined in (<ref>). Define a measure _e_k = n_0 - 1/e_k + n_0 - 1δ__h+1(s_0) + ∑_i=1^e_k1/e_k + n_0 - 1δ__h+1(s^ℓ^i_h+1). Since p_h _h+1(s,a) ≤ H-h-1, we can apply Lemma <ref> with a fixed ε = 1/2 conditioned on independent samples { s^ℓ_i_h+1}_i=1^e_k from p_h(s,a)[ ∑_i=0^e_k W^i_j,e_k,k _h+1(s^ℓ^i_t,h(s,a)_h+1) ≥ p_h _h+1(s,a) |{ s^ℓ_i_h+1}_i=1^e_k] ≥1/2( 1 - Φ(√(2 (e_k + n_0 -κ) (_e_k, p_h _h+1(s,a)) /κ))),where Φ is a CDF of a normal distribution. Combining Lemma <ref> and the event ^⋆(δ)(e_k + n_0 - κ) ( _e_k, p_h _h+1(s,a))≤ e_k ( _e_k, p_h _h+1(s,a)) ≤β^⋆(δ, T),where _e_k = 1/e_k∑_i=1^e_kδ__h+1(s^ℓ^i_h+1), and, as a corollary [ ∑_i=0^e_k W^i_j,e_k,k_h+1(s^ℓ^i_t,h(s,a)_h+1) ≥ p_h _h+1(s,a) |^⋆(δ), { s^ℓ^i_h+1}_i=1^e_k] ≥1/2( 1 - Φ( √(2β^⋆(δ, T) /κ)) ). 
By taking κ = 2β^⋆(δ, T) we have a constant probability of being optimistic∑_i=0^e_k W^i_j,e_k,k_h+1(s^ℓ^i_t,h(s,a)_h+1) ≥ p_h _h+1(s,a) |^⋆(δ) ≥1 - Φ(1)/2≜γ.Next, using a choice J = ⌈log(2SAHT/δ) / log(1/(1-γ)) ⌉ = ⌈ c_J ·log(2SAHT/δ)⌉ [ max_j ∈ [J]{∑_i=0^e_k W^i_j,e_k,k_h+1(s^ℓ^i_t,h(s,a)_h+1) }≥ p_h _h+1(s,a) |^⋆(δ) ] ≥ 1 - (1 - γ)^J≥ 1 - δ/2SAHT·By a union bound we conclude the statement.Next we provide a connection between ^ and ^.It holds that ^⊆^.We proceed by a backward induction over h. Base of induction h = H+1 is trivial. Next by Bellman equations for ^t_h and _h[^t_h - _h](s,a) = max_j ∈ [J]{∑_i=0^n W^i_j,n^ℓ^i_h+1(s^ℓ^i_h+1)} - p_h _h+1(s,a),where n = e_k^t_h(s,a) and we drop dependence on k,t,h,s,a in ℓ^i. By induction hypothesis we have ^ℓ^i_h+1(s') ≥^ℓ^i_h+1(s', π^⋆(s')) ≥_h+1(s', π^⋆(s')) = _h+1(s') for any i, thus[^t_h - _h](s,a) ≥max_j ∈ [J]{∑_i=0^n W^i_j,n_h+1(s^ℓ^i_h+1)} - p_h _h+1(s,a).By the definition of event ^(δ) we conclude the statement.Assume that J = ⌈c_J ·log(2SAHT/δ)⌉, κ = 2β^⋆(δ, T), = 2, and n_0 = ⌈ (c_0 + 1 + log_17/16(T)) ·κ⌉, where c_0 is defined in (<ref>) and c_J is defined in (<ref>). Then ^|^⋆(δ)≥ 1-δ/2.§.§ Regret Bound Let us define the main event '(δ) = (δ) ∩^. On this event we have the following corollary that connects with with Hoeffding bonuses. Define the following quantityβ^max(δ) = max{κ, n_0/κ, β^B(δ), β^(δ), β(δ), log(T+n_0)} = (log(SATH/δ)).Assume conditions of Proposition <ref> hold. Let t ∈ [T], h∈[H], (s,a) ∈×. Define k = k^t_h(s,a) and let ℓ^1 < … < ℓ^e_k be a excursions of (s,a,h) until the previous stage. Then on the event '(δ) the following bound holds for k ≥ 00≤^t_h(s,a) - _h(s,a) ≤1/n∑_i=1^n [^ℓ^i_h+1(s^ℓ^i_h+1) - _h+1(s^ℓ^i_h+1) ]+ ^t_h(k),where^t_h(k) = 61^2H(β^max(δ))/√(e_k) + 1201 H (β^max(δ))^4/e_k. The lower bound follows from the definition of the event ^. For the upper bound we first apply the decomposition for ^t_h(s,a) and the definition of event ^B(δ) from Lemma <ref>^t_h(s,a)= r_h(s,a) + max_j ∈ [J]{∑_i=0^e_k W^i_j,e_k^ℓ^i_h+1(s^ℓ^i_h+1) }≤ r_h(s,a) + 1/e_k + n_0∑_i=1^e_k^ℓ^i_h+1(s^ℓ^i_h+1) + n_0 κ· H/e_k + n_0 + 60 ^2 √(^2 H^2 κβ^B(δ)/e_k + n_0)+ 1200 H κlog(e_k + n_0) (β^B(δ))^2/e_k + n_0.Then, by Bellman equations, ^t_h(s,a) - _h(s,a)≤1/e_k∑_i=1^e_k[ ^ℓ^i_h+1 - _h+1](s^ℓ^i_h+1) +1/e_k∑_i=1^e_k[ _h+1 (s^ℓ^i_h+1)- p_h _h+1(s,a)] + (1200 + 1)H (β^max(δ))^4/e_k + n_0 + 60^2 · H β^max(δ)/√(e_k +n_0)By the definition of event ^(δ)we conclude the statement.Let us define δ^t_h = ^t_h(s^t_h) - V^π^t_h(s^t_h) and ζ^t_h = ^t_h(s^t_h) - _h(s^t_h). Assume conditions of Proposition <ref> hold. Then on event '(δ) = (δ) ∩^, where (δ) is defined in Lemma <ref>, the following upper bound on regret holds^T ≤ H ∑_t=1^T ∑_h=1^H {k^t_h(s^t_h, a^t_h) = -1} + ∑_t=1^T ∑_h=1^H (1+1/H)^H-hξ^t_h + ∑_t=1^T ∑_h=1^H ^t_h,where ξ^t_h = p_h [_h+1 - V^π^t_h+1](s^t_h,a^t_h) - [_h+1 - V^π^t_h+1](s^t_h+1) and ^t_h = ^t_h(s^t_h, a^t_h) ·{k^t_h(s^t_h, a^t_h) ≥ 0} for ^t_h defined in Corollary <ref>. We notice that on the event ^ the following upper bound holds^T ≤∑_t=1^T δ^t_1.Next we analyze δ^t_h. 
By the choice of a^t_h = _a∈^t_h(s^t_h, a), Corollary <ref>, and Bellman equations, we haveδ^t_h= ^t_h(s^t_h) - V^π^t_h(s^t_h) =^t_h(s^t_h, a^t_h) - Q^π^t_h(s^t_h, a^t_h) = ^t_h(s^t_h, a^t_h) - _h(s^t_h, a^t_h) + _h(s^t_h, a^t_h) - Q^π^t_h(s^t_h, a^t_h) ≤ H {N^t_h = 0} + { N^t_h > 0}( 1/N^t_h∑_i=1^N^t_hζ^ℓ^i_t,h_h+1 + ^t_h(s^t_h, a^t_h) + p_h [_h+1 - V^π^t_h+1](s^t_h,a^t_h) ).where k^t_h = k^t_h(s^t_h, a^t_h), N^t_h = e_k^t_h, ℓ^i_t,h isepisode ofthe i-th visitation of the state-action pair (s^t_h, a^t_h) during the stage k^t_h, and additionally by the convention 0/0 = 0.Let ξ^t_h = p_h [_h+1 - V^π^t_h+1](s^t_h,a^t_h) - [_h+1 - V^π^t_h+1](s^t_h+1) be a martingale-difference sequence, and ^t_h = ^t_h(s^t_h, a^t_h) {N^t_h > 0} then δ^t_h ≤ H {N^t_h = 0} + {N^t_h > 0}/N^t_h∑_i=1^N^t_hζ^ℓ^i_t,h_h+1- ζ^t_h+1 + δ^t_h+1 + ξ^t_h + ^t_h.and, as a result∑_t=1^T δ^t_h≤ H ∑_t=1^T {N^t_h = 0} + ∑_t=1^T { N^t_h > 0}/N^t_h∑_i=1^N^t_hζ^ℓ^i_t,h_h+1-∑_t=1^Tζ^t_h+1 + ∑_t=1^T δ^t_h+1 + ∑_t=1^T ξ^t_h + ∑_t=1^T ^t_h.Next we have to analyze the second term, following the approach by <cit.>,∑_t=1^T {N^t_h > 0}/N^t_h∑_i=1^N^t_hζ^ℓ^i_t,h_h+1 = ∑_q=1^T ∑_t=1^T {N^t_h > 0}/N^t_h∑_i=1^N^t_hζ^ℓ^i_t,h_h+1{ℓ^i_t,h = q}= ∑_q=1^T ζ^q_h+1·∑_t=1^T {k^t_h ≥ 0}/N^t_h∑_i=1^N^t_h{ℓ^i_t,h = q}. Notice that ∑_i=1^N^t_h{ℓ^i_t,h = q}≤ 1 since all visitations are increasing in i, and, moreover, it turns to equality if and only if (s^q_h ,a^q_h) = (s^t_h ,a^t_h) and this visitation happens in stage k^t_h, where k^t_h is equal to the stage of episode q with respect to (s^q_h, a^q_h, h). Since the sum is over all the next episodes with respect to stage of q, we have that the number of non-zero elements in the sum over t is bounded by (1+1/H) N^t_h. Thus∑_q=1^T ζ^q_h+1·∑_t=1^T {k^t_h ≥ 0}/N^t_h∑_i=1^N^t_h{ℓ^i_t,h = q}≤( 1 + 1/H)∑_q=1^T ζ^q_h+1.After a simple algebraic manipulations and using the fact that ζ^t_h ≤δ^t_h,∑_t=1^T δ^t_h≤ H ∑_t=1^T {N^t_h = 0} + ∑_t=1^T (1 + 1/H) ζ^t_h+1 - ∑_t=1^Tζ^t_h+1 + ∑_t=1^T δ^t_h+1 + ∑_t=1^T ξ^t_h + ∑_t=1^T ^t_h ≤H ∑_t=1^T {N^t_h = 0} + (1 + 1/H) ∑_t=1^T δ^t_h+1 + ∑_t=1^T ξ^t_h + ∑_t=1^T ^t_h.By rolling out the upper bound on regret (<ref>) and using inequality (1+1/H)^H-h≤ we have^T ≤ H ∑_t=1^T ∑_h=1^H {N^t_h = 0} + ∑_t=1^T ∑_h=1^H (1+1/H)^H-hξ^t_h + ∑_t=1^T ∑_h=1^H ^t_h. First, we notice that the event '(δ) defined in Lemma <ref>, holds with probability at least 1-δ by Lemma <ref> and Proposition <ref>. Thus, we may assume that '(δ) holds.We start from the decomposition given by Lemma <ref>^T ≤ H ∑_t=1^T ∑_h=1^H {k^t_h(s^t_h, a^t_h) = -1} + ∑_t=1^T ∑_h=1^H (1+1/H)^H-hξ^t_h + ∑_t=1^T ∑_h=1^H ^t_h. The first term is upper bounded by SAH^3, since there is no more than H visits of each state-action-step triple before the update for the first stage. The second term is bounded by (√(H^3 T)) by a definition of the event (δ) in Lemma <ref>. 
To upper bound the last term we have to analyze the following sum∑_t=1^T ∑_h=1^H { e_k^t_h(s^t_h,a^t_h) > 0 }/√(e_k^t_h(s^t_h, a^t_h))≤∑_(s,a,h) ∈×× [H]∑_k=0^k^T+1_h(s,a)e_k+1/√(e_k),wheree_k = ⌊(1+1/H)^k H ⌋⇒e_k+1/√(e_k)≤ 2 √(e_k),therefore by Cauchy inequality ∑_k=0^k^T+1_h(s,a)e_k+1/√(e_k)≤ 2 ∑_k=0^k^T+1_h(s,a) √(e_k)≤ 2 √(k^T+1_h(s,a))√(∑_k=0^k^T+1_h(s,a) e_k)≤ 2 √(log(T)/log(1+1/H))√(n^T+1_h(s,a)), where we used the definition of the previous stage k^T+1_h(s,a)N^T+1_h(s,a) ≥∑_k=0^k^T+1_h(s,a) e^k,thus by Cauchy inequality and inequality log(1+1/H) ≥ 1/(4H) for H ≥ 1∑_t=1^T ∑_h=1^H { e_k^t_h(s^t_h,a^t_h) > 0}/√(e_k^t_h(s^t_h, a^t_h)) ≤ 2√(H log(T))∑_(s,a,h) ∈×× [H]√(N^T+1_h(s,a) + 1)≤ 4√(SAH^2 log(T))√(∑_(s,a,h) (N^T+1_h(s,a) + 1))≤ 4 √(SAH^3T log(T)) + 4 SAH^2 log(T).Using this upper bound, we have∑_t=1^T ∑_h=1^H ^t_h = ( H ∑_t=1^T ∑_h=1^H { e_k^t_h(s^t_h,a^t_h) > 0 }/√(e_k^t_h(s^t_h, a^t_h))) = ( √(H^5 SA T) + SAH^3).Combining this upper bound with the previous ones, we conclude the statement.§ PROOFS FOR METRIC ALGORITHM§.§ Assumptions In this section we proof Lemma <ref> and Lemma <ref>.By the dual formula for 1-Wasserstein distance (see e.g. Section 6 of <cit.>) we have_1(p_h(s,a), p_h(s',a')) = sup_fis1-Lipchitz{ p_h f(s,a) - p_h f(s',a') }.By Assumption <ref> we havep_h f(s,a) - p_h f(s',a') = _ξ_h[ f(F_h(s,a,ξ_h)) - f(F_h(s',a',ξ_h)) ] ≤ L_F ρ((s,a),(s',a')). Let us proceed by a backward induction over h. For h=H+1 we have _H+1(s,a) = _H+1(s) = 0, therefore they are 0-Lipchitz. Next we assume that have for any h' > h the statement of Lemma <ref> holds. Then by Bellman equations|_h(s,a) - _h(s',a')|≤| r_h(s,a) + r_h(s',a') | + | p_h _h+1(s,a) - p_h _h+1(s',a') | .By Assumption <ref> we can represent the action of the transition kernel as followsp_h _h+1(s,a) - p_h _h+1(s',a') = _ξ_h[ _h+1(F_h(s,a, ξ_h)) - _h+1(F_h(s', a', ξ_h) ].Since by induction hypothesis _h+1 is ∑_h'=h+1^H L_F^h'-h L_r-Lipschitz and F_h(·, ξ_h) is L_F-Lipschitz, therefore|_h(s,a) - _h(s',a')| ≤( L_r+L_F ·∑_h'=h+1^H L_F^h'-h L_r) ρ((s,a), (s',a')) ≤( ∑_h'=h^H L_F^h'-h L_r ) ρ((s,a), (s',a'))To show that _h is also Lipchitz, we have that there is some action a^⋆ equal to π^⋆(s) or π^⋆(s'), such that|_h(s) - _h(s') |≤|_h(s,a^⋆) - _h(s', a^⋆) |≤ L_V,h·ρ((s,a^⋆),(s',a^⋆)) ≤ L_V,h·ρ_(s,s'),where in the end we used the sub-additivity assumption on metric over joint space (see Assumption <ref>). §.§ Algorithm Next we describe a simple non-adaptive version of our algorithm that works with metric spaces. We assume that for any ε > 0 we can compute a minimal ε-cover of state-action space _ε.[Remark that the greedy algorithm can easily generate ε-cover of size N_ε/2, that will not affect the asymptotic behavior of regret bounds, see <cit.>.]Next we will use the same notation but with state-action pairs replaces with balls from a fixed cover _ε. To unify the notation, we define ψ_ε×→_ε that maps any point (s,a) to any ball from ε-cover that contains it.For any t,h we define B^t_h = ψ_ε(s^t_h, a^t_h). Next, let n^t_h(B) be a number of visits of ball B before the episode t: n^t_h(B) = ∑_k=1^t-1{ B^k_h = B }.Let e_k = ⌊ (1 + 1/H)^k · H ⌋ be length of each stage for any k ≥ 0 and, by convention, e_-1 = 0.We will call that in the beginning of episode t a pair (B,h) is in k-th stage if n^t_h(B) ∈ [∑_i=0^k-1 e_i, ∑_i=0^k e_i ). Let ^t_h(B) be a number of visits of state-action pair during the current stage in the beginning of episode t. Formally, ^t_h(B) = n^t_h(B) - ∑_i=0^k-1 e_i, where k is an index of current stage. 
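To make the stage bookkeeping above concrete, here is a minimal Python sketch of the stage schedule e_k = ⌊(1+1/H)^k H⌋ and of the map from a visit count n^t_h(B) to its stage index and within-stage count; the function names are ours and are introduced only for illustration.

```python
import math

def stage_length(k: int, H: int) -> int:
    """e_k = floor((1 + 1/H)^k * H); by convention e_{-1} = 0."""
    return 0 if k < 0 else math.floor((1.0 + 1.0 / H) ** k * H)

def stage_of(n_visits: int, H: int):
    """Return (stage index k, within-stage count) for a pair visited n_visits times.

    The pair is in stage k when n_visits lies in [sum_{i<k} e_i, sum_{i<=k} e_i).
    """
    k, start = 0, 0
    while n_visits >= start + stage_length(k, H):
        start += stage_length(k, H)
        k += 1
    return k, n_visits - start

H = 10
print([stage_length(k, H) for k in range(6)])   # 10, 11, 12, 13, 14, 16
print(stage_of(0, H))    # still in the very first stage
print(stage_of(25, H))   # stage index and within-stage count after 25 visits
```

The geometric growth of e_k is exactly what produces the (1+1/H) factors and the log(T)/log(1+1/H) bound on the number of stages used in the regret analysis.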
Define κ > 0 be a posterior inflation coefficient, n_0 is a number of pseudo-transitions, and J as a number of temporary Q-functions. Let ^t,j_h be a j-th temporary Q-value and ^t_h be a policy Q-value at the beginning of episode t, defined over the ε-cover. We initialize them as follows^1_h(B) =H, ^1,j_h(s,a) =H.Additionally, we define to the value function as follows^t_h(s) = max_a ∈^t_h(ψ_ε(s,a)).Notice that we cannot precomute it as in the tabular setting, however, it is possible to use its values in lazy fashion.For each transition we preform the following update of temporary Q-values over balls B ∈_ε^t+1/2,j_h(B) =(1- w_j, ) ·^t,j_h(B) + w_j,[ r_h(s^t_h, a^t_h) + ^t_h+1(s^t_h+1)], B = B^t_h^t,j_h(B)otherwise,where=^t_h(B) is the number of visits of (B,h) in the beginning of episode t, and w_j, is a sequence of independent beta-distribution random variables w_j,∼Β(1/κ, ( + n_0) / κ).Next we define the stage update as follows^t+1_h(B)= max_j∈[J]^t+1/2,j_h(B)^t_h(B) = ⌊ (1 + 1/H)^k H ⌋ ^t_h(B)otherwise ^t+1,j_h(B)=Hn^t_h(B) ∈^t_h(B) = ⌊ (1 + 1/H)^k H ⌋ ^t+1/2,j_h(B)otherwise ^t+1_h(s)= min{(H-h), max_a ∈^t+1_h(ψ_ε(s,a)) }; π^t+1_h(s)∈_a ∈^t+1_h(ψ_ε(s,a)),where k is the current stage. A detailed description of the algorithm is presented in Algorithm <ref>. For episode t we will call k^t_h(B) the index of stage where ^t_h(B) were updated (and k^t_h(B) = -1 if there was no update). For all t we define τ^t_h(B) ≤ t as the episode when the stage update happens. In other words, for any t the following holds^t+1_h(B) = max_j∈[J]^τ^t_h(B)+1/2,j_h(B),where τ^t_h(B) = 0 and e_k = 0 if there was no updates. To simplify the notation we will omit dependence on (s,a,h) where it is deducible from the context.We notice that in this case we use e_k samples to compute ^τ^t_h(B)+1/2,j for k = k^t_h(s,a). For this k we define ℓ^i_k,h(s,a) as the time of i-th visit of state-action pair (s,a) during k-th stage. Then we have the following decomposition^τ^t+1/2,j_h(B) =∑_i=0^e_k W^i_j, e_k( r_h(s^ℓ^i_h,a^ℓ^i_h) +^ℓ^i_h+1(s^ℓ^i_h+1) ),where we drop dependence on k and (B,h) in ℓ^i to simplify notations, using the convention r_h(s^ℓ^0_h,a^ℓ^0_h) = , ^ℓ^0_h+1(s^ℓ^0_h+1) =(H-1) and the following aggregated weightsW^0_j, n = ∏_q=0^n-1 ( 1 - w_j,q),W^i_j,n = w_j,i-1·∏_q=i^n-1 (1 - w_j,q), i ≥ 1.§.§ ConcentrationLet (0,1) ×× (0, d_max) →_+ and β^B, β^, β (0,1) × (0, d_max) →_+ be some function defined later on in Lemma <ref>. We define the following favorable events ^⋆(δ, ε)≜{∀ t ∈, ∀ h ∈ [H], ∀ B∈_ε, k = k^t_h(B), (s,a) = (B): ( 1/e_k∑_i=1^e_kδ__h+1(F_h(s,a,ξ^ℓ^i_h+1)), p_h _h+1(s,a) ) ≤(δ,e_k,ε)/e_k} , ^B(δ, ε)≜{∀ t ∈ [T], ∀ h ∈ [H], ∀ B ∈_ε, ∀ j ∈ [J], k = k^t_h(B):| ∑_i=0^e_k( W^i_j, e_k, k - [W^i_j, e_k,k] ) ( r_h(s^ℓ^i_h, a^ℓ^i_h) +^ℓ^i_h+1(s^ℓ^i_h+1) ) | ≤ 60 ^2 √(^2 H^2 κβ^B(δ, ε)/e_k + n_0(k)) + 1200H κlog(e_k + n_0(k)) (β^B(δ, ε))^2/e_k + n_0(k)} , ^(δ, ε)≜{∀ t ∈ [T], ∀ h ∈ [H], ∀ B ∈_ε, k = k^t_h(B): |1/e_k∑_i=1^e_k_h+1(s^ℓ^i_k,h(B)_h+1)- p_h_h+1(s^ℓ^i_k,h(B)_h, a^ℓ^i_k,h(B)_h) | ≤√(2^2 H^2 β^(δ, ε)/e_k)} (δ)≜{∑_t=1^T ∑_h=1^H (1+1/H)^H-h| p_h[_h+1 - V^π_t_h+1](s^t_h, a^t_h) - [_h+1 - V^π_t_h+1](s^t_h+1)| ≤ 2 H√(2HT β(δ)). }.We also introduce the intersection of these events, (δ) ≜^⋆(δ) ∩^B(δ) ∩^(δ) ∩(δ). Weprove that for the right choice of the functions ,β^, β^, β, β^ the above events hold with high probability. 
For any δ∈ (0,1) and ε∈ (0, d_max) and for the following choices of functions β,(δ,n, ε)≜log(8H/δ) + log(N_ε) + 3log(π(2n+1)) , β^B(δ, ε)≜log(8H/δ) + log(N_ε) +log(TJ) , β^(δ, ε)≜log(8H/δ) + log(N_ε) +log(2T) , β(δ)≜log(16/δ),it holds that[^⋆(δ, ε)] ≥ 1-δ/8, [^B(δ, ε)]≥ 1-δ/8, [^(δ, ε)]≥ 1-δ/8, [(δ)]≥ 1-δ/8.In particular, [(δ)] ≥ 1-δ/2. Let us describe the changes from the similar statement in Lemma <ref>.Regarding event ^⋆(δ, ε), for any fixed ball B we have exactly the same structure of the problem thanks to Assumption <ref> and a sequence of i.i.d. random variables ξ^ℓ^i_h. Thus, Theorem <ref> combined with a union bound over B ∈_ε and H ∈ [H] concludes ^⋆(δ, ε)≥ 1 - δ/8.The proof for the event ^B(δ,ε) remains the almost the same, with two differences: the predictable weights slightly changed but the upper bound for them remain the same,and we havetake a union bound not over all state-action pairs (s,a) ∈× but all over balls B ∈_ε.To show that ^(δ, ε)≥ 1 - δ/8, let us fix B ∈_ε, h ∈ [H] and e_k ∈ [T]. Then we can define a filtration _t,h = σ{{ (s^ℓ_h', a^ℓ_h', π^ℓ), ℓ < t, h' ∈ [H] }∪{ (s^t_h', a^t_h', π^t), h' ≤ h }} and, since ℓ^i_k,h(B) are stopping times for all i = 1, …, e_k, we can define the stopped filtration _i = _ℓ^i, h. Then we notice that X_i = _h+1(s^ℓ^i_k,h(B)_h+1)- p_h_h+1(s^ℓ^i_k,h(B)_h, a^ℓ^i_k,h(B)_h) forms a martingale-difference sequence with respect to _i,h. Thus, by Azuma-Hoeffding inequality and a union bound we have ^(δ, ε)≥ 1 - δ/8.The proof of (δ)≥ 1-δ/8 remains exactly the same as in Lemma <ref>.§.§ Optimism In this section we prove that our estimate of Q-function ^ t_h(s,a) is optimistic that is the event_(ε) ≜{∀ t ∈ [T], h ∈ [H], (s,a) ∈×:^t_h(ψ_ε(s,a)) ≥_h(s,a) }.holds with high probability on the event ^⋆(δ, ε).Define constantsc_0 ≜8/π( 4/√(log(17/16)) + 8 + 49· 4√(6)/9)^2 + 1.and slightly another constantc̃_J ≜1/log( 4/3 + Φ(1)),where Φ(·) is a CDF of a normal distribution.Define a constant L = L_r + L_V(1+L_F). Assume that J = ⌈c̃_J · (log(2HT/δ)+ log(N_ε) ⌉, κ = 2β^⋆(δ, T, ε), = 2, and a prior count n_0(k) = _0 + κ + ε L/H-1· (e_k + _0 + κ) dependent on the stage k, where _0 = (c_0 + 1 + log_17/16(T)) ·κ .Then on event ^⋆(δ, ε) the following event_ ≜{∀ t ∈ [T]∀ h ∈ [H]∀ B ∈_ε: fork = k^t_h(B), (s,a) = (B):max_j ∈ [J]{W^0_j,e_k,k (H-1) + ∑_i=1^e_k W^i_j,e_k,k_h+1(F_h(s, a, ξ^ℓ^i_h)) }≥ p_h _h+1(s,a) + L ε}holds with probability at least 1-δ/2. We notice that the obtained result is connected to the theory of Dirichlet processes.First, let us define the Dirichlet process, following <cit.>. The stochastic process G, indexed by elements B of , is a Dirichlet Process with parameter ν (G∼DP(ν)) ifG(B_1),…,G(B_d)∼Dir(ν(B_1), …, ν(B_d)),for any measurable partition (B_1, …, B_d) of . Let _n = 1/n∑_i=1^n δ_Z_i be an empirical measure of an i.i.d. sample Z_1,…,Z_n ∼ P. Let ν be a finite (not necessarily probability) measure onand P_n ∼DP(ν + n _n). Then we have the following representation for the expectations of a function f → over a sampled measure P_n (see Theorem 14.37 of <cit.> with σ=0 for a proof)P_n f = V_n· Qf + (1 - V_n) ∑_i=1^n W_i f(Z_i),where V_n ∼Β(|ν|, n),Q ∼DP(ν), and a vector (W_1,…,W_n) follows uniform Dirichlet distribution (1,…,1). 
If we take ν = n_0 ·δ_Z_0 for some Z_0 ∈ such that f(Z_0) = (H-1)[We can augment the spacewith this additional point if needed], then by a stick-breaking process representation of the Dirichlet distribution we haveP_n f = _0 (H-1) + ∑_i=1^n _1 f(Z_i),(_0, …, _1) ∼(n_0, 1, …, 1).By taking an appropriateand f we have that Proposition <ref> could be interpret as a deriving a lower bound on the probability of [P_n f ≥ Pf + ε L |{ Z_i}_i=1^n ]. First for all, let us fix t ∈ [T], h ∈ [H] and B ∈_ε and, consequently, k = k^t_h(B). Also, let fix j ∈ [J]. To simplify the notation in the sequel, define X_0 =(H-1)and X_i = _h+1(F_h(s, a, ξ^ℓ^i_h)) for i > 0. Notice that X_i for i>0 is a sequence of i.i.d. random variables supported on [0, H-h-1].By Lemma <ref> we have (W^0_j,e_k,k,…, W^e_k_j,e_k,k) ∼(n_0(k)/κ, 1/κ, …, 1/κ). Then we use the aggregation property of Dirichlet distribution: there is a vector (W^-1_j, …, W^e_k_j) ∼( (n_0(k) - _0)/κ, _0 / κ, 1/κ, …, 1/κ) such that∑_i=0^e_k W^i_j,e_k,k X_i =W^-1_j X_0 + ∑_i=0^e_kW^i_jX_i.Next we are going to represent the Dirichlet random vector W by a stick breaking process (or, equivalently, represent via the generalized Dirichlet distribution)W^-1_j= ξ_jξ_j ∼Β( (n_0(k) - _0)/κ, (e_k + _0)/κ), (W^0_j, …, W^e_k_j)= (1 - ξ_j) · (W^0_j, …, W^e_k_j),W_j ∼(_0/κ, 1/κ, …, 1/κ),where ξ_j and W_j are independent. Therefore, we have the final decomposition∑_i=0^e_k W^i_j,e_k,k X_i - p_h _h+1(s,a) - ε L= ξ_j ((H-1) - p_h _h+1(s,a) ) - ε L_T_approx+ (1-ξ_j) ·( ∑_i=0^e_kW^i_j X_i - p_h _h+1(s,a) )_T_stoch.By independence of ξ_j and W_j we have[∑_i=0^e_k W^i_j,e_k,k X_i ≥ p_h _h+1(s,a) + ε L | {X_i}_i=1^e_k] ≥[T_approx≥ 0] ·[T_stoch≥ 0]. We split our problem to lower bound the two separate probabilities. Approximation error To deal with approximation error, we first of all notice that p_h _h+1(s,a) ≤ H-1, therefore we have[T_approx≥ 0] = [ξ_j ≥ε L/H-1].Next we assume that ε < (H-1)/L, since ξ_j ∼Β( (n_0(k) - _0)/κ, (e_k + _0)/κ), we may apply <cit.> [T_approx≥ 0] ≥Φ(-sign(p - μ) ·√(2 (p, μ))),where p = (n_0(k) - _0 - κ) / (e_k + _0 - κ) and μ = ε L / (H-1). Since n_0(k) = _0 + κ + ε L/H-1· (e_k + _0 + κ), we have [T_approx≥ 0] ≥ 1/2.Stochastic error Since X_0 = (H-1) is an upper bound on V-function, and we have that the weight of the first atom α_0 ≜_0 /κ - 1 = c_0 + log_17/16(T) - 1 for c_0 defined in (<ref>).Define a measure _e_k = _0 - κ/e_k + _0 - κδ_X_0 + ∑_i=1^e_k1/e_k + n_0 - 1δ_X_i. Since p_h _h+1(s,a) ≤ H-h-1, we can apply Lemma <ref> with a fixed ε = 1/2 conditioned on independent random variables X_i[ ∑_i=0^e_kW^i_j X_i≥ p_h _h+1(s,a) |{ X_i }_i=1^e_k] ≥1/2( 1 - Φ(√(2 (e_k + n_0 -κ) (_e_k, p_h _h+1(s,a)) /κ))),where Φ is a CDF of a normal distribution. By Lemma <ref> and the event ^⋆(δ, ε)(e_k + n_0 - κ) ( _e_k, p_h _h+1(s,a))≤ e_k ( _e_k, p_h _h+1(s,a)) ≤β^⋆(δ, T, ε),where _e_k = 1/e_k∑_i=1^e_kδ__h+1(F(s,a, ξ^ℓ^i_h+1)), and, as a corollary[ ∑_i=0^e_kW^i_j X_i ≥ p_h _h+1(s,a) |^⋆(δ, ε), { X_i }_i=1^e_k] ≥1/2( 1 - Φ( √(2β^⋆(δ, T, ε) /κ)) ).By taking κ = 2β^⋆(δ, T, ε) we have a constant probability of being optimistic for stochastic error[T_stoch≥ 0 |^⋆(δ, ε)] ≥1 - Φ(1)/2. Overall, combining two lower bound for approximation and stochastic terms, we have[∑_i=0^e_k W^i_j,e_k,k X_i ≥ p_h _h+1(s,a) + ε L | ^⋆(δ, ε) ] ≥1 - Φ(1)/4= γ. 
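Anticipating the union-bound step that follows, it may help to see numerically how this constant per-draw optimism probability translates into a logarithmic ensemble size. The sketch below (with purely illustrative values of N_ε, H, T and δ) computes γ = (1-Φ(1))/4, the constant c̃_J = 1/log(1/(1-γ)), and the resulting number J of temporary Q-functions; the helper name is ours.

```python
import math
from statistics import NormalDist

Phi = NormalDist().cdf
gamma = (1.0 - Phi(1.0)) / 4.0               # per-draw optimism probability derived above
c_J_tilde = 1.0 / math.log(1.0 / (1.0 - gamma))   # = 1 / log(4 / (3 + Phi(1)))

def ensemble_size(N_eps: float, H: int, T: int, delta: float) -> int:
    """J = ceil(c_J_tilde * (log(2*H*T/delta) + log(N_eps))) temporary Q-functions."""
    return math.ceil(c_J_tilde * (math.log(2 * H * T / delta) + math.log(N_eps)))

print(f"gamma ~ {gamma:.4f},  c_J_tilde ~ {c_J_tilde:.1f}")
print("J for N_eps=1000, H=30, T=10**5, delta=0.1:",
      ensemble_size(1000, 30, 10**5, 0.1))
```

Since γ is an absolute constant, J grows only logarithmically in N_ε, H, T and 1/δ, which is what makes boosting over the ensemble affordable.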
Next, using a choice J = ⌈ (log(2HT/δ) + log(N_ε)) / log(1/(1-γ)) ⌉ = ⌈c̃_J · ( log(2HT/δ) + log(N_ε)) ⌉ [max_j ∈ [J]{∑_i=0^e_k W^i_j,e_k,k X_i }≥ p_h _h+1(s,a) + ε L | ^⋆(δ, ε)] ≥ 1 - (1 - γ)^J≥ 1 - δ/2N_εHT.By a union bound we conclude the statement. Next we provide a connection between ^ and ^.It holds ^⊆^. We proceed by a backward induction over h. Base of induction h = H+1 is trivial. Fix state-action pair (s,a) and let us call (s',a') a center of the ball ψ_ε(s,a) that is the ball where (s,a) contains.Next by the update formula for ^t_h, and Bellman equations^t_h(ψ_ε(s,a)) - _h(s,a)= max_j ∈ [J]{∑_i=0^n W^i_j,n[r_h(s^ℓ^i_h, a^ℓ^i_h) - r_h(s',a')] + ∑_i=0^n W^i_j,n^ℓ^i_h+1(s^ℓ^i_h+1) - p_h _h+1(s',a') } + [_h(s,a) - _h(s',a')],where n = e_k^t_h(B) and we drop dependence on k,t,h,s,a in ℓ^i. By induction hypothesis we have ^ℓ^i_h+1(s') ≥^ℓ^i_h+1(ψ_ε(s', π^⋆(s'))) ≥_h+1(s', π^⋆(s')) = _h+1(s') for any i, thus combining it with Lipchitz continuity of reward function and , and the value of r_h(s^ℓ^0, a^ℓ^0) => r_h(s,a), ^t_h(ψ_ε(s,a)) - _h(s,a) ≥ max_j ∈ [J]{W^0_j,n (H-1) + ∑_i=1^n W^i_j,n_h+1(F_h(s^ℓ^i_h, a^ℓ^i_h, ξ^ℓ^i_h)) }- p_h _h+1(s',a') - (L_r + L_V) ε.Next we apply Lipschitz continuity of F_h and _h+1 and obtain ^t_h(ψ_ε(s,a)) - _h(s,a) ≥ max_j ∈ [J]{W^0_j,n (H-1) + ∑_i=1^n W^i_j,n_h+1(F_h(s', a', ξ^ℓ^i_h)) }- p_h _h+1(s',a') - (L_r + L_V(1 + L_F)) ε. By the definition of event ^ we conclude the statement.Define a constant L = L_r + L_V(1+L_F). Assume that J = ⌈c̃_J · (log(2HT/δ)+ log(N_ε) ⌉, κ = 2β^⋆(δ, T, ε), = 2, and a prior count n_0(k) = _0 + κ + ε L/H-1· (e_k + _0 + κ) dependent on the stage k, where _0 = (c_0 + 1 + log_17/16(2e_k)) ·κ, c_0 is defined in (<ref>), c̃_J is defined in (<ref>).Then ^|^⋆(δ, ε)≥ 1-δ/2.§.§ Regret BoundsAs in the tabular setting, we first connect our algorithm to the algorithm by <cit.>, using the following corollary. Define an event '(δ, ε) = (δ, ε) ∩^.Let us define the logarithmic term as followsβ^max(δ, ε) = max{κ, _0 / κ, β^B(δ, ε), β(δ, ε), β^(δ, ε) }that has dependence of order ( log(TH/δ) + log N_ε). Fix ε∈ (0, L_V/H) and assume conditions of Proposition <ref>. Let t ∈ [T], h∈[H], B ∈_ε. Define k = k^t_h(B) and let ℓ^1 < … < ℓ^e_k be a excursions of (B,h) till the end of the previous stage. Then on the event '(δ) the following bound holds for k ≥ 0 and any (s,a) ∈ B0≤^t_h(B) - _h(s,a) ≤1/e_k∑_i=1^e_k [^ℓ^i_h+1(s^ℓ^i_h+1) - _h+1(s^ℓ^i_h+1) ]+ ^t_h(k),where^t_h(k) = 121^2 ·√(H^2 (β^max(δ, ε))^2/e_k) + 2401·H (β^max(δ, ε))^4/e_k+ 3(L_r + (1 + L_F) L_V) ε.The lower bound follows from the definition of the event ^. 
For the upper bound we first apply the decomposition for ^t_h(s,a) and the definition of event ^B(δ, ε) from Lemma <ref>^t_h(B)=max_j ∈ [J]{∑_i=0^e_k W^i_j,n(r_h(s^ℓ^i_h,a^ℓ^i_h) +^ℓ^i_h+1(s^ℓ^i_h+1)) }≤1/e_k + n_0(k)∑_i=1^e_k( r_h(s^ℓ^i_h, a^ℓ^i_h) + ^ℓ^i_h+1(s^ℓ^i_h+1) ) + n_0(k) · 2H/e_k + n_0(k)+ 120 ^2 √(H^2 κβ^B(δ, ε)/e_k + n_0(k)) + 2400H κlog(n + n_0(k)) (β^B(δ, ε))^2/e_k + n_0(k).Additionally, by Bellman equations_h(s,a)= 1/e_k∑_i=1^e_k_h(s^ℓ^i_h,a^ℓ^i_h) + 1/e_k∑_i=1^e_k(_h(s,a) - _h(s^ℓ^i_h,a^ℓ^i_h)) ≥1/e_k∑_i=1^e_k( r_h(s^ℓ^i_h,a^ℓ^i_h) + p_h _h+1(s^ℓ^i_h,a^ℓ^i_h) ) - 2 ε L_V.Combining and using the fact that n_0(k) ≤L ε/H-1· (e_k + n_0(k)) + _0 + κ for L = L_r + (1+L_F) L_V^t_h(s,a) - _h(s,a)≤1/e_k∑_i=1^e_k[ ^ℓ^i_h+1 - _h+1](s^ℓ^i_h+1) +1/e_k∑_i=1^e_k[ _h+1 (s^ℓ^i_h+1)- p_h _h+1(s^ℓ^i_h,a^ℓ^i_h)] + 120^2 ·√(H^2 (β^max(δ, ε))^2/e_k) + (2400 + 2)H (β^max(δ, ε))^4/e_k+ 3L ε.Finally, the applications of event ^(δ, ε) concludes the statement.Let us define δ^t_h = ^t_h(s^t_h) - V^π^t_h(s^t_h) and ζ^t_h = ^t_h(s^t_h) - _h(s^t_h). Assume conditions of Proposition <ref>. Then on event '(δ, ε) = (δ, ε) ∩^, where (δ, ε) is defined in Lemma <ref>, the following upper bound on regret holds^T ≤ 2H ∑_t=1^T ∑_h=1^H {N^t_h = 0} + ∑_t=1^t ∑_h=1^H (1+1/H)^H-hξ^t_h + ∑_t=1^T ∑_h=1^H ^t_h.where ξ^t_h = p_h [_h+1 - V^π^t_h+1](s^t_h,a^t_h) - [_h+1 - V^π^t_h+1](s^t_h+1) and ^t_h = ^t_h(k^t_h(s^t_h, a^t_h)) ·{k^t_h(s^t_h, a^t_h) ≥ 0} for ^t_h defined in Corollary <ref>. As in the tabular setting, we notice that on the event ^ we can upper bound the regret in terms of δ^t_1.^T ≤∑_t=1^T δ^t_1.Next we analyze δ^t_h. Since a^t_h = _a∈^t_h(ψ_ε(s^t_h, a)), we can use Corollary <ref> and Bellman equations in the following wayδ^t_h= ^t_h(s^t_h) - V^π^t_h(s^t_h) =^t_h(B^t_h) - Q^π^t_h(s^t_h, a^t_h) = ^t_h(B^t_h) - _h(s^t_h, a^t_h) + _h(s^t_h, a^t_h) - Q^π^t_h(s^t_h, a^t_h) ≤ H {N^t_h = 0} + { N^t_h > 0}( 1/N^t_h∑_i=1^N^t_hζ^ℓ^i_t,h_h+1 + ^t_h(k^t_h) + p_h [_h+1 - V^π^t_h+1](s^t_h,a^t_h) ).where k^t_h = k^t_h(B^t_h), N^t_h = e_k^t_h, ℓ^i_t,h is an i-th visitation of the ball B^t_h during an stage k^t_h, and additionally by a convention 0/0 = 0.Define ξ^t_h = p_h [_h+1 - V^π^t_h+1](s^t_h,a^t_h) - [_h+1 - V^π^t_h+1](s^t_h+1) a martingale-difference sequence, and ^t_h = ^t_h(k^t_h) {N^t_h > 0} then δ^t_h ≤ H {N^t_h = 0} + {N^t_h > 0}/N^t_h∑_i=1^N^t_hζ^ℓ^i_t,h_h+1- ζ^t_h+1 + δ^t_h+1 + ξ^t_h + ^t_h.and, as a result∑_t=1^T δ^t_h≤ H ∑_t=1^T {N^t_h = 0} + ∑_t=1^T { N^t_h > 0}/N^t_h∑_i=1^N^t_hζ^ℓ^i_t,h_h+1-∑_t=1^Tζ^t_h+1 + ∑_t=1^T δ^t_h+1 + ∑_t=1^T ξ^t_h + ∑_t=1^T ^t_h. For the second term we may repeat arguments as in the proof of Lemma <ref> and obtain∑_q=1^T ζ^q_h+1·∑_t=1^T {k^t_h ≥ 0}/N^t_h∑_i=1^N^t_h{ℓ^i_t,h = q}≤( 1 + 1/H)∑_q=1^T ζ^q_h+1.After a simple algebraic manipulations and using the fact that ζ^t_h ≤δ^t_h∑_t=1^T δ^t_h≤ H ∑_t=1^T {N^t_h = 0} + ∑_t=1^T (1 + 1/H) ζ^t_h+1 - ∑_t=1^Tζ^t_h+1 + ∑_t=1^T δ^t_h+1 + ∑_t=1^T ξ^t_h + ∑_t=1^T ^t_h ≤H ∑_t=1^T {N^t_h = 0} + (1 + 1/H) ∑_t=1^T δ^t_h+1 + ∑_t=1^T ξ^t_h + ∑_t=1^T ^t_h.By rolling out the upper bound on regret (<ref>) we have^T ≤ 2 H ∑_t=1^T ∑_h=1^H {N^t_h = 0} + ∑_t=1^t ∑_h=1^H (1+1/H)^H-hξ^t_h + ∑_t=1^T ∑_h=1^H ^t_h.First, we notice that the event '(δ, ε) defined in Lemma <ref>, holds with probability at least 1-δ by Lemma <ref> and Proposition <ref>. Thus, we may assume that '(δ, ε) holds for ε > 0 that we will specify later.By Lemma <ref>^T ≤ 2 H ∑_t=1^T ∑_h=1^H {k^t_h = -1} + ∑_t=1^t ∑_h=1^H (1+1/H)^H-hξ^t_h + ∑_t=1^T ∑_h=1^H ^t_h. 
The first term is upper bounded by 2 H^3 · N_ε, since there is no more than H visits of each ball in ε-net before the update for the first stage. The second term is bounded by (√(H^3 T β^max(δ, ε))) by a definition of the event (δ) in Lemma <ref>. To analyze the last term, consider the following sum∑_t=1^T ∑_h=1^H { e_k^t_h(B^t_h) > 0 }/√(e_k^t_h(B^t_h))≤∑_(B,h) ∈_ε× [H]∑_k=0^k^T_h(B)e_k+1/√(e_k),wheree_k = ⌊(1+1/H)^k H ⌋⇒e_k+1/√(e_k)≤ 2 √(H)(1 + 1/H)^k/2,therefore∑_h=0^k^T_h(B)e_k+1/√(e_k)≤ 4√(H)(1+1/H)^(k^T_h(B)+1)/2/√(1+1/H)-1 = 4H √(e^k^T_h(B)+1).Notice that N^T+1_h(B) ≥∑_k=0^k^T_h(B) e^k = H (e^k^T_h(B)+1 - 1) ⇒e^k^T_h(B)+1≤N^T+1_h(B) + 1/Hthus from the Cauchy-Schwarz inequality∑_t=1^T ∑_h=1^H { e_k^t_h(B^t_h) > 0}/√(e_k^t_h(B^t_h)) ≤ 4√(H)∑_(B,h) ∈_ε× [H]√(N^T+1_h(B) + 1)≤ 4√(SAH^2)√(∑_(B,h) (N^T+1_h(B) + 1))≤ 4 √(H^3T · N_ε) + 4 N_εH^2.By the similar arguments we have∑_t=1^T ∑_h=1^H { e_k^t_h(B^t_h) > 0 }/e_k^t_h(B^t_h)≤( H N_εlog(T)). Using this upper bound, we have for L = L_r + (1+L_F)L_V∑_t=1^T ∑_h=1^H ^t_h= ( H β^max(δ, ε) ∑_t=1^T ∑_h=1^H { e_k^t_h(s^t_h,a^t_h) > 0 }/√(e_k^t_h(s^t_h, a^t_h)))+ (H (β^max(δ, ε))^4 ∑_t=1^T ∑_h=1^H { e_k^t_h(s^t_h,a^t_h) > 0 }/√(e_k^t_h(s^t_h, a^t_h))) + ( LTH ε) ≤( √(H^5 T N_ε· (β^max(δ, ε))^2) + H^3 N_ε (β^max(δ, ε))^4 + LTHε).Overall, for any fixed ε > 0 we have^T ≤( √(H^5 T N_ε· (β^max(δ, ε))^2) + H^3 N_ε (β^max(δ, ε))^4 + LTHε + √(H^3 T)). Next we finally use that × have covering dimension d_c that meansN_ε≤ C_N ·ε^-d_c, thus our regret bound transforms as follows^T≤( √(H^5 T C_N ε^-d_c· (log(TC_NH/δ)+ d_c log(1/ε) )^2) + H^3 C_N ε^-d_c (log(TC_NH/δ) + d_c log(1/ε) )^4 + LTHε).By taking ε = T^-1/(d_c+2) we conclude the statement§ ADAPTIVE RANDQLIn this section we describe how to improve the dependence in our algorithm from covering dimension to zooming dimension, and describe all required notation. §.§ Additional Notation In this section we introduce an additional notation that is needed for introducing an adaptive version of algorithm for metric spaces. Hierarchical partitionNext, we define all required notation to describe an adaptive partition, as <cit.>. Finally, we define the following general framework of hierarchical partition. Instead of balls, we will use a more general notion of regions that will induce a better structure from the computational point of view. We recall for any compact set A ⊆× we call (A) = max_x,y ∈ Aρ(x,y). A hierarchical partition of × of a depth d > 0 is a collection of regions _d and their centers such that * Each region B ∈_dis of the form (B) ×(B), where (B) ⊆, (B) ⊆;* _d is a cover of ×: ⋃_B ∈_d B = ×;* For every B ∈_d, we have (B) ≤ d_max· 2^-d;* Let B_1, B_2 ∈_d. If B_1 ≠ B_2 then ρ((B_1), (B_2)) ≥ d_max· 2^-d;* For any B ∈_d, there exists a unique A ∈_d-1 (called the parent of B) such that B ⊆ A.and, for d=0 we define it as _0 = {×}.We call the tree generated by the structure of = {_d}_d ≥ 0 a tree of this hierarchical partition. The main example of this partition is the dyadic partition of × in the case of = [0,1]^d_,= [0,1]^d_ and the metric induced by the infinity norm ρ((s,a), (s',a')) = max{s-s'_∞, a-a'_∞}. Forexamples we refer to <cit.>.§.§ Algorithm In this section we describe two algorithms: which is an adaptive metric counterpart of , and which is an adaptive metric counterpart of . 
First, we start from the notation and algorithmic parts that will be common for both algorithms.Algorithms maintain an adaptive partition ^t_h of ×, that is a sub-tree of an (infinite) tree of the hierarchical partition = {_d }_d ≥ 0.We initialize ^1_h = {_0 }, and then we refine the tree ^t_h be adding new nodes that corresponding to nodes of . The leaf nodes of ^t_h represent the active balls, and for B ∈^t_h the set of its inactive parent balls is defined as { B' ∈^t_h | B ⊂ B' }. For any B ∈^t_h we define d(B) as a depth of B in the tree under a convention d(×) = 0.Additionally, we need to define so-called selection rule and splitting rule. For any state s ∈ we define the set of all relevant balls as ^t_h(s) = { active b ∈^t_h | (s,a) ∈ B for some a ∈}. Then for the current state s^t_h we define the current ball as B^t_h = _B ∈^t_h(s^t_h)^t_h(B) and the corresponding action as a^t_h. To define the splitting rule we maintain the counters n^t_h(B) for all B ∈^t_h as a number of visits of a node B and all its parent nodes. Then we will perform splitting of the current ball B^t_h if √( d^2_max / n^t_h(B^t_h))≤(B^t_h). During splitting, we extend ^t+1_h by its child nodes in the hierarchical partition tree . For more details we refer to <cit.>, up to small changes in notation. In particular, their constant C̃ is equal to d_max in our setting to make the construction exactly the same for both and algorithms.This algorithm is an adaptive metric version of algorithm.We recall that for B ∈^t_h we define n^t_h(B) = ∑_i=1^t-1{ (B^i_h)is a parent ofB } is the number of visits of the ball B and its parent balls at step h before episode t. We start by initializing the ensemble of Q-values, the policy Q-values, and values to an optimistic value _h^t,j(B) = _h^1(B) = ^1_h(B) = r_0 H for all (j,h)∈[J]×[H] and the unique ball in the partition B = × and r_0>0 some pseudo-rewards.At episode t we update the ensemble of Q-values as follows, denoting by n=n^t_h(B) the count, w_j,n∼Β(H, n) the independent learning rates, ^t+1,j_h(B) =(1- w_j,n) ^t,j_h(B) + w_j,n_h^t,j(s^t_h, a^t_h), B = B^t_h^t,j_h(B)otherwise,where we defined the target _h^t,j(s^t_h, a^t_h) as a mixture between the usual target and some prior target with mixture coefficient _n,j∼Β(n, n_0) and n_0 the number of prior samples,_h^t,j(s^t_h, a^t_h) = _j,n [r_h(s^t_h, a^t_h) + ^t_h+1(s^t_h+1)] + (1-_j,n) r_0 H .For a discussion on prior re-injection we refer to Appendix <ref>. The value function is computed on-flight by the rule ^t_h(s) = max_B ∈^t_h^t_h(B).The policy Q-values are obtained by taking the maximum among the ensemble of Q-values_h^t+1(B) = max_j∈[J]_h^t+1,j(B) .The policy is then greedy with respect to the policy Q-values and selection rule(s,π_h^t+1(s)) = (B), where B =_B ∈^t+1_h_h^t+1(B). After the update of Q-values, algorithm verifies the splitting rule. If the splitting rule is triggered, then all new balls are defined by counter and Q-values of its parent. We notice that all Q-values could be efficiently computed on the nodes of the adaptive partition. The complete and detailed description is presented in Algorithm <ref>. The notation for this algorithm is very close to and we describe only differences between them. The main difference is a way to compute value ^t_h(s) = max_B ∈^t_h(s)^t_h(B) and policy (s,π^t_h(s)) = (B) for B = _B ∈^t_h(s)^t_h(B). Additionally, all counters including temporary will move to the child nodes after splitting, as it performed in . 
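To make the selection and splitting rules concrete, below is a minimal Python sketch of an adaptive dyadic partition of the square [0,1]×[0,1] under the sup metric (so that d_max = 1), which is an illustrative stand-in for the abstract hierarchical partition; the class and function names are ours, and Q-value updates are omitted, so the sketch only shows how balls are selected, counted and refined.

```python
import itertools, random

D_MAX = 1.0  # diameter of the joint space [0,1] x [0,1] under the sup metric

class Ball:
    """A dyadic region of the state-action square (a node of the partition tree)."""

    def __init__(self, center, depth, n=0, q=0.0):
        self.center, self.depth = center, depth
        self.n, self.q = n, q            # visits (ball + its parents) and Q-value
        self.children = []

    @property
    def diam(self):
        return D_MAX * 2.0 ** (-self.depth)

    def active_balls(self):
        if not self.children:
            return [self]
        return [b for child in self.children for b in child.active_balls()]

    def split(self):
        # refine into 2^2 dyadic children; children inherit the counter and the Q-value
        quarter = self.diam / 4.0
        for dx, da in itertools.product((-quarter, quarter), repeat=2):
            self.children.append(
                Ball((self.center[0] + dx, self.center[1] + da),
                     self.depth + 1, n=self.n, q=self.q))

def relevant(root, s):
    # active balls B such that (s, a) lies in B for some action a
    return [b for b in root.active_balls() if abs(s - b.center[0]) <= b.diam / 2.0]

def select_ball(root, s):
    # selection rule: among the relevant active balls, pick the largest Q-value
    return max(relevant(root, s), key=lambda b: b.q)

def visit(ball):
    # count the visit and apply the splitting rule sqrt(d_max^2 / n) <= diam(B)
    ball.n += 1
    if not ball.children and (D_MAX ** 2 / ball.n) ** 0.5 <= ball.diam:
        ball.split()

random.seed(0)
root = Ball(center=(0.5, 0.5), depth=0)
for _ in range(200):                      # Q-value updates are left out of this sketch
    visit(select_ball(root, random.random()))
print("active balls after 200 visits:", len(root.active_balls()))
```

Because children inherit the parent's counter, a ball of diameter d_max·2^{-d} is split only after roughly (d_max/diam)^2 cumulative visits, in line with the relation between counts and diameters used in the analysis.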
The detailed description is presented in Algorithm <ref>.§.§ Regret Bound In this section we state the regret bounds for and derive a proof. The given proof shares a lot of similarities with the proof of in the first half and to the proof of by <cit.> in the second half. We fix δ∈(0,1),and the number of posterior samples J ≜⌈c̃_J · ( log(2C_NHT/δ) + d_c log_2(8T/d_max)) ⌉,where c̃_J = 1/log(4/(3 + Φ(1))) and Φ(·) is thecumulative distribution function (CDF) of a normal distribution.Additionally we selectn_0(k) = ⌈_0 + κ + L · d_max/H-1·e_k + _0 + κ/√(H e_k - k - H^2)⌉, _0 = (c_0 + 1 + log_17/16(T)) ·κwherec_0 is an absolute constant defined in (<ref>) (see Appendix <ref>), κ is the posterior inflation coefficient and L = L_r + (1+L_F)L_V is aconstant. Next we restate the regret bound result for algorithm. [Restatement of Theorem <ref>] Consider a parameter δ∈ (0,1). Let κ≜ 2(log(8HC_N/δ) + d_c log_2(8T/d_max) + 3log(π(2T+1))), ≜ 2. Then it holds for , with probability at least 1-δ, ^T = ( L H^3/2∑_h=1^HT^d_z,h+1/d_z,h+2),where d_z,h is the step-h zooming dimension and we ignore all multiplicative factors in the covering dimension d_c. We divide the proof to four main parts, a little bit different proof of and since we also need to apply clipping techniques. Concentration eventsWe can define (almost) the same set of events as in Appendix <ref>, where union bound over balls is taken over all the hierarchical partition tree up to depth D that we define as _D.^⋆(δ)≜{∀ t ∈, ∀ h ∈ [H], ∀ B∈_D, k = k^t_h(B), (s,a) = (B): ( 1/e_k∑_i=1^e_kδ__h+1(F_h(s,a,ξ^ℓ^i_h+1)), p_h _h+1(s,a) ) ≤(δ,e_k,ε)/e_k} , ^B(δ, T)≜{∀ t ∈ [T], ∀ h ∈ [H], ∀ B ∈_D, ∀ j ∈ [J], k = k^t_h(B):| ∑_i=0^e_k( W^i_j, e_k, k - [W^i_j, e_k,k] ) ( r_h(s^ℓ^i_h, a^ℓ^i_h) +^ℓ^i_h+1(s^ℓ^i_h+1) ) | ≤ 60 ^2 √(^2 H^2 κβ^B(δ, ε)/e_k + n_0(k)) + 1200H κlog(e_k + n_0(k)) (β^B(δ, ε))^2/e_k + n_0(k)} , ^(δ, T)≜{∀ t ∈ [T], ∀ h ∈ [H], ∀ B ∈_D, k = k^t_h(B): |1/e_k∑_i=1^e_k_h+1(s^ℓ^i_k,h(B)_h+1)- p_h_h+1(s^ℓ^i_k,h(B)_h, a^ℓ^i_k,h(B)_h) | ≤√(2^2 H^2 β^(δ, ε)/e_k)}, (δ)≜{∑_t=1^T ∑_h=1^H (1+3/H)^H-h| p_h[_h+1 - V^π_t_h+1](s^t_h, a^t_h) - [_h+1 - V^π_t_h+1](s^t_h+1)| ≤ 2^3H√(2HT β(δ)). To apply the union bound argument, we have to bound the size of _D. First, we notice that relation between centers of balls in each layer _d implies that there at least |_d | non-intersecting balls of radius d_max· 2^-d-2. Thus, the size of this sub-tree could be bounded as|_D| ≤∑_d=0^D N_d_max 2^-d-2≤ C_N ∑_d=0^D ( 2^d+2/d_max)^d_c≤ (8/d_max)^d_c C_N · 2^d_c · D. using the relation between covering and packing numbers, see e.g. Lemma 4.2.8 by <cit.>. The only undefined quantity here is D, that can be upper-bounded given budget T. To do it, we apply Lemma B.2 by <cit.> for any B ∈^t_h( d_max/2·(B))^2 ≤ n^t_h(B) ≤( d_max/(B))^2.Our goal is to find a value D such that ^T+1_h ⊆_D for any MDPs and correct interactions. To do it, we notice that it is equivalent to show that (B) ≥ d_max 2^-D, that could be guaranteed since(B) ≥d_max/2 √(n^T+1_h(B))≥d_max/2T,which implies that D = 1+log_2(T) is enough. 
Finally, since for the value of interestlog | _D | ≤ d_c log_2(T) +log C_N + d_c log(8/d_max),we can define the β-functions as follows follows (δ)≜log(8C_N H/δ) + d_c log_2(8T/d_max)+ 3log(π(2n+1)) , β^B(δ, T)≜log(8C_N H/δ) + d_c log_2(8T/d_max)+log(TJ) , β^(δ, T)≜log(8C_N H/δ) + d_c log_2(8T/d_max)+log(2T) , β(δ)≜log(16C_N H/δ) + d_c log_2(8T/d_max),and following line-by-line the proof of Lemma <ref>, for an event (δ) = ^⋆(δ)∩^B(δ, T) ∩^(δ, T) ∩(δ) we have (δ)≥ 1 - δ/2.OptimismNext, we state the required analog of Proposition <ref>. We can show that with probability at least 1-δ/2on the event ^⋆(δ) the following event_≜{∀ t ∈ [T]∀ h ∈ [H]∀ B ∈_D: fork = k^t_h(B), (s,a) = (B):max_j ∈ [J]{W^0_j,e_k,k (H-1) + ∑_i=1^e_k W^i_j,e_k,k_h+1(F_h(s, a, ξ^ℓ^i_h)) }≥ p_h _h+1(s,a) + L ·(B^t_h) }under the choice J = ⌈c̃_J · (log(2HT/δ)+ log(|_D |)) ⌉, κ = 2β^⋆(δ, T), = 2, and a prior count n_0(k) = _0 + κ + L · d_max/H-1·e_k + _0 + κ/√(H e_k - k - H^2)dependent on the stage k, where _0 = (c_0 + 1 + log_17/16(T)) ·κ, L = L_r + L_V(1+L_F). In particular, the proof exactly the same as the proof of Proposition <ref> for ε dependent on k.At the same time, it is possible to show that _ implies_≜{∀ t ∈ [T], h ∈ [H], ∀ B ∈^t_h, ∀ (s,a) ∈ B:^t_h(B) ≥_h(s,a) }. Indeed, in the proof of Proposition <ref> we actively uses the bound ρ((s^ℓ^i_h, a^ℓ^i_h), (s,a)) ≤ε. In the adaptive setting, we have to, at first, use an upper bound ρ((s^ℓ^i_h, a^ℓ^i_h), (s,a)) ≤(B^ℓ^i_h) by a construction B ⊆ B^ℓ^i_h, and then apply Lemma B.2 by <cit.> by defining an upper bound (B^ℓ^i_h) ≤d_max/√(n^ℓ^i_h_h(B^ℓ^i_h))≤d_max/√(∑_i=0^k-1 e_i)≤d_max/√(H∑_i=0^k-1 (1+1/H)^i - k)≤d_max/√( H e_k - k - H^2)for k = k^t_h(B) for a particular ball B ∈^t_h in the case H e_k - k - H^2 ≥ 0.By combining event _ and the event ^B(δ) we can prove the same statement as Corollary <ref>.Let t ∈ [T], h∈[H], B ∈^t_h. Define k = k^t_h(B) and let ℓ^1 < … < ℓ^e_k be a excursions of (B,h) till the end of the previous stage. Then on the event '(δ) = (δ) ∩_ the following bound holds for k ≥ 0 and for any (s,a) ∈ B0≤^t_h(B) - _h(s,a) ≤ H {He_k/2 ≤ k + H^2 } + 1/e_k∑_i=1^e_k [^ℓ^i_h+1(s^ℓ^i_h+1) - _h+1(s^ℓ^i_h+1) ]+ ^t_h,where^t_h = 121^2 ·√(H^2 (β^max(δ, T))^2/e_k) + 2401·H (β^max(δ, T))^4/e_k+ 5 L · d_max/√(He_k)where k= k^t_h(B^t_h) and β^max(δ,T) = max{(δ, T), β^B(δ), β^(δ), β(δ)}. Also we can express this bound in terms of a diameter of B^t_h as follows(B^t_h)≥d_max/2√(n^t_h(B^t_h))≥d_max/2√(∑_i=0^k e_i)≥d_max/2√(H ∑_i=0^k (1+1/H)^i)≥d_max/2√(H)≥d_max/2√(H^2 (1+1/H)^k+1)≥d_max/2√(2He_k),thus1/√(H e_k)≤3(B^t_h)/d_max,and we have^t_h≤ 7566 ^2 H^3/2 (β^max(δ,T))^4 (B^t_h) / d_max + 15 L (B^t_h) ≤ρ(H, δ, L) ·(B^t_h),where we define ρ(H,δ,L) ≜ 7566 ^2 H^3/2 (β^max(δ,T))^4 / d_max + 15L.As a additional corollary, we have for all t∈ [T], h ∈ [H]^t_h(s) = max_B ∈^t_h(s)^t_h(B) = ^t_h(B^⋆) ≥_h(s, (s)) = _h(s),where B^⋆ is a ball that contains a pair (s, (s)). This upper and lower bound have the similar structure as Lemma D.2 by <cit.> and the rest of the proof directly follows <cit.>. Clipping techniquesNext we introduce the required clipping techniques developed by <cit.>. Definition <ref> introduces the quantity _h(s,a) = _h(s) - _h(s,a), and for any compact set B⊆× we define _h(B) = min_(s,a) ∈ B_h(s,a). Finally, we define clipping operator for any μ, ν∈(μ | ν) = μ{μ≤ν}.In particular, this operator satisfies the following important propertySuppose that _h(B) ≤ψ≤μ_1 + μ_2 for any ψ, μ_1, μ_2. 
Then ψ≤[μ_1 | _h(B)/H+1] + (1 + 1/H) μ_2Now we apply this lemma to our update rules, producing a result similar to Lemma E.3 of <cit.>. We notice that_h(B^t_h)≤_h(s^t_h,a^t_h) = _h(s^t_h) - _h(s^t_h,a^t_h) ≤^t_h(s^t_h) - _h(s^t_h,a^t_h) = ^t_h(B^t_h) - _h(s^t_h, a^t_h).Thus, denoting ψ =^t_h(B^t_h) - _h(s^t_h, a^t_h) and, by (<ref>),μ_1 = H {He_k^t_h/2 > k^t_h + H^2 }+ ^t_h, μ_2 = 1/e_k∑_i=1^e_k [^ℓ^i_h+1(s^ℓ^i_h+1) - _h+1(s^ℓ^i_h+1) ]we apply Lemma <ref> and obtain^t_h(s^t_h) - _h(s^t_h,a^t_h)≤[ H {He_k^t_h/2 ≤ k^t_h + H^2 }+ ^t_h | _h(B^t_h)/H+1] + ( 1 + 1/H)1/e_k∑_i=1^e_k [^ℓ^i_h+1(s^ℓ^i_h+1) - _h+1(s^ℓ^i_h+1) ]for k^t_h = k^t_h(B^t_h) and ^t_h defined in (<ref>). Regret decomposition The rest of the analysis we preform conditionally on event '(δ) = (δ) ∩_ that holds with probability at least 1-δ. By defining δ^t_h = ^t_h(s^t_h) - V^π^t(s^t_h) and ζ^t_h = ^t_h(s^t_h) - _h(s^t_h) we have^T = ∑_t=1^T _1(s^t_1) - V^π^t_1(s^t_1) ≤∑_t=1^T δ^t_1,and, at the same time, by Bellman equationsδ^t_h= ^t_h(s^t_h) - Q^π^t_h(s^t_h, a^t_h) =^t_h(s^t_h) - _h(s^t_h, a^t_h) + _h(s^t_h, a^t_h) - Q^π^t(s^t_h, a^t_h)=^t_h(s^t_h) - _h(s^t_h, a^t_h) + _h+1(s^t_h+1) - V^π^t_h(s^t_h+1) + ξ^t_h = ^t_h(s^t_h) - _h(s^t_h, a^t_h) + δ^t_h+1 - ζ^t_h+1 + ξ^t_h,where ξ^t_h = p_h [_h+1 - V^π^t_h+1](s^t_h,a^t_h) - [_h+1 - V^π^t_h+1](s^t_h+1) is a martingale-difference sequence.By (<ref>) we have∑_t=1^T δ^t_h= ∑_t=1^T ^t_h(s^t_h) - _h(s^t_h, a^t_h) + δ^t_h+1 - ζ^t_h+1 + ξ^t_h ≤(1 + 1/H) ∑_t=1^T 1/e_k^t_h∑_i=1^e_k^t_hζ^ℓ^i_k^t_h_h+1 + ∑_t=1^T δ^t_h+1 - ∑_t=1^T ζ^t_h+1 + ∑_t=1^T ξ^t_h + ∑_t=1^T [ H {He_k^t_h/2 > k^t_h + H^2 }+ ^t_h(k^t_h) | _h(B^t_h)/H+1]where k^t_h = k^t_h(B^t_h). Repeating argument of Lemma <ref> and <cit.>(1 + 1/H) ∑_t=1^T 1/e_k^t_h∑_i=1^e_k^t_hζ^ℓ^i_k^t_h_h+1≤(1 + 1/H)^2 ∑_t=1^T ζ^t_h+1≤(1 + 3/H) ∑_t=1^T ζ^t_h+1.Using an upper bound ζ^t_h ≤δ^t_h we have for any h ≥ 1∑_t=1^T δ^t_h≤(1 + 3/H) ∑_t=1^T δ^t_h+1 + ∑_t=1^T ξ^t_h + ∑_t=1^T [ H {He_k^t_h/2 ≤ k^t_h + H^2 }+ ^t_h | _h(B^t_h)/H+1],and, rolling out starting with h=1 we have the following regret decomposition^T≤^3 ∑_t=1^T ∑_h=1^H H {He_k^t_h/2 ≤ k^t_h + H^2 } =+ ^3 ∑_t=1^T ∑_h=1^H [ ^t_h | _h(B^t_h)/H+1] =+ ∑_t=1^T ∑_h=1^H (1 + 3/H)^H-hξ^t_h.=TermFor this term we notice that for any fixed h the following eventH e_k^t_h≤ 2(k^t_h + H^2)H ⌊ H (1 + 1/H )^k^t_h⌋≤ 2(k^t_h + H^2),that is guaranteed if(1 + 1/H )^k^t_h≤ 2 T + 3k^t_h log(1+1/H) ≤log(2T/H^2 + 3).Thus, indicator can be equal to 1 no more than H log(2T+3) times for any t ∈ [T]. As a result,≤^2 H^3 log(2T+3).TermLet us rewrite this term using a definition of clipping operator and use the definition of near-optimal set (see Definition <ref>)= ^3 ∑_t=1^T ∑_h=1^H ^t_h {(H+1) ^t_h ≥_h(B^t_h) }≤^3 ∑_t=1^T ∑_h=1^H ^t_h{(B^t_h) ∈ Z^^t_h_h }. Next we consider the summation for a fixed h. Here we follow Theorem F.3 by <cit.> and obtain∑_t=1^T ^t_h {(B^t_h) ∈ Z^^t_h _h } = ∑_r∑_B: (B) = r∑_t: B^t_h = B^t_h {(B) ∈ Z^^t_h_h },where we applied an additional rescaling by a function ρ defined in (<ref>).Next we fix a constant r_0 > 0 and break a summation into two parts: r ≥ r_0 and r ≤ r_0.* Case r ≤ r_0. In this situation we have can apply (<ref>)∑_r ≤ r_0∑_B: (B) = r ∑_t: B^t_h = B^t_h{(B) ∈ Z^^t_h_h }= ( T r_0 ρ(H, δ, L)). * Case r ≥ r_0. 
In this situation we also apply (<ref>) under the indicator function∑_r ≥ r_0∑_B: (B) = r ∑_t: B^t_h = B^t_h {(B) ∈ Z^^t_h_h } ≤∑_r ≥ r_0∑_B: (B) = r ·ρ(H, δ, L) {(B) ∈ Z^ρ(H,δ,L) · r _h}∑_t: B^t_h = B^t_h.To upper bound the last sum we repeat the argument of (<ref>) and apply (<ref>), using the fact that (B) = r ·ρ(H, δ, L)∑_t: B^t_h = B1/√(e_k) ≤∑_k=0^k^T_h(B)e_k+1/√(e_k)≤ 4H √(e^k^T_h(B) + 1)≤ 4 √(H (n^T+1_h(B) + 1))≤ 4 √(2H)·d_max/(B) = √(32H)· d_max/r.As a result, we have by (<ref>)∑_t: B^t_h = B^t_h ≤√(32H)· d_max/r·( 2522 ^2 H (β^max(δ, T))^4 + 5L d_max / √(H))and∑_r ≥ r_0∑_B: (B) = r ∑_t: B^t_h = B^t_h {(B) ∈ Z^^t_h_h }= (∑_r ≥ r_0 N_r(Z^ρ(H, δ,L) · r_h) ·H^3/2 d_max(β^max(δ, T))^4 + L d^2_max/r).Finally, by an arbitrary choice of r_0 and a definition of zooming dimension with a scaling ρ = ρ(H,δ,L) (Definition <ref>)= ( (H^3/2 d_max (β^max(δ, T))^4 + L d^2_max) ·∑_h=1^Hinf_r̃_0{ T r_0 + ∑_r ≥ r_0C_N,h/r̃^d_z,h + 1}).Term For this term we just apply definition of the main event (δ) ⊇(δ) and obtain= ( √(H^3 T β^max(δ, T))).Final regret bound First, we notice that β^max(δ,T) = ( d_c ), therefore we have^T = ( H^3 d_c + (H^3/2 d_c^4 + L) ∑_h=1^H inf_r_0 > 0{ T r_0+ ∑_r ≥ r_0C_N,h/r^d_z,h + 1} + √(H^3 T d_c)).Taking r_0 = K^-d_z,h + 1/2 for each h and summing the geometric series we conclude the statement. § DEVIATION AND ANTI-CONCENTRATION INEQUALITIES §.§ Deviation inequality forFor a measure ν∈([0,b]) supported on a segment [0,b] (equipped with a Borel σ-algebra) and a number μ∈ [0,b] we recall the definition of the minimum Kullback-Leibler divergence (ν, μ) ≜inf{(ν,η): η∈([0,b]), ν≪η, _X ∼η[X] ≥μ} . As the Kullback-Leibler divergence this quantity admits a variational formula. For all ν∈([0,b]), u∈ [0,b),(ν,u) = max_λ∈[0,1]_X∼ν[ log( 1-λX-u/b-u)] , moreover if we denote by λ^⋆ the value at which the above maximum is reached, then _X∼ν[1/1-λ^⋆X-u/b-u] ≤ 1 .Contrary to <cit.> we allow that u=0 but in this case Lemma <ref> is trivially true, indeed(ν, 0) =0= max_λ∈[0,1]_X∼ν[ log( 1-λX/b)] . Let (X_t)_t∈^⋆ be i.i.d. samples from a measure ν supported on [0,b]. We denote by _n ∈([0,b]) the empirical measure _n = ∑_i=1^n δ_X_i, where δ_X_i is a Dirac measure on X_i ∈ [0,b]. We are now ready to state the deviation inequality for theby <cit.> which is a self-normalized version of Proposition 13 by <cit.>. Notice that this inequality is stated in terms of slightly less general definition of , however, the proof remains completely the same. For all ν∈([0,b]) and for all δ∈[0,1], (∃ n∈^⋆,n(_n, _X ∼ν[X]) > log(1/δ) + 3log(eπ(1+2n)))≤δ.§.§ Anti-concentration Inequality for Dirichlet Weighted Sums In this section we state anti-concentration inequality by <cit.> in terms of slightly different definition of . c_0(ε) = (4/√(log(17/16)) + 8 + 49 · 4 √(6)/9)^2 2/π·ε^2 + log_17/16( 5/32 ·ε^2).For any α = (α_0+1, α_1, …, α_m) ∈_++^m+1 define∈_m such that (ℓ) = α_ℓ/, ℓ = 0, …, m, where = ∑_j=0^m α_j. Let ε∈ (0,1). Assume that α_0 ≥ c_0(ε) + log_17/16() for c_0(ε) defined in (<ref>), and ≥ 2α_0. Then for any f {0,…,m}→ [0,] such that f(0) =, f(j) ≤ b < /2, j ∈{1,…,m} and μ∈ ( f,) _w ∼(α)[wf ≥μ] ≥ (1 - ε)_g ∼(0,1)[g ≥√(2 ( ∑_i=0^m (i) ·δ_f(i), μ))].Next we formulate a simple corollary of Theorem <ref>, that slightly relaxes assumptions of this theorem under assumption μ < b ≤/2. For any α = (α_0+1, α_1, …, α_m) ∈_++^m+1 define∈_m such that (ℓ) = α_ℓ/, ℓ = 0, …, m, where = ∑_j=0^m α_j. Also define a measure = ∑_i=0^m (i) ·δ_f(i).Let ε∈ (0,1). 
Assume that α_0 ≥ c_0(ε) + log_17/16(2( - α_0)) for c_0(ε) defined in (<ref>). The for any f {0, …, m}→ [0,] such that f(0) = , f(j) ≤ b ≤/2, j ∈ [m], and any μ∈ (0, b)_w ∼(α)[wf ≥μ] ≥ (1 - ε)_g ∼(0,1)[g ≥√(2 ( , μ))].Assume that assumption ≥ 2α_0 holds.Then we show that the Theorem <ref> also holds for μ≤ f. First, we notice that for any γ > 0 _w ∼(α)[wf ≥μ] ≥_w ∼(α)[wf ≥ f + γ ] ≥(1 - ε)_g ∼(0,1)[g ≥√(2 ( ,f + γ))]. By continuity ofin its second argument (see Theorem 7 by <cit.>) we can tend γ to zero, and then use an equality ( ,f) = ( , μ) = 0. Next, assume ≤ 2α_0. In this case we have f ≥ b, thus for any 0 ≤μ≤ b_w ∼(α)[ wf ≥μ]≥_ξ∼Β(α_0+1,- α_0)[ ξ≥μ] ≥_ξ∼Β(α_0+1,- α_0)[ξ≥1/2],where we first apply a lower bound f(j) ≥ 0for all j > 0 and f(0) =, and second apply a bound μ≤/2. Here we may apply the result of <cit.> and obtain the following lower bound _w ∼(α)[wf ≥μ] ≥Φ(-sign(α_0/ - 1/2) ·√(2 (α_0/, 1/2))) ≥ (1-ε)_g ∼(0,1)[ g ≥ 0]where we used α_0 /> 1/2. §.§ Rosenthal-type inequality In this section we state Rosenthal-type inequality for martingale differences by <cit.>. The exact constants could be derived from the proof. Let X_1,…,X_n be a martingale-difference sequence adapted to a filtration {_i}_i=1,…,n: [X_i | _i] = 0. Define _i = [X_i^2 | _i-1]. Then for any p ≥ 2 the following holds^1/p[ | ∑_i=1^n X_i |^p ] ≤ C_1 p^1/2^1/p[ | ∑_i=1^n _i |^p/2] + 2C_2 p^1/p[max_i∈[n]| X_i |^p],where C_1 = 60, C_2 = 60. Additionally, we need some additional lemma to use this inequality in our setting.A random variable X is called sub-exponential with parameters (σ^2, b) if the following tail condition holds for any t > 0[| X - [X] |≥ t ] ≤ 2exp( - t^2/2σ^2 + 2bt). By Theorem 1 of <cit.> we have for any ξ∈ B(α, β) with β≥α and any t > 0[ |ξ - [ξ] |≥ t ] ≤ 2exp( - t^2/2(v + ct/3)),where v =αβ/(α + β)^2 (α + β + 1)≤α/(α + β)^2,c =2(β - α)/(α + β)(α + β + 2)≤2/α + β,so ξ is (α/(α + β)^2, 2/(3(α + β))) sub-exponential. Let X_1,…,X_n be a sequence of centred (σ^2,b) sub-exponential random variables, not necessarily independent. Then for any p ≥ 2[ max_ℓ∈[n]| X_ℓ|^p] ≤max{√(8σ^2 log n), 8blog n}^p +(2σ)^p p^p/2 + 2 (8b)^p p^p.By Fubini theorem we have for any η≥ 0: [η^p] = p ∫_0^∞ u^p-1[η≥ u]u, thus for any a > 0 the following holds[max_ℓ∈ [n]| X_ℓ|^p ]= p ∫_0^∞ u^p-1[max_ℓ∈ [n]| X_ℓ - [X_ℓ] |≥ u]u ≤a^p + p ∫_a^∞ u^p-1[∃ℓ∈ [n] : | X_ℓ|≥ u]u ≤a^p + 2p ∫_a^∞ u^p-1 n exp( - u^2/2(σ^2 + bu))u.By selecting a = max{√(8σ^2 log n), 8blog n} we have n exp( - u^2/2(σ^2 + bu)) ≤exp( - u^2/4(σ^2 + bu)) ≤exp( - u^2/8σ^2) + exp( - u/8b) for any u ≥ a, thus[max_ℓ∈ [n]| X_ℓ|^p ]≤max{√(8σ^2 log n), 8blog n}^p + 2 p ∫_a^∞ u^p-1exp( - u^2/8σ^2)u + 2 p ∫_a^∞ u^p-1exp( - u/8b)u ≤max{√(8σ^2 log n), 8blog n}^p + p (2√(2)σ)^pΓ(p/2) + 2p (8b)^p Γ(p).By the bounds on Gamma-function we have pΓ(p/2) = Γ(p/2+1) ≤ (p+1)^(p+1)/2 2^-(p+1)/2^1-p/2≤ p^p/2 2^-p/2 and p Γ(p) = Γ(p+1) ≤ (p+1/2)^p+1/2^1-p≤ p^p (see <cit.>), thus[max_ℓ∈ [n]| X_ℓ|^p ] ≤max{√(8σ^2 log n), 8blog n}^p +(2σ)^p p^p/2 + 2 (8b)^p p^p. Let W_1,…,W_n be a sequence of Beta-distributed random variables W_i ∼Β(1/κ, (n-1)/κ) for κ > 0. Let {_i}_i∈[n] be a filtration such that W_i is independent from _i-1: [W_i | _i-1] = [W_i], and X_1,…,X_n be a sequence of bounded predictable random variables: [X_i | _i-1] = X_i, | X_i |≤ B.Then with probability at least 1-δ the following holds|∑_i=1^n W_i X_i - 1/n∑_i=1^n X_i |≤ 60 ^2 B √(κlog(1/δ)/n) + 1200 B κlog(n) log^2(1/δ)/nFirst we notice that Z_i = (W_i - [W_i]) · X_i forms a martingale-difference sequence: [Z_i | _i-1] = 0. 
Therefore, we can apply Theorem <ref>^1/p[ |∑_i=1^n Z_i |^p ] ≤ 60√(p)·^1/p[ | ∑_i=1^n _i |^p/2] + 120p ·^1/p[max_i∈[n]| Z_i |^p],where _i = [ Z_i^2 | _i-1] = X_i^2 (W_i). We can easily upper bound the variance of Beta-distributed random variable and obtain^1/p[ | ∑_i=1^n _i |^p/2] ≤^1/p[ | ∑_i=1^n κ X_i^2/n^2|^p/2] ≤√(κ B^2/n).For the second term we apply Lemma <ref> since W_i are (κ/n^2, 2κ/(3n))-sub-exponential^1/p[max_i∈[n]| Z_i |^p]≤ B(max{√(8κlog n/n^2), 16 κlog n/3n} + ^1/p√(κ/n^2)√(p)+ (2)^1/p16κ/3n· p ) ≤ 20 Bκ· p ·log n/n .Therefore we have^1/p[ |∑_i=1^n Z_i |^p ] ≤ 60· p^1/2√(κ B^2/n) + 1200 · p^2 B κ·log n/n.Next we turn from moments to tails. By Markov inequality with p = log(1/δ)[ |∑_i=1^n Z_i | ≥ t]≤( ^1/p[ |∑_i=1^n Z_i |^p ]/t)^p ≤( 60B √(κlog(1/δ)/n) + 1200 log^2(1/δ) Bκlog(n) / n /t)^log(1/δ).Taking t = 60 ^2 B √(κlog(1/δ)/n) + 1200 B κlog(n) log^2(1/δ)/n we conclude the statement. § TECHNICAL LEMMASLet ν∈([0,b]) be a probability measure over the segment [0,b] and let = (1-α) δ_ + α·ν be a mixture between ν and a Dirac measure on > b. Then for any μ∈ (0, b)(, μ) ≤ (1-α) (ν, μ).By a variational formula for(see Lemma <ref>)(, μ) = max_λ∈ [0, 1/(-μ)]_X ∼[ log( 1 - λ (X-μ) ) ].Sinceis a mixture, we have for any λ∈ [0, 1/(- μ)]_X ∼[ log( 1 - λ (X-μ) ) ] = (1-α) _X ∼[ log( 1 - λ (X-μ) ) ] + αlog( 1 - λ (-μ) ).Notice that max_λ > 0log(1-λ(-μ)) = 0. Thus, maximizing each term separately over λ, we have(, μ)≤ (1-α) max_λ∈ [0, 1/(-μ)]_X ∼[ log( 1 - λ (X-μ) ) ] ≤ (1-α) max_λ∈ [0, 1/(b-μ)]_X ∼[ log( 1 - λ (X-μ) ) ]= (1-α) (ν, μ).§ EXPERIMENTAL DETAILS In this section we detail the experiments we conducted for tabular and non-tabular environments. For all experiments we used 2 CPUs (Intel Xeon CPU 2.20GHz), and no GPU was used. Each experiment took approximately one hour.§.§ Tabular experiments In our initial experiment, we investigated a simple grid-world environment. Environments For tabular experiments we use two environments.The first one is a grid-world environment with 100 states (i, j) ∈ [10]×[10] and 4 actions (left, right, up and down). The horizon is set to H=50. When taking an action, the agent moves in the corresponding direction with probability 1-ϵ, and moves to a neighbor state at random with probability ϵ=0.2. The agent starts at position (1, 1). The reward equals to 1 at the state (10, 10) and is zero elsewhere. The second one is a chain environment described by <cit.> with L=15 states and 2 actions (left or right). The horizon is equal to 30, the probability of moving into wrong direction is equal to 0.1. The agent starts in the leftmost state with reward 0.05, also the largest reward is equal to 1 is the rightmost state.Variations of randomized Q-learning First we compare the different variations of randomized Q-learning on grid-world environment. Precisely we consider: * a randomized version of , detailed in Appendix <ref>.* a staged version of , described in Section <ref>.* a version of which samples one Q-value function in the ensemble to act, described in Appendix <ref>.For these algorithms we used the same parameters: posterior inflation κ=1.0, n_0=1/S prior sample (same as , see below), ensemble sizeJ=10. We use a similar ensemble size as the one used for the experiments with by <cit.>. For we use stage of sizes((1+1/H)^k)_k≥1 without the H factor, in order to have several epochs per state-action pair even for few episodes.The comparison is presented in Figure <ref>. We observe that and behave similarly with slightly better performance for . 
This is coherent with the experiment on the comparison between and <cit.> where the optimistic version performs worst than the fully randomized algorithm. We also note that even with the aggressive stage schedule, needs more episode to converge. We conclude that despite that stage simplifies the analysis, it artificially slows down the learning in practice.To ease the comparison with the baselines, for the rest of the experiments we only use because of its similarity with .Baselines We compare algorithm to the following baselines: * <cit.> a model-free optimistic Q-learning.* <cit.> a model-based optimistic dynamic programming.* <cit.> optimistic real-time dynamic programming.* <cit.> model-based posterior sampling.* <cit.> model-based randomized dynamic programming. The selection of parameters can have a significant impact on the empirical regrets of an algorithm. For example, adjusting the multiplicative constants in the bonus of or the scale of the noise in can result in vastly different regrets. To ensure a fair comparison between algorithms, we have made the following parameter choices:* For bonus-based algorithm, , we use simplified bonuses from an idealized Hoeffding inequality of the formβ_h^t(s,a) ≜min( √(1/n_h^t(s,a)) + H-h+1/n_h^t(s,a), H-h+1 ) .As explained by <cit.>, this bonus does not necessarily result in a true upper-confidence bound on the optimal Q-value. However, it is a valid upper-confidence bound for n_h^t(s,a)= 0 which is important in order to discover new state-action pairs. * For we use the variance of Gaussian noise equal to simplified Hoeffding bonuses described above in (<ref>).* For , we use a Dirichlet prior on the transition probability distribution with parameter (1/S,…,1/S) and for the rewards a Beta prior with parameter (1,1). Note that since the reward r is not necessarily in {0,1} we just sample a new randomized reward r'∼(r) accordingly to a Bernoulli distribution of parameter r, to update the posterior, see <cit.>. Results Figure <ref> shows the result of the experiments. Overall, we see that outperforms algorithm on tabular environment, but still degrades in comparison to model-based approaches, that is usual for model-free algorithms in tabular environments. Indeed, as explained by <cit.>, using a model and backward induction allows new information to be more quickly propagated. For example needs only one episode to propagate information about the last step h=H to the first step h=1 whereas or need at least H episodes. But as counterpart,has a better time-complexity and space- complexity than model based algorithm, see Table <ref>.§.§ Non-tabular experiments The second experiment was performed on a set of two dimensional continuous environments <cit.> with levels of increasing exploration difficulty. Environment We use a ball environment with the 2-dimensional unit Euclidean ball as state-space = {s∈^2, s_2≤ 1} and of horizon H=30. The action space is a list of 2-dimensional vectors = {[0.0, 0.0], [-0.05, 0.0], [0.05, 0.0], [0.0, 0.05], [0.0, -0.05]} that can be associated with the action of staying at the same place, moving left, right, up or down. Given a state s_h and an action a_h the next state is s_h+1 = proj_(s_h + a_h + σ z_h)where z_h∼([0,0] , I_2) is some independent Gaussian noise with zero mean and identity covariance matrix and proj_B is the euclidean projection on the unit ball . The initial position s_1 = σ_1 z_1 with z_1∼([0,0] , I_2) and σ_1=0.001, is sampled at random from a Gaussian distribution. 
The reward function independent of the action and the step r_h(s,a) = max( 0,1 -s - s'/c )where s'=[0.5,0.5]∈ is the reward center and c >0 is some smoothness parameter. We distinguish 3 levels by increasing exploration difficulty: * Level 1, dense reward and small noise. The smoothness parameter is c=0.5·√(2)≈ 0.71 and the transition standard deviation is σ = 0.01.* Level 2, sparse reward and small noise. The smoothness parameter is c=0.2 and the transition standard deviation is σ = 0.01.* Level 3, sparse reward and large noise. The smoothness parameter is c=0.2 and the transition standard deviation is σ = 0.025. algorithm Among the different versions of for continuous state-action space, see Section <ref>, we pick the algorithm, described in Appendix <ref>, as it is the closest version to the algorithm. It combines the algorithm and adaptive discretization. For we used an ensemble of size J=10 ≈log(T), κ = 10 ≈log(T) and a prior number of samples of n_0 =0.33. Note that we increased the number of prior samples in comparison to the tabular case as explained in Section <ref>. Baselines We compare algorithm to the following baselines: * <cit.>, an adaptation of algorithm to continuous state-space thanks to adaptive discretization;* <cit.>, a kernel-based version of the algorithm;* <cit.>, a deep RL algorithm;* <cit.>, a deep RL algorithm with an additional exploration given by bootstraping several Q-networks;For and baselines we employ the same simplified bonuses (<ref>) used for the tabular experiments. For we used Gaussian kernel of bandwidth 0.025 and the representative states technique, with 300 representative states, described by <cit.>.For and we use as netwrok a 2-layer multilayer perceptron (MLP) with hidden layer size equals to 64. For exploration, utilizes ε-greedy exploration with coefficient annealing from 1.0 to 0.1 during the first 10,000 steps. For we use ensemble of 10 heads and do not use ε-greedy exploration. Results Figure <ref> shows the results of non-tabular experiments.Overall, we see that outperforms in all environments, especially in the sparse reward setting.However, we see that model-based algorithm is much more sample efficient than model-free algorithm, as it was shown by <cit.>. This is connected to low dimension of the presented environment, where the difference in theoretical regret bounds is not so large. However, this performance come at the price of 3-times larger time complexity, see Table <ref>.Regarding the comparison to neural-network based algorithms, we see that approaches based on adaptive discretization always outperforms and on an environment with non-sparse rewards. We connect this phenomenon to the fact that neural network algorithms are solving two problems at the same time: exploration and optimization, whereas discretization-based approaches solve only exploration problem. In the setup of sparse rewards it turns out that neural network-based approaches are competitive with and . Notably, shows itself as the worst one, whereas and show similar performance, additionally justifying exploration effect of ensemble learning and randomized exploration.
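For concreteness, the following Python snippet is a minimal re-implementation of the ball environment used in these non-tabular experiments; the class layout and names are ours, while the dynamics, the reward and the three difficulty levels follow the description above.

```python
import numpy as np

ACTIONS = np.array([[0.0, 0.0], [-0.05, 0.0], [0.05, 0.0], [0.0, 0.05], [0.0, -0.05]])
LEVELS = {1: dict(c=0.5 * np.sqrt(2), sigma=0.01),   # dense reward, small noise
          2: dict(c=0.2, sigma=0.01),                # sparse reward, small noise
          3: dict(c=0.2, sigma=0.025)}               # sparse reward, large noise

class BallEnv:
    """2-D unit-ball environment with Gaussian transition noise (illustrative sketch)."""

    def __init__(self, level=1, H=30, seed=0):
        self.H, self.c, self.sigma = H, LEVELS[level]["c"], LEVELS[level]["sigma"]
        self.center = np.array([0.5, 0.5])            # reward center s'
        self.rng = np.random.default_rng(seed)

    @staticmethod
    def _project(s):
        norm = np.linalg.norm(s)
        return s if norm <= 1.0 else s / norm         # Euclidean projection on the unit ball

    def reset(self):
        self.h, self.s = 1, 0.001 * self.rng.standard_normal(2)   # s_1 = sigma_1 * z_1
        return self.s

    def step(self, a_idx):
        reward = max(0.0, 1.0 - np.linalg.norm(self.s - self.center) / self.c)  # r(s_h)
        z = self.rng.standard_normal(2)
        self.s = self._project(self.s + ACTIONS[a_idx] + self.sigma * z)        # s_{h+1}
        self.h += 1
        return self.s, reward, self.h > self.H

env = BallEnv(level=2)
s, done, total = env.reset(), False, 0.0
while not done:                                       # one episode under a random policy
    s, r, done = env.step(env.rng.integers(len(ACTIONS)))
    total += r
print("return of a random episode:", round(total, 3))
```

Switching the level parameter reproduces the three exploration regimes: level 1 gives a dense reward signal almost everywhere, whereas levels 2 and 3 leave most of the ball with zero reward and differ only in the transition noise.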
http://arxiv.org/abs/2310.18186v1
{ "authors": [ "Daniil Tiapkin", "Denis Belomestny", "Daniele Calandriello", "Eric Moulines", "Remi Munos", "Alexey Naumov", "Pierre Perrault", "Michal Valko", "Pierre Menard" ], "categories": [ "stat.ML", "cs.LG" ], "primary_category": "stat.ML", "published": "20231027145944", "title": "Model-free Posterior Sampling via Learning Rate Randomization" }
Bogolyubov Institute for Theoretical Physics, National Academy of Sciences of Ukraine, vul. Metrologichna 14b, Kyiv 03143, Ukraine [email protected] The spectrum of aone-dimensional pseudospin-one Hamiltonian with a three-component potential is studied for two configurations: (i) all the potential components are constants over the whole coordinate space and (ii) the profile of some components is of a rectangular form. In case (i), it is illustrated how thestructure of three (lower, middle and upper) bandsdependson the configuration of potential strengths including the appearance of flat bands at some special values of these strengths. In case (ii), the set of twoequations for finding bound states is derived.The spectrum of bound-stateenergies is shown to depend cruciallyon the configuration of potential strengths. Each of these configurations isspecified by asingle strength parameter V. The bound-state energies arecalculated as functionsof the strength V and a one-point approach is developed realizing correspondent pointinteractions.For different potential configurations, the energy dependenceon the strength V is described in detail, including its one-point approximation. From a whole variety of bound-state spectra, four characteristic types are singled out.Keywords: Points interactions, bound states, flat bands, Dirac equation§ INTRODUCTIONExperimental discovery of graphene attracted attention to condensed matter systems with spectrum of quasiparticles similar to the relativistic one. It is well known that quasiparticle excitations in graphene are described at low energies by the massless Dirac equation in two space dimensions. Moreover, it was shown <cit.> that more complicated fermionic quasiparticles could be realized in crystals with special space groups with no analogues in particle physics, where the Poincaré symmetry provides strong restrictions allowing only three types: Dirac, Weyl and Majorana (not discovered yet) particles with spin 1/2. In condensed matter systems, besides fermions with pseudospin 1/2, other fermions with a higher pseudospin can appear in two- and three-dimensional solids. In particular, special attention is paid to fermionic excitations with pseudospin one, whose Hamiltonian is given by the scalar product of momentum and the spin-1 matrices <cit.>.Many aspects of pseudospin-1 Hamiltonians, such as the energy spectrum having a flat band along with two dispersive bands which are linear in momentum as in graphene, are fascinating. The dice model is an example of such a system, which hosts pseudospin-1 fermions with a completely flat band at zero energy <cit.>.The quenching of the kinetic energy in flat bands strongly enhances the role of electron-electron and other interactions and may lead to the realization of many very interesting correlated states such as ferromagnetism <cit.>, superconductivity in twisted bilayer graphene <cit.> and plethora of other quantum phases <cit.>.Currently, a whole body of literature has been accumulated, which is devoted tothe investigation of physical quantities in the presence of flat bands in two-dimensional systems such as orbital susceptibility <cit.>, optical conductivity <cit.>, magnetotransport <cit.>, RKKY <cit.> and Coulomb <cit.> interactions. However, one-dimensional pseudospin-1 systems have been much less studied. 
Here, one should note the recent works by Zhang with coauthors <cit.>, where the bound state problem in a one-dimensional pseudospin-1 Dirac Hamiltonian with a flat bandwas investigated in the presence of delta- and square well potentials.In particular, the existence of infinite series of bound states near the flat band appears to be of great interest<cit.>. Very recently, the transport properties and snake states of pseudospin-1 Dirac-like electrons have been analyzedby JakubskýandZelaya <cit.> in Lieb lattice under barrier- and well-like electrostatic interactions. In the present work we consider a one-dimensional spin-1 Hamiltonian H=H_0 +V(x) with its free-particle partH_0 = -i S_y ddx +m S_z ,  S_y = 1 √(2)([0 - i   0; i    0 - i; 0     i    0 ]),   S_z =([ 1   0    0; 0   0    0; 0   0 -1 ])and a potentialV(x)= ( [V_11(x)      0           0; 0       V_22(x)       0; 0          0        V_33(x) ]).We use the matrix S_y instead of S_x in <cit.> in order to have real coefficients in the equations as the Dirac equation in the Majorana representation.Let ψ(x) =col( ψ_1(x), ψ_2(x), ψ_3(x)) be a three-component wave function. Then the Schrödinger equation [H_0 +V(x)]ψ(x)= Eψ(x) with energy E is represented in the component form as the system of three equations:[- ψ'_2(x)/√(2) +[m +V_11(x)]ψ_1(x) = E ψ_1(x),; [ψ'_1(x) -ψ'_3(x)]/√(2) + V_22(x)ψ_2(x) =Eψ_2(x),;ψ'_2(x)/√(2) - [m - V_33(x)]ψ_3(x) = E ψ_3(x), ]where the prime stands for the differentiation over x. Notice that adding the first and third equations we get an algebraic relation between the functions ψ_1 and ψ_3. In fact, we have two differential equations andone algebraic constraint. This is due to the fact that the matrix S_y is singular, detS_y=0, and its rank equals two. Thus the system (<ref>) cannot be transformed to the canonical form ψ̇_i=M_ijψ_j for the system of differential equations.The free-particle spectrum of equations (<ref>), where V_11(x)=V_22(x)=V_33(x) ≡ 0, consists of the three bands:E=0 (), E= ±√(k^2 +m^2) ().The gap in this spectrum consists of the two intervals -m < E < 0 and 0 <E<m where possible bound states can exist in the presence of a potential term. The spectrum of the Hamiltonian H_0 is particle-hole symmetric with the isolated flat band at zero energy, which is a consequence of the existence of a matrix C,C=([ 0 0 1; 0 1 0; 1 0 0 ]),that anti-commutes with H_0. It is interesting that, for another type of the mass term m diag(1,-1,1), the flat band with the energy E=m exists and touches either the upper (m > 0) or the lower (m < 0) dispersive energy band, thus violating the particle-hole symmetry (similar to the two-dimensional α- T_3 model <cit.>).While in the non-relativistic case, in the presence of an external constant potential, the free-particle spectrum is simply shifted accordingly, the spectrum of system (<ref>), in a similar situation where the strength components (V_11, V_22 and V_33) are constant over the whole x-axis, depends on the configuration of these components in a non-trivial way. Therefore it is of interest to examine the spectrum structure of a pseudospin-1 Hamiltonian depending on all the vectors col(V_11,V_22,V_33) which forms a three-dimensional space ^3. 
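Before turning to the detailed analysis, note that for strengths that are constant over the whole axis the plane-wave ansatz reduces the problem to diagonalizing the 3×3 matrix k S_y + m S_z + diag(V_11, V_22, V_33), which reproduces the three-band structure studied in the next section. A minimal numerical sketch of this check is given below (assuming NumPy is available; the function name and the sample strengths are purely illustrative):

```python
import numpy as np

# Spin-1 matrices in the convention of the Hamiltonian above
Sy = np.array([[0, -1j,  0],
               [1j,  0, -1j],
               [0,  1j,  0]]) / np.sqrt(2)
Sz = np.diag([1.0, 0.0, -1.0])

def bands(k, m=1.0, V=(0.0, 0.0, 0.0)):
    """Sorted energies of the three bands at wave number k
    for a constant potential V = (V11, V22, V33)."""
    H = k * Sy + m * Sz + np.diag(V)
    return np.linalg.eigvalsh(H)      # real eigenvalues, ascending order

m = 1.0
# Free particle: flat band E = 0 between dispersive bands +-sqrt(k^2 + m^2)
for k in (0.0, 0.7, 3.0):
    assert np.allclose(bands(k, m), [-np.hypot(k, m), 0.0, np.hypot(k, m)])

# Generic constant strengths: the middle band is dispersive (varies with k)
print([bands(k, m, V=(0.3, -0.2, 0.5))[1] for k in (0.1, 1.0, 3.0)])

# For special configurations, classified below, it becomes flat again,
# e.g. V11 + V33 = 2*V22 pins the middle band at E = V22 for every k
print([bands(k, m, V=(0.4, 0.1, -0.2))[1] for k in (0.1, 1.0, 3.0)])
```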
The further task is to single out explicitly in this space the sets of the existence of flat bands.For realizing bound statesof the pseudospin-one Hamiltonian,the components V_11(x),V_22(x) and V_33(x), defined as functions on the whole x-axis, must decay to zero at |x| →∞.Then the bound states (if any) are expected to appear within the gap -m < E< m. Having the explicit solution of equations (<ref>) with constant strength components, it is reasonable to choose the components of the potential V(x) in the form of rectangles (barriers or wells). In simple terms, such rectangular potentials describe a heterostructure composed of parallel plane layers. The particle motion in these systems is confined only along the x-axis, being free in (perpendicular) planes. In this case, for some special configurations of the strengths V_11,V_22 and V_33, it is possible to examine the bound-state spectrum in an explicit form, exhibiting a number of interesting and intriguing features.Because of the rapid progress in fabricating nanoscale quantum devices, the investigation of extremely thin layers described by sharply localized potentials is of particular interest nowadays. In this regard, the so-called zero-range or point interaction models,which are widely used in various applications to quantum physics <cit.>, should also be elaborated for Dirac-like systems. In general, a point interaction, being a singular object, is determined by the two-sided boundary conditions on a wave function, which are given at the point of singularity (say, e.g., x=± 0). In the case of a heterostructure consisting of a finite number of parallel layers, it is quite useful to apply the transfer matrix approach asa starting point to implement such a modeling. Knowing the matrix that connects the values of a wave function and its derivative (in the non-relativistic case) or the components of a spinor (in the relativistic case) given at the boundaries of each mono-layer, the full transfer matrix of the system can easily be calculated as the product of all the mono-layer matrices. The further step is to shrink the thickness of the full multi-layered system to zero. For example, in this way, the exactly solvable model has been constructed for the non-relativistic Schrödinger equation with a delta derivative potential δ'(x) <cit.>. In other studies, performed for instance in <cit.>,the squeezing limit This squeezed ...may be applied separately to each layer, fixing the distances between the layers. Here, using the transfer matrix method, the bound states of a one-dimensional Dirac equation with multiple delta potentials have been studied. Based on this method, for a similar equationwritten in a more general form,the continuity between the states of perfect transmission and bound states has been established in <cit.>.Finally, it should be emphasized that the squeezed connection matrices, whichdefine the corresponding point interactions, depend on the shape of the functions used for the squeezing limit realization. This non-uniqueness problem refers to both the non-relativistic Schrödinger equation with a δ'-like potential <cit.> and the relativistic Dirac equation with a δ-potential <cit.>. In this regard, the piecewise representation of the potential profile of layers seems to be motivated from a physical point of view because of satisfying the principle of strength additivity <cit.>. 
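Although the models considered below involve a single rectangular layer, the transfer-matrix bookkeeping described above is straightforward to automate: the connection matrix of a multi-layered stack is the ordered product of the single-layer matrices. A schematic sketch follows (assuming NumPy; the single-layer matrix is taken in the unimodular form derived for the present model in section 3, and the numerical layer parameters are placeholders):

```python
import numpy as np
from functools import reduce

def layer_matrix(k, width, eta):
    """Connection matrix of one homogeneous layer, taken in the unimodular
    form derived for the present model in section 3; k may be real or
    imaginary, so complex arithmetic is used throughout."""
    k, eta = complex(k), complex(eta)
    return np.array([[np.cos(k * width),         eta * np.sin(k * width)],
                     [-np.sin(k * width) / eta,  np.cos(k * width)]])

def stack_matrix(layers):
    """Full connection matrix of a multi-layered stack: the ordered product
    of the single-layer matrices, the first layer acting first."""
    return reduce(lambda acc, lay: layer_matrix(*lay) @ acc,
                  layers, np.eye(2, dtype=complex))

# Three placeholder layers, each specified by (k, width, eta); every layer
# matrix has unit determinant, and so does their product.
layers = [(1.3, 0.5, 0.8), (0.4j, 0.2, 2.0), (2.0, 0.1, 1.1)]
assert np.isclose(np.linalg.det(stack_matrix(layers)), 1.0)
```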
§ THREE-BAND STRUCTURE OF THE ENERGY SPECTRUM FOR THE HAMILTONIAN WITH CONSTANT POTENTIALSConsider the system for which V_jj(x) ≡ V_jj= const., V_jj∈, j=1,2,3, and rewrite equations (<ref>) in the form[-ψ'_2(x) = √(2)(E-V_1) ψ_1(x),;ψ'_1(x) -ψ'_3(x) =√(2) (E-V_2) ψ_2(x),; ψ'_2(x) = √(2)(E-V_3) ψ_3(x), ]whereV_1 := V_11+m,   V_2 ≡ V_22 ,   V_3 := V_33-mare renormalized potentials strengths (or intensities).Assume that ψ(k;x) =col(A_1, A_2, A_3)exp(± ikx) with unknowns A_j'sand a wave number k being real or imaginary. Inserting this representationinto equations (<ref>), we get a system of three linear equations. Calculating next the determinantof this system, we arrive at theequationF(E)= G(E)k^2,   k^2 ∈∖{0},G(E) =E - V_a ,   V_a := 12( V_1+V_3), where the function F(E) hasa cubic factorized form:F(E) = (E-V_1)(E-V_2)(E-V_3) Cubic equation (<ref>) with (<ref>) determines the dispersion laws that describe the relation between the energy E and the wave number k. Explicit solutions can be written using the well-known formulas for the roots of this cubic equation, but we prefer a qualitative analysis of the equation, which we will consider in the next section.A general solution of equations (<ref>) can be found if we express the constants A_1 and A_3 through A_2, using also equation (<ref>). As a result, it can be represented as the sum of linearly independent solutions:ψ(k;x) = B_1 col( -σ_1, 1 , σ_3) e^ ik x+ B_2col( σ_1,1,-σ_3 ) e^- ik x,where the constants B_1 and B_2 are arbitrary,σ_j :=ik √(2) (E-V_j) ,j=1,3,and the wave number k, beingreal or imaginary, is related to the energy E through the formula k = √((E-V_1)(E-V_2)(E-V_3)E- V_a) .§.§ Structure of dispersion and flat bands For the analysisof the three-band spectrum E =E(k), where k is real, it is convenient to usethe diagrams shown in figure <ref>.Here, without loss of generality, it is assumed that V_1 ≤ V_3. The solutions for the energy E are indicated by the points of intersectionof the cubic function F(E) and the straight line G(E)k^2 that passes throughthe middle point V_a located between the zeroes E=V_1 and E=V_3. The slope ofthis line is governed by k^2 >0 and rotating it around the `turning' point V_a,one obtains the energy bands E(k). `Moving' the zero E=V_2 of the cubicfunction F(E) along the E-axis,the all possible types of the three-bandspectrum E(k) are visually illustrated by the right diagrams in each panel. Each type consists of the lower and upper dispersive bands, whereas the middle band can be either dispersive or flat. Thus, in panels (a)–(c) and (e)–(g) for the case V_1 <V_3, we have the middlebands of the dispersivetype, which are bounded. Here, in the limits as V_2 ↗ V_a or V_2 ↘ V_a, the two-sided middledispersion bands shrinkto a flat band horizontal line continuously, asdemonstrated by panel (d). Finally, the case V_1 = V_3 is illustrated bypanels (h)–(j).Here,the flat band touches the upper dispersion band if V_2 < V_1 =V_3,the lower one if V_2 > V_1 =V_3 andboth the lower and upperdispersion bands ifV_2 =V_1 =V_3.§.§ Flat band planes As follows from equation (<ref>), the existence of flat bands is provided if the two equalitiesF(E) =G(E)=0 take place simultaneously. Then the dispersion law holds true regardless of the wave number k. In this case, the average strengthV_a must coincide with one of the zeroes E=V_j , j=1, 2, 3,of cubic function (<ref>). 
Therefore, one of the three relationsV_1 +V_3 =2V_j,   j=1, 2, 3,is the necessary and sufficient condition for the existence of flat bands.In the case j=2, the corresponding relationin (<ref>) becomesV_11 +V_33=2V_22 , describing a plane in the (V_11 ,V_22 ,V_33)-space, whichwe call from now on the A-plane. Hence, the flat band energy on this plane isE=V_2 =V_22 .Particularly, on the line V_11= V_22 =V_33≡ V, the flat band energy is shifted from E=0 (free-particle case) to E=V.In both the cases j=1,3, condition (<ref>) reduces to one equation V_1=V_3 . Consequently, this equation together with an arbitrary V_2, i.e.,V_33- V_11 = 2m,   V_22∈,defines a plane in the (V_11 ,V_22 ,V_33)-space, which we call from now on the B-plane. The flat band energy on this plane isE =V_1 =V_3= V_11+ m = V_33 -m . In the particular case V_1 =V_2 =V_3 , both equations (<ref>) and (<ref>)are satisfied. Therefore, there exists anintersection of the planes Aand B as shown in figure <ref>.Consequently, on the line A∩ B, we havethe flat band energy E = V_1 =V_2 =V_3=V_11+ m =V_22= V_33 -m . Thus, as illustrated by the diagrams in panels (d) and (h)–(j) of figure <ref>,the existence of flat bands is possible if and only if the line G(E)k^2 passes through any of zeroes E= V_j's, j=1, 2, 3, of the cubic function F(E). In the (V_11 ,V_22 ,V_33)-space, the flat bands are foundonly on the A- and B-sets, including the line of their intersection A∩ B. Therefore, these sets may be called from now on as the flat band planes. One can conclude thatpanels (d) and (h)–(j) in figure <ref>illustrate all the possible types of flat bands. In other words, any rotationof the straight (red) linearound the point E=V_a keeps the solution of equation (<ref>)regardless of the slope determined byk^2, visualizing the existence of flat bands. Panel (d) in figure <ref> describes the situation as the middle dispersion curves in panels (c) and (e)are squeezed to a (horizontal) flat line.The potential strengths in this case are found on the A-plane defined by equation (<ref>) where V_1 ≠ V_3 . The case V_1 =V_3 but V_2 ≠ V_1=V_3 is presented by panels (h) and (i). As illustrated by these panels, one of the lower or upper gaps disappears in the spectrum. The potential strengths in this case belong to the B-plane defined by equation (<ref>) where V_2 ≠ V_1=V_3 . Finally, if V_1 =V_2 =V_3 , the strengths are found on the intersection of the A- and B-planes and, as demonstrated by panel (j), both the gaps in the spectrum disappear. §.§ Eigenenergies and eigenfunctions of dispersion bands In general, if V_1 ≠ V_3 and V_a ≠ V_j , j=1, 2, 3, the situation isillustrated by panels (a)–(c) and (e)–(g) in figure <ref>,where all the three [lower E_-(k), middle E_0(k)and upper E_+(k)] bandsare dispersive. In this case,the eigenenergiesE_±(k) and E_0(k)are three roots of cubic equation (<ref>).In other cases, when the average strength V_a coincides with onethe strengths V_j's, cubic equation (<ref>)reduces to a quadratic form and the triad (V_11, V_22, V_33) ∈^3falls into one ofthe A- and B-planes or their intersection. Plane A: On the A-plane, owingto relation (<ref>),the energy of the upper and lower dispersion bands is a solution of the equation (E- V_1)(E-V_3) =k^2. Explicitly, this solution readsE= E^A_±(k) = E^A_0±√(k^2 + ( V_1 -V_3 2)^2) ,E^A_0 =V_2 ,illustrated bypanel (d) in figure <ref>, where the energies of the lower and upper dispersion bandsE^A_±(k) are depicted by the black curves and the energy of theflat band E^A_0by the blue horizontal line. 
Both the lower (V_1 < E < E_0^A) and upper(E_0^A < E< V_3) gaps are non-empty.The corresponding three eigenfunctions ψ_±^A(k;x) and ψ_0^A(k;x) are described by general solution(<ref>), where σ_1 and σ_3 are substituted byσ^A_1,±(k)= i√(2)k[ V_1 -V_32±√(k^2 + (V_1 -V_32)^2) ],  σ_1,0^A(k) = i√(2)k(V_2 -V_3) ,σ^A_3,±(k)= i√(2)k[ V_3 -V_12±√(k^2 + (V_1 -V_32)^2) ],  σ_3,0^A(k) = i√(2)k(V_2 -V_1) ,respectively.Plane B: Similarly, on the B-plane, we arrive at the equation (E-V_1)(E-V_2) =k^2, having the solutionE= E^B_±(k) = V_1 +V_22±√(k^2 + ( V_1 -V_2 2)^2), E_0^B = V_1 =V_3 . This solution is depictedin figure <ref>for two configurations, wherethe strength V_2 does not coincide with the flat band energy E_0^B:in panel (h) V_2 < E_0^B and in panel (i) V_2> E_0^B. Correspondingly,only the lower gap V_2 < E< E_0^B and the upper gap E_0^B < E< V_2are non-empty,while respectively the upper and lower gaps disappear. Similarly, the eigenfunctionsψ_±^B(k;x) and ψ_0^B(k;x) are given by wave function (<ref>),where σ_1 and σ_3 are replaced byσ^B_1,±(k)= σ^B_3,±(k)= i√(2)k[ V_1 -V_22±√(k^2 + (V_1 -V_22)^2) ] , σ^B_1,0(k) =σ^B_3,0(k) = i√(2)k(V_1-V_2)= i√(2)k(V_3-V_2). Line A∩ B: On the A∩ B-line, energies (<ref>) and (<ref>) reduce toE= E_±^A∩ B(k) =E_0^A∩ B± |k|,E_0^A∩ B = V_1 =V_2=V_3 .As illustrated by panel (j) in figure <ref>, both the gaps for this configuration disappear. Setting in (<ref>) and (<ref>) V_1=V_2 =V_3, we obtainσ^A ∩ B_1,±(k) = σ^A ∩ B_3,±(k) =±sgn(k) i√(2), σ^A ∩ B_1,0(k) = σ^A ∩ B_3,0(k) =0. Finally, notice that, as follows from the general formula for energy (<ref>), in the particular caseV_11 = V_22 =V_33≡ V, we have the energy shift E =V ±√(k^2 +m^2) from the free-particle spectrum, similarly to the one-dimensional non-relativistic case. The components of the wave function, in this case, are as followsψ_+(k;x) = B_1 ( [ m+√(k^2 +m^2); i√(2) k; m-√(k^2 +m^2) ]) e^ ikx + B_2 ( [ m+√(k^2 +m^2);-i√(2) k; m-√(k^2 +m^2) ]) e^- ikx,ψ_0(k;x) = B_1 ( [ m; i√(2) k; m ]) e^ ikx + B_2 ( [m; -i√(2) k;m ]) e^- ikx,ψ_-(k;x) = B_1 ( [ m-√(k^2 +m^2); i√(2) k; m+√(k^2 +m^2) ]) e^ ikx + B_2 ( [ m-√(k^2 +m^2);-i√(2) k; m+√(k^2 +m^2) ]) e^- ikx ,where B_1 and B_2 are arbitrary constants. Particularly, on the line A∩ B, where the gapless spectrum occurs,wave function components (<ref>) are simplified tothe formψ_+(k;x) =B_1 ( [ 1; sgn(k)i√(2);-1 ]) e^ ikx + B_2 ( [ 1; - sgn(k)i√(2);-1 ]) e^- ikx ,ψ_0(k;x) =B_1 ( [ 0; sgn(k)i√(2); 0 ]) e^ ikx + B_2( [0; - sgn(k) i√(2);0 ]) e^- ikx, ψ_-(k;x) = B_1( [-1; sgn(k)i√(2); 1 ]) e^ ikx + B_2( [ -1; - sgn(k) i√(2);1 ]) e^- ikx,with arbitrary constants B_1 and B_2 . § BOUND STATES OF THE HAMILTONIAN WITH RECTANGULAR POTENTIALS In the previous section we have examined the solutions of system (<ref>) with athree-component potential V(x) =(V_11,V_22,V_33), which isconstant on the whole x-axis. Bound states can be materialized if thepotential V(x) is compactly supported on some finite interval. In this regard, we focus here on the potential components, each being ofa rectangular form. 
More precisely, we assumeV(x)= {[ col(V_11,V_22,V_33) x_1 ≤ x ≤ x_2 ,;col(0, 0, 0)(-∞, x_1)∪ (x_2, ∞), ].where the points x_1 and x_2 are arbitrary.Within the interval x_1 < x < x_2 , the representation of general solution (<ref>) can also be rewritten in the terms of trigonometric functions as followsψ(x) =C_1 ( [ ( E -V_1)^-1ksin(kx); √(2) cos(kx); (V_3 -E)^-1k sin(kx) ]) +C_2 ( [ ( E -V_1)^-1kcos(kx); - √(2) sin(kx); (V_3 -E)^-1k cos(kx) ]) ,where C_1 and C_2 are arbitrary constants. For realizing bound states, beyond the interval x_1 < x < x_2, wave function (<ref>) must decrease to zero at infinity.Settingin (<ref>)–(<ref>),k=iκ, κ >0, V_1=m, V_2=0 and V_3 =-m, we arrive at the following finite representation of general solution (<ref>) outside the interval x_1 < x < x_2:ψ(x) = {[ D_1col(ρ^-1, √(2), ρ)e^κ (x-x_1) - ∞ <x ≤ x_1 ,; D_2col( ρ^-1, - √(2),ρ)e^- κ (x-x_2)x_2 ≤ x < ∞ , ].where D_1 and D_2 are arbitrary constants,κ : = √(m^2-E^2)ρ := √(m-E m+E) .The four constants C_1 , C_2 and D_1 , D_2 in expressions (<ref>) and (<ref>) can be determined by using matching conditions imposed on the boundaries x=x_1 and x=x_2 . The requirement for continuity of all the three components of the wave function ψ(x) at x=x_1 and x=x_2 leads tosix equations that involve only the four constants. Therefore such matching conditions are not appropriate. However, as can be seen from the structure of system(<ref>), it is not necessary to require the continuity of the components ψ_1(x) and ψ_3(x). Instead, it is sufficient to impose the continuity ofψ_1(x) -ψ_3(x) and ψ_2(x), so that the components ψ_1(x) and ψ_3(x), each alone, may in general be discontinuous at x_1 and x_2. Thus, from (<ref>) and (<ref>), weobtain the following four equations:[C_1 sin(kx_1) +C_2 cos(kx_1) = D_1/γ ,; C_1 cos(kx_1) - C_2 sin(kx_1) = D_1 ,;C_1 sin(kx_2) +C_2 cos(kx_2) = D_2/γ ,;C_1 cos(kx_2) -C_2 sin(kx_2) = - D_2 , ]where k is given by (<ref>) andγ :=κ k(1 -V_2 E).Note that the boundary conditions, imposed above on the components ψ_1(x) - ψ_3(x) and ψ_2(x) at x=x_1 and x=x_2, provide the continuity of the net currentj(x)=ψ^† S_yψ = i√(2) [ψ_2^*(ψ_1-ψ_3)-(ψ_1^*-ψ_3^*)ψ_2].Equating the determinant of the system of equations (<ref>) to zero, one can derive a necessary condition for the existence of bound states. In general, the solution inside the interval x_1 ≤ x ≤ x_2can be given throughamatrixconnecting the values of the functionsψ_1(x) -ψ_3(x) and ψ_2(x) atthe boundarypoints x=x_1 and x=x_2.We define this connection matrix as follows ( [ (ψ_1-ψ_3)(x_2); ψ_2(x_2) ])=Λ( [ (ψ_1-ψ_3)(x_1); ψ_2(x_1) ]),  Λ := ( [ λ_11  λ_12; λ_21  λ_22 ]).Using then the boundary values of the componentsψ_1(x) -ψ_3(x) and ψ_2(x) obtained from wave function (<ref>)and excluding the constants D_1 and D_2, we get the equation for the bound state energy E=E_b given in terms of the connection matrix Λ that describes any potential profile inside the interval x_1 ≤ x ≤ x_2:λ_11+ λ_22 + κ√(2) Eλ_12+ √(2) E κλ_21 =0,where κ is defined in (<ref>). In one dimension, similar equations have been establishedin <cit.> for the non-relativistic Schrödinger equation and in<cit.> for the Dirac equation.§.§ Explicit formula for the connection matrix Λ In the particular case of solution (<ref>), the connection matrix Λcan be calculated explicitly.Indeed, using this solution on the intervalx_1 ≤ x ≤ x_2, we write[(ψ_1 -ψ_3)(x) =iη( -B_1 e^ ikx + B_2 e^- ikx),; ψ_2(x)= B_1 e^ ikx + B_2 e^- ikx, ]withη := - i(σ_1 +σ_3)= √(2) k(E-V_2) ,where k is given by (<ref>). 
Fixing equations (<ref>) at x =x_1, we find from these equations the constants B_1 and B_2 and then substitute these values again into equations (<ref>), but now fixed at x=x_2. As a result, we get the Λ-matrix in the formΛ=( [ cos(kl)       ηsin(kl); -η^-1sin(kl)   cos(kl) ]),l := x_2 -x_1 ,wherek and η are given by formulas (<ref>) and (<ref>), respectively. §.§ Basic equations for bound state energies Inserting the elements of Λ-matrix (<ref>) into general equation (<ref>), we obtain the equation for the energy of bound states E=E_b in the form2 + ( γ - 1 γ) tan(kl)=0,where the functions k=k(E) and γ(E) are defined by formulas (<ref>) and (<ref>), respectively.Equation (<ref>) splits into twosimple equations with respect to the unknowns, which we denote from now on as E =E^+ and E= E^-. As a result, these equations readγ= {[ -(kl/ 2)E =E^+,; tan(kl/ 2)E =E^-. ].The solutions to these equations, where k(E) and γ(E) are given by (<ref>) and (<ref>),describebound state energies E= E^±_b, the total number of which at a given three-component strength(V_11, V_22, V_33) ∈^3 may be finite or even infinite. Each of these energies must belong to the gap (-m,m). The existence of the solutionsE^±_b ∈ (-m, m)follows from argument that each of equations (<ref>) can be represented in the formκ /E = f(E), where the function f(E) varies on the interval -m < E < mslowly thanκ/E.§.§ Bound state eigenfunctions From matching conditions (<ref>), one can write the relations between the constants C_1 and C_2 as followsC_1[cos(kx_1)-γsin(kx_1)]= C_2[ sin(kx_1)+γcos(kx_1)],C_1[ cos(kx_2)+γsin(kx_2)]= C_2[ sin(kx_2)- γcos(kx_2)],which are equivalent because of equation (<ref>). Inserting here γ from equations (<ref>), we find[ C_1 sin(ka) = -C_2cos(ka) E^+ ,;C_1 cos(ka) =  C_2 sin(ka)E^-, ] a := 1/2(x_1 +x_2).Using next equations (<ref>) and relations(<ref>) in generalsolution (<ref>),we obtain on the interval x_1 < x < x_2the following two (even and odd parity)forms for the wave function:ψ^+(x)=C_1 cos(ka)( [(E -V_1)^-1k sin[k(x-a)];√(2) cos[k(x-a)]; (V_3 -E)^-1 k sin[k(x-a)] ])     E=E^+,    ψ^-(x)=C_2 cos(ka)( [( E -V_1)^-1k cos[k(x-a)]; - √(2) sin[k(x-a)]; ( V_3 -E)^-1 k cos[k(x-a)] ])    E=E^-.The parity transformation of a three-component fermion ψ(x) is defined as the reflection with respect to a point x=a: ψ(x+a) →ψ^P(x)=P ψ(-x+a), where the matrix P= diag(-1,1,-1) anti-commutes with S_y and commutes with S_z.Beyond the interval x_1 ≤ x ≤ x_2 ,from matching conditions (<ref>), one can find the constants D_1 and D_2.Using then relations (<ref>), weget the wave functions ψ^±(x) in the formψ^+(x) =C_1 cos(kl/2) cos(ka){[ col(ρ^-1,√(2) ,ρ) e^κ (x-x_1), -∞ < x < x_1 ,; col(-ρ^-1,√(2) ,- ρ) e^-κ (x-x_2),x_2 < x < ∞ , ].for E=E^+ andψ^-(x) = C_2 sin(kl/2) cos(ka){[ col(ρ^-1,√(2) ,ρ) e^κ (x-x_1), -∞ < x < x_1 ,; col(ρ^-1, - √(2) ,ρ) e^-κ (x-x_2),x_2 < x < ∞ , ].for E=E^-. Note that representation (<ref>)–(<ref>) allows us to set here a=0. Particularly, a=0 if x_1 =-l/2 and x_2 =l/2. The shape of the eigenfunctions ψ^±(x)given by formulas (<ref>)–(<ref>)is illustrated by figure <ref>.Notice thatthe discontinuity of the components ψ_1(x) and ψ_3(x) at the boundaries x=x_1 and x=x_2 is calculated as follows[ψ^+_j(x_1-0)-ψ^+_j(x_1+0)=ψ^+_j(x_2-0)-ψ^+_j(x_2+0)=C_1Δ^+,; ψ^-_j(x_1-0)-ψ^-_j(x_1+0)=ψ^-_j(x_2+0)-ψ^-_j(x_2-0)=C_2Δ^- , ]where j=1, 3 andΔ^± = μκcos(ka){[ cos(kl/2) E= E^+,; sin(kl/2) E= E^-, ]. 
μ : = m -E (V_1 - V_3)2E- V_1 -V_3 .In the particular case with V_11 =V_33≡ 0 and an arbitrary V_22, we have μ =0 for any energy E, so that ψ_1(x) and ψ_3(x) are continuous at x_1 and x_2 in this particular case. § CHARACTERISTIC SPECTRA OF BOUND STATES Based on equations(<ref>), where k and γ are given by expressions (<ref>) and (<ref>), a whole variety of bound states can be materialized, regarding various valuesof the rectangular potential strengths in formula (<ref>), where k may be either real or imaginary. In particular, on theflat bandsdefined by equations (<ref>) and (<ref>),expression (<ref>) is simplified, reducing to the equationsk= {[ √((E-V_1)(E- V_3))(V_2=V_a) A,; √((E-V_1)(E- V_2))(V_1 = V_3) B,;|E-V_2|(V_1=V_2=V_3)A∩ B. ].However, the bound states can also exist if the strengths are found beyond the flat band planes A and B. For the following analysis of theexistence of bound states, we restrict ourselves to the investigation on two pencils of straight linesinthe (V_11, V_22,V_33 )-space using a single strength parameter V.In general, a pencil of lines is defined as the set of lines passing througha common point (vertex) in the space. We have chosen two such points inthe (V_11, V_22,V_33 )-space:(0, 0 0) and (-m, 0, m). More precisely, we consider the following two pencil representations:V_11= α_1V,  V_22= α_2V,   V_33= α_3V ,with the vertex at the origin (0, 0, 0) andV_1= α_1V ,   V_2= α_2V,    V_3= α_3V ,with the vertex at the shifted point (-m, 0, m). Here, α_j ∈, j=1, 2, 3, with certainconstraintsto be imposed below in each particularcase. In the following, we refer to these representations as to the pencils P_1 and P_2, respectively. Correspondingly, wave number (<ref>) takes the following forms:k = √(2(E-α_1V-m)(E-α_2V)(E-α_3V +m)2E - (α_1 + α_3)V) P_1andk = √(2(E-α_1V)(E-α_2V)(E-α_3V )2E - (α_1 + α_3)V) P_2 . Some of the lines from the pencils P_1 and P_2 fall into the flat band planes A and B. For example, the line with α_1 =α_2= α_3 from the pencil P_1corresponds to thepotential referredin <cit.> as the potential oftype I and the corresponding line falls into the A-plane. The other two particular cases of the pencil P_1 are α_1 =α_3 =0, α_2 ≠ 0(type II, as referred in <cit.>) and α_1 ≠ 0, α_2 =α_3 =0 (type III, as defined in <cit.>). Both these examples correspond to the potentials withstrengths found outside the flat band planes. The lines with α_1 = α_3 (V_1 =V_3) from the pencil P_2 fall into the B-plane. Finally, the particular exampleα_1 =α_2= α_3 (V_1 =V_2 =V_3) in the pencil P_2 corresponds to the A∩ B-line.Solving equations (<ref>) with γ and k given by (<ref>), (<ref>) and (<ref>), for admissible fixed values of the coefficients α_j's, one can investigatea bound state energy E=E_b (if any)as a function of the strength V on the whole V-axis. As demonstrated below,different scenarios of such a behavior occur that depend on the configuration of α_j's in the pencils P_1 and P_2. Thus, thenumber of bound states at a given value of V may be finite or even infinite. 
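For any fixed configuration of the strengths, equations (<ref>) can also be solved numerically. A minimal root-finding sketch is given below (assuming NumPy and SciPy; the function names are illustrative). To cover both real and imaginary k with a single expression, each equation is multiplied by kE, which removes the poles of γ, and sign changes of the resulting real-valued functions are bracketed on a grid; since this multiplication can in principle introduce spurious zeros, any root found this way should be checked against the original equations.

```python
import numpy as np
from scipy.optimize import brentq

def bound_state_energies(m, l, V11, V22, V33, n_grid=20001):
    """Bound-state energies in the gap (-m, m) for a rectangular
    three-component potential of width l; returns the E^+ and E^- branches."""
    V1, V2, V3 = V11 + m, V22, V33 - m
    Va = 0.5 * (V1 + V3)

    def k_of(E):
        den = E - Va
        if den == 0.0:            # avoid the pole of k^2 if a grid point hits E = Va
            den = 1e-300
        # complex square root covers both real and imaginary wave numbers
        return np.sqrt(complex((E - V1) * (E - V2) * (E - V3) / den))

    def f_plus(E):                # gamma = -cot(k l / 2), written in pole-free form
        k, kap = k_of(E), np.sqrt(m * m - E * E)
        return (kap * (E - V2) * np.sin(k * l / 2) / k + E * np.cos(k * l / 2)).real

    def f_minus(E):               # gamma = tan(k l / 2), written in pole-free form
        k, kap = k_of(E), np.sqrt(m * m - E * E)
        return (kap * (E - V2) * np.cos(k * l / 2) - E * k * np.sin(k * l / 2)).real

    def roots(f):
        E = np.linspace(-m + 1e-9, m - 1e-9, n_grid)
        y = np.array([f(e) for e in E])
        good = np.isfinite(y)
        found = []
        for i in range(n_grid - 1):
            if good[i] and good[i + 1] and y[i] * y[i + 1] < 0:
                found.append(brentq(f, E[i], E[i + 1]))
        return found

    return roots(f_plus), roots(f_minus)

# Example: the 'type I' configuration V11 = V22 = V33 = V of the pencil P_1
m, l = 1.0, 2.0
for V in (2.0, 6.0, 12.0):
    Ep, Em = bound_state_energies(m, l, V, V, V)
    print(V, np.round(Ep, 4), np.round(Em, 4))
```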
For some configurations of the coefficients α_j's, the number of bound states may increase owing to detachments from the thresholds E= ± m.Notice that, due to equations (<ref>) where κ >0, for any configuration of potential strengths, the energy E_b must be found in the gap (-m,m).§.§ Bound states with asymptotically periodic energy behaviorHere we introduce the notion `asymptotic periodicity', whichmeans that the solutions to equations (<ref>), consistingof repeating pieces on the V-axis, in the limit as |V| →∞,become exactly periodic. Such a behavior can occur if γ→ const.≠ 0 and k ∝ |V| for large V. This happens if all the coefficients α_j's in both representations (<ref>) and (<ref>) are non-zero. Without loss of generality, one can put here α_2 =1.For large V, the asymptotic representation of equations (<ref>) can be treated as follows.For both the pencils P_1 and P_2, according to (<ref>), (<ref>) and (<ref>), we have k ∼√(β) |V| and γ∼ -sgn(V) κ/√(β) E whereβ := 2α_1 α_3 α_1 +α_3∈ (α_1+ α_3 ≠ 0),so that asymptotically equations (<ref>) becomeκ√(β) E∼{[(√(β) Vl/2) E=E^+,; - tan(√(β) Vl/2) E=E^-. ].From these asymptotic relations, for β >0, we get the periodic behavior:([ E^+_b; E^-_b ]) ≃ sgn(tan√(β) Vl2)m ( [ [ 1 + β^2(√(β) Vl/2)]^-1/2; - [ 1+ βtan^2(√(β) Vl/2)]^-1/2 ])thatconfirms the asymptotic periodicity of the bound state energies E^±_b for both thepencils P_1 and P_2. In the particular case β=1, solutions (<ref>) are simplified reducing to the form([ E^+_b; E^-_b ]) ≃ m( [sgn[cos(Vl / 2)] sin(Vl/2); -sgn[sin(Vl / 2)] cos(Vl/2) ]). Notice that the lines ofP_1 satisfying the conditionα_1 +α_3 =2 fall intothe A-plane [see equation (<ref>)], whilethe lines of P_2 with α_1 =α_3 appear in the B-plane[see equation(<ref>)]. The other values ofα_1 and α_3correspond to thepotentials located outsidethe flat band planes Aand B. In the particular case of P_1 with α_1 =α_3=1(the potential of type I), we have β=1 and, as a result, equations (<ref>) reduce to κ k(1 -VE) = {[- (kl/2)E=E^+,; tan(kl/2) E=E^- , ]. k=√((E-V)^2-m^2) .As follows from the form of these equations,their solutions existon the whole V-axis, where k is real (in the region |E-V| >m) and imaginary (in the region |E-V| <m), including the lines |E-V| =m. For fixed l, these solutions are depicted in figure <ref>.In the region |E-V| <m (k is imaginary), one of the solutions (for E^+_b)connects (on both the lines |E-V|=m) the pieces of the solution with k>0, while the other solution for E^-_b (completely located in the region |E-V| <m) appears as an additional branch in the almost periodic series of the solutions displayed on the whole V-axis.§.§ Spectra with an asymptotically double bound states In the previous subsection, we have established that the sufficient conditionfor the periodicity of the bound state spectrum in the limit as |V| →∞is β >0, where β is given by relation (<ref>) andthe coefficientα_2 in both the pencils P_1 and P_2 is non-zero(α_2 =1). Let us assume now that the parameter β is negative. Then,setting i√(-β) in asymptotic equations(<ref>) instead of √(β), we arrive at the following two monotonic solutionsfor large V:([ E^+_b; E^-_b ]) ≃ sgn(V) m( [[ 1 - β^2(√(-β) Vl/2)]^-1/2; [ 1- βtanh^2(√(-β) Vl/2)]^-1/2 ])→ sgn(V) m √(1-β) . 
In the particular case of the pencil P_2 with α_1 =α_3 =-1(β =-1) and α_2 =1, according to (<ref>), we have k = √(E^2 -V^2), so that the explicit form of equations (<ref>) in the cone region |E| >|V| (where k is real) becomessgn(E-V) κ E√(E-VE+V) ={[-( √(E^2 -V^2)l/ 2)E=E^+ ,; tan( √(E^2 -V^2) l/ 2)E=E^- . ]. Beyond the cone, k is imaginary and instead of equations (<ref>), we havesgn(V-E) κ E√(V-EV+E) ={[-( √(V^2-E^2 )l/ 2)E=E^+ ,; tanh( √(V^2-E^2) l/ 2)E=E^- . ].The cone boundary |E|=|V| separates the regions with k real and imaginary (k=0)and on this setthere are two simple solutions of this equation for V ∈ (-m,m): E_b^± =∓ V. The solutions of equations (<ref>) and (<ref>) are depicted in figure <ref>. One can specify the E^-_b solution as a ground state and theE^+_b solutionas an excited state for V<0, while for V>0 their roles are reversed. §.§ Spectra consisting of an infinite number of bound statesConsider now the pencils P_1 and P_2, in which α_1 + α_3 = 0 and α_2 =1. Then, setting in this subsection α_1 ≡α =- α_3, for the pencil P_1, expressions (<ref>) and (<ref>)becomeγ = κ k( 1 - VE), k=√([ E^2 -(α V +m )^2 ] ( 1 - VE) ) .For the pencil P_2, in this expression for k, it is sufficient to set formallym=0. Assume first that α≠ 0. Then, using for large Vexpressions(<ref>), one can represent asymptotically equations (<ref>) in the form κ VkE∼{[ (kl/2) E=E^+,; -tan(kl/2) E=E^-, ]. k∼ |α|√(V^3E) .Since V/k → 0 as|V| →∞, and takingintoaccount that |E| < m,from the right-hand sides ofequations (<ref>), we obtain the followingapproximate solutions for the bound state energies for large V:E_b^± = E_n ≃( α ln π)^2V^3,|V| < ( nπα l)^2/3m^1/3,n=1,2, … , where odd n's stand for E^+_b and even n's for E^-_b.Here, withincreasing the nth level, the energies E_n are successively cuttingat the thresholds E= ± m. For the illustration of the behavior of the bound state energieson the whole V-axis, let us consider the line in the pencil P_2,for which α_1 = α_2 =- α_3=1. Then, equations (<ref>)take the explicit form as follows sgn(1 -VE) κ√(E(E+V)) ={[-(k l/ 2)E=E^+ ,; tan(kl/ 2)E=E^- , ].withk= √((E^2 -V^2)(1 - V / E)) [instead of k in (<ref>)]. The series of exact solutions of equations (<ref>) is depicted in figure <ref>, one of which issimple: E^+_b = -V. In this figure,the region where k is real consists of the cone |E| > |V|plus the two strips (0<E< m,0<V< ∞, E<V) and (- m <E< 0, -∞ <V< 0,E>V). In the region consisting of the two strips (0<E< m,-∞ <V< 0, E + V<0) and (-m <E< 0,0<V< ∞, E+V>0), k is imaginary. Setting k=i√((V^2 -E^2)(1 - V / E))into equations (<ref>)and taking into account that 1 -V/E >0 in these strips, we conclude thatthe left- and right-hand sides of equations have opposite signs. Therefore there are no solutions in the regionwhere k is imaginary. Since at the thresholds E=± m, we have κ =0, the cutoff of the bound state energies E_b is determined by the solutions of the equations(kl/2)=0 and tan(kl/2)=0 or correspondingly by the explicit equations(V+m)^2(1-V/m)=(nπ/l)^2 for V≤ -m and(V-m)^2(1+V/m)=(nπ/l)^2 for V≥ m. A similar spectrum consisting of an infinite number of bound state energies,but without cutoffs at the thresholds E = ± m, takes also place in the particular case α =0, where we are dealing with the potential of type II (V_11=V_33 =0,V_22 =V), studied in <cit.>. 
More precisely, this is the line from the pencil P_1 with α_1 =α_3 =0 and α_2 =1, which does not belong to either the A- orB-planes.Thus, owing to (<ref>) and (<ref>),we havek= κ√(VE-1) , γ =- √(VE-1) , so thatequations (<ref>) can be rewritten in the explicit form asfollows√(VE-1) ={[ [(κ l/ 2)√(V/ E-1) ]E=E^+ ,; -tan[(κ l/ 2)√(V/ E-1) ]E=E^- . ].Here, k is real in the two strips: (0<E< m, 0<V< ∞ and E<V) and(-m <E<0, -∞ <V<0 and E>V). In the case if k is imaginary, the left- and right-hand sides of equations (<ref>)have opposite signs, therefore there are no solutions with imaginary k. The solution E^- =V that corresponds to k=0, splits the regions of real and imaginary k's. This means that the sign of the bound state energy E_b^+ must coincide with the sign of the strength V, i.e., the bound state energies E_b^± must be both positive if V>0, and negative if V < 0. The solution of equations (<ref>) on the whole V-axis is depicted in figure <ref>, whereit isshown that the E_b^+- and E_b^--levels alternate. Here, as follows from the form ofequations (<ref>), for each V, there existsan infinite number of energy levels.For large V,the approximate solution of equations (<ref>), which coincides with formula (33) in <cit.>, readsE_b^± =E_n≃ sgn(V)√((nπ/l)^44V^2 +m^2) -(nπ/l)^22V <m,n=1, 2 …,where even n's stand for E_b^+ and odd n's for E_b^-. There is also a solution that corresponds to n=0:E_b^+=E_0≃ sgn(V)m/√(1+ (2/Vl)^2),which faster approaches the threshold values E=± m as |V|→∞. If V→ 0, the energy of all the levels is proportional to the potential strength (E_n∝V).§.§ Bound states with a successive detachment from the thresholds In this subsection, we describe the spectrum of bound states, the number of which is finite for each fixed strength V, however, this number increaseswith growth of V because of a successive detachment of new bound states from the thresholds E = ± m. To this end, let us consider both the pencils P_1 and P_2 with α_2 =0. Notice that the lines withα_1 = -α_3 in the pencil P_1 fall into the A-plane (V_11 + V_33 =2V_22=0), while the lines with α_1 =α_3 in the pencil P_2 appear in the B-plane (V_1 = V_3).In general, ifα_2 =0, but α_1 ≠ 0 andα_3 ≠ 0, equations (<ref>) for both the pencils P_1 and P_2can be solved exactly on the whole V-axis. The solutions exist only if k>0 because for imaginary k, both the sides of equations (<ref>) have opposite signs,including the limit k → 0.Analytically,one can investigate the asymptotic behavior of the bound state spectrum if V is sufficiently large. Thus, for both the pencils P_1 and P_2,we have k ∼√(-β EV) with βgiven by (<ref>), so that asymptotically for large V, equations (<ref>) becomeκ√(-β EV)∼{[ -(√(-β EV) l/ 2)E =E^+,; tan(√(-β EV) l/ 2)E =E^-. ].Solving these asymptotic equations and using that |E|<m, we getE_b^± = E_n ≃ -(nπ/ l )^2 β V ,(nπ/l)^2|β | m < |V| < ∞ , n=1, 2, … ,where odd n's stand for E^+_b and even n's for E^-_b. One more solution,E_b^- = E_0 ≃- sgn(β V) m √(1 + (β V l/2)^2)that corresponds to n=0,is obtained studying the limit as EV → 0. These solutions are illustrated by figure <ref>, where an exact solution to equations (<ref>) on the whole V-axis is represented for the particular case β =1. In this case, in equations (<ref>), we have k= √(E(E-V)) and γ = κ/k. One can consider another configuration of the coefficients α_j in the pencils P_1 and P_2, namely α_2 ≠ 0 but α_3 =0. Let us consider the configuration α_1 ≡α > 0, α_2 =1 and α_3 =0 in the pencil P_1. 
Then, for large V, we have k ∼√(-α (m+E)V) and therefore equations (<ref>) becomeκE√(- V α (m+E))∼{[- (√(-α (m+E)V) l/2) E =E^+,; tan(√(-α (m+E)V) l/2) E =E^-. ].Solving these asymptotic equations for large V and using that |E |< m, we get the following approximate solution:E_b^± = E_n ≃ -[m + (nπ/ l )^2 α V], - ∞ < V < - (nπ/l)^22α m ,n=1, 2, … ,where even n's stand for E^+_b and odd n's for E^-_b. Except for this solution, there exists also a solution (assigned by the number n=0) that approaches the thresholds E=± mmore rapidly. For k >0, only the first equation (<ref>) admits a solution in the limit as E → -m, which must be negative. This solution is valid only for V <0 and itcoincides with that given by formula(<ref>). On the other hand, for positiveV, k is imaginary and, as a result, in the limit as E → m and V →∞, both equations (<ref>) have also solutions, whichcoincide in the limit as V →∞. Thus, on the whole V-axis, the n=0 bound state energy solution approximately readsE_b^+ = E_0 ≃ m {[ - (1+ 4/V^2l^2)^-1/2 V<0,; (1+ 2α m /V )^-1/2 V>0. ]. Consider the particular case ofthe line from the pencil P_1 with α_1 =2, α_2 =1 and α_3 =0.This line falls into the A-plane and, as follows from representation(<ref>),k = √((m+E)(E-2V-m)) , so that equations (<ref>) with this k becomeκ k (1 -V E) ={[ -(kl/ 2)E =E^+,; tan(kl/ 2)E =E^-. ]. Solving these equations, we obtain the bound state spectrum onthe whole V-axis, which is depicted in figure <ref>. In the region where E > 2V+m and E ∈ (-m,m), we have k>0 and, as a result, along the negative half-axis V, the successive detachment of bound state energies occurs. For imaginary k, there are two solutions, which are displayed for positive V. In the limit as V →∞, these solutions merge to a single bound state energy.Thus, we have examined some types of bound state spectra, in which the energy levels E=E_b^± crucially depend on the strength components in the (V_11, V_22, V_33)-space. The set of admissible vectors in this space has been restricted by the two pencils of straight lines P_1 and P_2with the vertices at the points (0, 0, 0) and (-m,0, m). As defined by equations (<ref>) and (<ref>), both the pencils are parametrized by the strength parameter V and the coefficients α_j ∈, j=1, 2, 3. Therefore, for a given set of these coefficients,it is possible to describe the bound state energiesas functions of theparameter V. According to the asymptotic behavior of the energy levels for large values of V, from the whole variety of spectra, we have singled out at leastfour characteristic species, referred in the following to as P, D, H and Wtypes: (i) The spectra of type P are described by the two-valuedalmost periodic levels E= E_b^+ and E= E_b^- as illustrated by figure <ref>. This type is realized on the setA_P := {α_j   P_1  P_2  | α_j ≠ 0, j=1, 2, 3,  α_1 α_3 α_1 + α_3 >0}.The spectra of this type are periodic in the limit as |V| →∞.(ii) The spectra of type D are described by thedouble-valued levels E = E_b^+ and E=E_b^- with a monotonic behavior for |V| >m. In the limit as |V| →∞, the double levels merge to single levels. This type is materialized on the setA_D:= {α_j   P_1  P_2  | α_j ≠ 0, j=1, 2, 3,  α_1 α_3 α_1 + α_3 <0 }andillustrated by figure <ref>.(iii) The spectraof type H, consisting of an infinite number of energy levels that obey the law E_b^± = E_n ∝ n^-2, n ∈, resemblethe hydrogen atom spectrum. 
One of the spectra of type H includesthe levels with successive cutoffs at the thresholds E =± m as illustrated by figure <ref>, so that at a given V, the number of levels is finite but it increases to infinity as |V| →∞. This spectrum is realized on the setA_H,1:= {α_j   P_1  P_2  | α_1=- α_3 ≠ 0, α_2 ≠ 0 }.The other H-spectrum, shown in figure <ref>, is materialized on the setA_H,2:= {α_j   P_1 | α_1=α_3 = 0, α_2 ≠ 0 }.Unlike the previous spectrum, the number of levels is infinite at a given V for this spectrum.(iv) The spectra of type W, consisting of the energy levels E_b^± = E_n ∝ n^2, n ∈, withtheir successive detachment from the thresholds E=± m,resemble the spectrum of a potential well with its increasing depth but fixed width.One of these spectra, illustrated by figure <ref>, is realized on the set A_W,1:= {α_j   P_1  P_2  | α_1≠ 0, α_3 ≠ 0, α_1 +α_3 ≠ 0, α_2 = 0 }. The other W-spectrum, shown infigure <ref>, is materializedon the set A_W,2:= {α_j   P_1  | α_1> 0, α_2 ≠ 0, α_3 =0 }.§ POINT INTERACTIONS REALIZED FROMRECTANGULAR POTENTIALS One-center point interactions can be obtained as the rectangular potentials of type (<ref>) are squeezed to a point.Particularly,based on equations (<ref>), one can materialize one-pointinteractions with finite bound state energies E_b for various types of the potential V(x) in the squeezing limit as l → 0. To accomplish this limit properly, first we have to derive the asymptotic behavior of k, γ and η for small l [according to expressions (<ref>), (<ref>) and (<ref>)] and then to findthe l → 0 limit of equations (<ref>) for bound state energies and finally connection matrix (<ref>). One of these point interactions is the δ-limit of the rectangular potentialsV_jj(x) =gl{[1 x_1 ≤ x ≤ x_2,;0]. → gδ(x) l→ 0,   g ∈,where g is a dimensionless strength constant of the δ-potential. This a particular case of the regularization of a delta function. More generally, for the approximation of the potential gδ(x) by regular functions in the one-dimensional Dirac equation, a scaled sequence l^-1h(x/l) with ∫_-∞^∞ h(ξ)dξ=g has been applied in paper <cit.>. It should be emphasized that, as proven in this work, the realizedpoint interactions do not depend on the shape ofthe function h(ξ).In our case, as shown below for some examples of the rectangular potential V(x), the l^-1-approximation is valid only for ground bound states and it does not `cover' excited states. To describe properly the excited states in a one-point approximation, another type of squeezing is used below, namely with the strength parameter V ∼ g/l^2m as l → 0.In the following, we referthis type of squeezingto as a `l^-2-limit'. Another type of squeezing to be used is the asymptotic representation V ∼g(m/l^2)^1/3, referred to as a `l^-2/3-limit'. Thus, using below the formulas for k,γ and η, we will calculate these squeezing limits of equations (<ref>) for several configurationsof the potentials V_jj(x), j=1, 2, 3.We perform thel → 0 limit at the origin of the (x_1, x_2)-plane,settingfirst x_1 → -0 and then x_2 → +0 as one of the ways ofapproaching the origin. In this case, a → 0 but the repeated limitof the ratio l/a is finite: lim_x_2 → +0lim_x_1 → -0(l/a) =2. Using then the second relation (<ref>), wave functions (<ref>) and (<ref>) are transformed to the form that involves only one arbitrary constant C_1: [ ψ^+(x)=C_1{[ col(ρ^-1,√(2) ,ρ) e^κ x,-∞ < x < 0,; col(-ρ^-1,√(2) ,- ρ) e^-κ x,0 < x < ∞ , ].    
E=E^+,; ψ^-(x)= C_1{[ col(ρ^-1,√(2) ,ρ) e^κ x,-∞ < x < 0,; col(ρ^-1, - √(2) ,ρ) e^-κ x,0 < x < ∞ , ].       E=E^-. ] Here, E = E_b^± are the l → 0 limit values of the solutions to equations (<ref>). Explicitly, using that ρ = (m-E)/κ and ρ^-1 =(m +E)/κ, the two-sided (at x = ± 0)boundary conditions for bound states can be represented in the following form:[ ψ^+(± 0)= C_1col( ∓ (m +E^+_b)/κ^+_b,√(2),∓ (m -E^+_b)/κ^+_b ),; ψ^-(± 0)= C_1col((m +E^-_b)/κ^-_b,∓√(2),(m -E^-_b)/κ^-_b ), ]where κ^±_b := κ(E_b^±). ThenΛ-matrix (<ref>) connects the squeezed boundary conditions ofthe components (ψ^±_1-ψ^±_3)(x) and ψ^±_2(x):([ (ψ_1^+ -ψ_3^+)(± 0);ψ_2^+(± 0) ])=C_1 ([ ∓ 2E^+_b /κ^+_b;√(2) ]), ([ (ψ_1^- -ψ_3^-)(± 0); ψ_2^- (± 0) ])= C_1( [ 2E^-_b /κ^-_b; ∓√(2) ]).Finally, we note that an infinite number of bound statesalso existsfor the potential of type III (V_11= V ∈∖{0}andV_22 = V_33≡ 0), as proven in <cit.>. However, in the limit as l → 0,for k [see relation (<ref>)] and γ we have the limits: k →√(2E (m+E)) and γ→√((m-E)/2E). Since both these expressions are finite and kl → 0,equations (<ref>)have no solutions, i.e., there are no bound states for the potential of type III in the squeezing limit.§.§ The δ-limitType P: The δ-limit of the bound states of type P is obtained immediately byreplacing the productVl in solutions (<ref>) and (<ref>) withthe strength g. In formula (<ref>), the solution can be combined inthe form ofthe two-valued periodic (increasing) discontinuous functionE = mE(g), where E(g) isconstructed from the pieceε(g) = ( [ ε^+(g); ε^-(g) ]) : = ( [ sin(g/2)0 ≤ g < π; - cos(g/2)0< g ≤π ])that repeats itselfforward and backward along the g-axis. The period of the function E(g) isπ, so that E(g +π)= E(g), g ∈. Furthermore, sincekl →√(β)|g| and η→ -sgn(g)√(2/β) [see equation (<ref>)], in the limit as l → 0, Λ-matrix (<ref>) reduces to the formΛ=( [ cos(√(β)g)      -√(2/β)sin(√(β)g); √(β/2)sin(√(β)g)       cos(√(β)g) ])where β >0. Using expressions (<ref>) for the squeezed energies E_b^±, one can check that matrix (<ref>)connects the two-sided boundary values of components (<ref>) at x =± 0.Type D: Similarly, replacing sgn(V) and Vl in bound state energy (<ref>) with sgn(g) and g, respectively, we obtain the δ-limit of the squeezed energies E_b^±. In this case, the connection matrix is described by the same formula (<ref>) where β <0. In a similar way, using equations (<ref>), one can check that theboundary values (<ref>) are connected by matrix (<ref>) withβ < 0.Type H: For realizing a point interaction that corresponds to the ground state, we use the δ-limit defined by (<ref>), setting V ∼ g/l in equations (<ref>). Hence,we have √(V/E -1)∼√(g/E l) and only the first of equations (<ref>) admits a finite solution in the limit as l → 0.Explicitly,this equation reduces toκ^+_b/E^+_b =2/g, having thesolution forthe bound state energy:E^+_b=E_0 = m g√(4 +g^2)thatcoincides exactly with formula (13) in <cit.>.Furthermore, in the limit as l → 0, we have kl → 0 and, according to definition (<ref>), ηsin(kl) → -√(2) g. Therefore,matrix (<ref>), connecting the two-sided boundary conditions (<ref>) for the ground state energy E_0 , becomesΛ = Λ_0=( [ 1    -√(2) g;0         1 ])=( [ 1    -2√(2) E_0/√(m^2 -E_0^2); 0             1 ]).Type W: Considerthe realization of the δ-limit for the pencils P_1 and P_2 with α_1 ≠ 0, α_2 = 0 and α_3 ≠ 0. Setting V ∼ g/l in expressions (<ref>) and (<ref>), we find thatk ∼√(-β gE/l) where β gE <0. 
Using this asymptotic representation in equations (<ref>), one can see that only the equation for E^- admits a solution in the l → 0 limit. Indeed, in this limit, the second equation (<ref>) reduces to κ/E^-=-β g/2, resulting to the ground state energyE^-_b = E_0 = - sgn(β g)m √(1+β^2 g^2/4) .Furthermore, η = √(2)E / k ∼√(2) E / √(-β gE/l) and therefore ηsin(kl) → 0, while -η^-1sin(kl) →β g/√(2) . Thus, the connection matrix becomesΛ =Λ_0=( [ 1         0; β g/ √(2)     1 ]) =( [ 1                    0; -√(2(m^2 -E_0^2))/E_0      1 ]). For the other configuration with α_1 ≡α > 0, α_2 =1 and α_3 =0, using the representation V ∼ g/l, we get k ∼√(-α (m+E)g/l) . Using this relation in equations (<ref>), we find that only the first equation for E^+ admits a finite limit, i.e., 2E^+ =gκ, which can be solved explicitly. Takingintoaccount that gE>0, the solution coincides with expression (<ref>). Furthermore, we have the limit ηsin(kl) → -√(2) g, resulting in the same connection matrix (<ref>). §.§ The l^-2/3-limit Type H: For realizing the point interactions that describe the series of bound states (<ref>), we use the asymptotic representation V ∼ g(m/l^2)^1/3with a dimensionless strengthg ∈∖{0}.Then, E_n → (α/nπ)^2g^3m defined on the intervals0< |g| < (nπ/α)^2/3 and kl → |α| √(g^3m/E) = nπ,n =1, 2, …, with odd n's for E^+_b and even n's for E_b^-.Due to these relations as well as equations (<ref>),using thatsin^2(kl/2) =1 if E_n=E^+_band cos^2(kl/2)=1 if E_n=E^-_b, one can use thefollowing representation:sin( |α| √(g^3mE) ) =2{[sin^2[(|α|/2) √(g^3m/E)][(|α|/2) √(g^3m/E)]; cos^2[(|α|/2) √(g^3m/E)] tan[ (|α|/2) √(g^3m/E)] ]. ∼2(-1)^n+1 2 κ V kE , where V/k → 0 as l → 0. On the other hand, η∼ - √(2) V/k and,as a result, ηsin(kl) → 0 and-η^-1sin(kl) →(-1)^n+1√(2(m^2 - E^2))/E as l → 0. Sincecos(kl) = (-1)^n, connection matrix (<ref>) becomes Λ = Λ_n = (-1)^n ([ 1                   0; -√(2(m^2 -E^2_n))/E_n       1 ]),   E_n = (α nπ)^2 g^3m,  n ∈. §.§ The l^-2-limit Type H: For realizing the point interactions that describe the excited bound states with energies (<ref>), we use the asymptotic representation V ∼ g/l^2m with a dimensionless strengthg ∈∖{0}. Then equations (<ref>) asymptotically become√(mEg) l∼ {[ tan[(κ/ 2)√(g/ mE) ]E=E^+ ,; -[(κ / 2)√(g / mE) ]E=E^- . ].In the limit as l → 0, both these asymptotic relations lead to the series of equationskl →κ√(g / mE) = nπ, n=1, 2, …, where E=E^+_b stands for even n's and E=E^-_b for odd n's. The solution of these equations with respect to E ∈ (-m,m) readsE_b^± = E_n =n^2 π^2 m2g ( √(1+ 4 g^2n^4π^4) -1)≃gmn^2π^2 ,   n ∈,supporting the 1/n^2 law only for the excited states.Note that E_n∈ (0,m) if g >0 and E_n∈ (-m,0) if g <0.Furthermore, from (<ref>) we have η∼ -(g/ κ l) √(2E/ mg) . On the other hand, using representation (<ref>), we obtainsin(κ√(gmE) )=2{[ tan[(κ / 2)√(g / mE) ]cos^2[(κ / 2)√(g / mE) ]; sin^2[(κ / 2)√(g / mE) ][(κ/ 2)√(g / mE) ] ]. ∼2(-1)^n√(mEg) l ,where even n's correspond to E^+_b and odd n's to E^-_b. As a result, in the limit as l → 0, we obtain ηsin(kl) → -2√(2)(-1)^n E_n/√(m^2 -E_n^2). Since cos(kl) → (-1)^n, taking for account matrix (<ref>),the connection matrix for all the bound states reads as followsΛ = Λ_n = (-1)^n ([ 1    -2√(2)E_n/√(m^2 -E^2_n);0                       1 ]),    n ∈∪{0},where E_n =E_b^+ for even n and E_n =E_b^- for odd n. 
Here, the ground state energy E_0 is given by expression (<ref>) and excited state energies by equations (<ref>).These energies are arranged as m > |E_0| > |E_1|> … > |E_n| >… forall g ∈∖{0}.Notice that in matrix (<ref>), for n=1,2, …, we have E_n / √(m^2 -E_n^2)≃g / n^2π^2 . Type W:To implement point interactions that describe the excited bound statesfor the pencils P_1 and P_2 with α_1, α_3 ≠ 0 and α_2 =0, we substitute the asymptotic representationV ∼ g/l^2m into equations (<ref>). As a result,these equations becomeκ√(-β gE/m) l∼ {[ -(√(-β gE/m)/ 2)E =E^+,; tan(√(-β gE/m)/ 2)E =E^-. ].In the limit as l → 0, from these equations we obtain the solutionE_b^± = E_n = - n^2 π^2 m β g ,   n ∈,where odd n's stand for E_b^+ and even n's for E_b^-. Using furtherasymptotic representation (<ref>), one can writesin√(-β g E m) =2{[ sin^2[√(-β gE/ m) / 2][√(-β gE/ m)/ 2]; tan[ √(-β gE/ m) / 2]cos^2[√(-β gE/m) / 2] ]. ∼2(-1)^nκ√(-β gE/m) l ,where odd n's correspond to E^+_b and even n's to E^-_b. Therefore, in the limit as l → 0, we have η^-1sin(kl) → (-1)^n √(2(m^2 -E_n^2)) /E_n . Since cos(kl) → (-1)^n, taking for account matrix (<ref>),the connection matrix for all the bound states reads as followsΛ = Λ_n = (-1)^n ([1                       0; - √(2(m^2-E_n^2)) /E_n       1 ]),    n ∈∪{0},where the bound state energies E_n are given by expressions(<ref>) and (<ref>) with E_n =E_b^+ for odd n and E_n =E_b^- for even n. Here E_0 is a ground state energy and the energies E_n with n=1, 2, … correspond to excited states. These energies are arranged as m > |E_0| > |E_1|> … > |E_n| > … for all g ∈∖{0}.Similarly, forthe case of the pencil P_1 with α_1 ≡α >0, α_2=1 and α_3 =0, owing to the relation V ∼ g/l^2m, we have k ∼√(-α(1 +E/m)g)/l with g<0. Further, the asymptotic representation of equations (<ref>) reads E √(α m(m+E))κ√(-g) l ∼ {[ -tan[ √(-α (1 +E/m)g) / 2 ] E =E^+,;[ √(-α (1 +E/m)g) / 2] E =E^-. ].In the limit as l → 0, from these equations, we obtain √(-α (1 + E/m) g) =nπ,  n =1, 2, …, where even n's stand for E^+ and odd n's for E^-. Solving the lastequation and taking for account that |E|<m, we get the series of excitedbound states with the energiesE_b^± =E_n = - ( 1+ n^2 π^2 α g) m, -∞ < g < - n^2π^22α ,   n ∈.These energies are detached successively from the upper threshold E=m. Using relations (<ref>) and (<ref>), one can prove that the bound state energiesare arranged in the orderE_0< E_1 < … E_n < …. Using further asymptotic representation (<ref>), one can writesin√(-α (1 +E/m)g) =2{[cos^2[√(-α (1 +E/m)g)/ 2] tan[√(-α (1 +E/m)g)/ 2]; sin^2[√(-α (1 +E/m)g)/ 2][√(-α (1 + E/m)g)/2 ] ].∼2(-1)^n+1E√(α m(m+E))κ√(- g) l ,where even n's correspond to E^+_b and odd n's to E^-_b. Next, in the limit as l → 0, we haveηsin(kl) →(-1)^n+1 2√(2)E_n/ √(m^2 -E_n^2) . Since cos(kl) →cos√(-α (1+E/m)g) = (-1)^n, taking for account matrix (<ref>),the connection matrix for all the bound states is the same as for the potential of type II given by matrix (<ref>).Here, the energies E_n with n=1, 2, … correspond to the excited states.Thus, the equations derived above for the bound state energies in the squeezing limit indicate that only those energy levels, which are stretched on the V-axis to infinity, as illustrated by figures from <ref> to <ref>, admit a point approximation. The levels with a finite support, which are shown in figures <ref>–<ref> and <ref>, are not appropriate for implementing point interactions. 
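As a simple consistency check of the squeezing limits for the type-II configuration (V_11 = V_33 = 0, V_22 = V), one can evaluate the closed-form point-interaction energies and verify the defining relations κ/E_0 = 2/g and κ√(g/(mE_n)) = nπ, together with the level ordering stated above. A short sketch (assuming NumPy and taking g > 0 for definiteness):

```python
import numpy as np

m, g = 1.0, 2.0                      # mass and dimensionless coupling, g > 0

def kappa(E):
    return np.sqrt(m * m - E * E)

# Ground state from the delta-limit (V ~ g/l): E_0 = m g / sqrt(4 + g^2)
E0 = m * g / np.sqrt(4.0 + g * g)
assert np.isclose(kappa(E0) / E0, 2.0 / g)        # defining relation kappa/E_0 = 2/g

# Excited states from the l^-2 limit (V ~ g / (l^2 m))
def En(n):
    a = n * n * np.pi ** 2 * m / (2.0 * g)
    return a * (np.sqrt(1.0 + 4.0 * g * g / (n * np.pi) ** 4) - 1.0)

# Each excited level satisfies kappa(E_n) * sqrt(g / (m E_n)) = n * pi ...
for n in range(1, 6):
    assert np.isclose(kappa(En(n)) * np.sqrt(g / (m * En(n))), n * np.pi)

# ... and the levels are ordered m > |E_0| > |E_1| > ..., approaching g m / (n pi)^2
levels = [E0] + [En(n) for n in range(1, 6)]
assert all(m > abs(a) > abs(b) for a, b in zip(levels, levels[1:]))
print([round(E, 4) for E in levels])
print([round(g * m / (n * np.pi) ** 2, 4) for n in range(1, 6)])
```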
The energies E_b^± in the squeezing limit become functions of the dimensionless strength constant g.The point interactions considered above are determined by the matrices that connect the two-sided boundary conditions for a wave functions ψ^±(x) at the origin x =± 0. To implement these interactions, we have applied three rates of squeezing as l → 0. One of these is the l^-1-limit resulting in the typical δ-interaction. In this limit, for the spectra of types Pand D, the connection matrix is given by (<ref>),where β >0 and β <0 correspond to perfectly periodic energies (<ref>)andto double-valued energies (<ref>), respectively, with sgn(V) and Vl substituted by sgn(g) and g. For the spectra of types H and W, the l^-1-limit leads to the existence of ground states, which are indicated in table <ref> with n=0.The l^-2/3- and l^-2-limits generate the countable sets of point interactions that describe the excited states in the H- and W-spectra. As indicated in table <ref>, for these interactions, there are two connection matricesΛ_n = ([1     0; 2χ_n   1 ])   ([ 1     2/χ_n;0      1 ]),   χ_n := -√((m^2 -E_n^2)/2) E_n ,  n ∈∪{0}.It should be noticed that the l^-2-limit has been applied in many publications (see, e.g., <cit.>, a few to mention), mainly for regularizing a potential in the form of the derivative of a delta function in the non-relativistic Schrödinger equation.Finally, knowing the bound state energies in the squeezing limit and therefore the values ρ(E_b^±), one can plot the eigenfunctions given by (<ref>). Here, we have restricted ourselves to the bound states of type P (see figure <ref>) and H (see figure <ref>). § CONCLUDING REMARKS The energy spectrum of the ordinaryone-dimensional non-relativistic Hamiltonian for a particle in a constantpotential field V(x) ≡ V ∈ is quite trivial: it is just the shift of a free-particle spectrum by the strength V. The spectrum of the one-dimensional pseudospin-one Hamiltonian with a constantthree-component potential V(x) =col(V_11, V_22,V_33) consists of three (upper, middle and lower) bands, which are described by cubic equation (<ref>). The structure of this spectrum crucially depends on the relative configuration of the strengths V_11, V_22, V_33 and all the possible forms of the bands are illustrated by figure <ref>. In this regard, each strength configuration can be associated with a vectorin the three-dimensional space (V_11, V_22,V_33) and only in the particular case of the line V_11 = V_22 = V_33≡ V in this space, free-particle spectrum (<ref>) with eigenfunctions (<ref>)is shifted by V, equally in each band, similarly to the situation with a non-relativistic Hamiltonian.Almost all the points in the (V_11,V_22, V_33)-space correspond to the energy spectrum, in which the middle band is dispersive. At some limiting points in this space, the middle band as a function of the wave number k shrinks to a line, the so-called flat band, as shown in panels (d) and (h)–(j) of figure <ref>. This set, shown in figure <ref>, consists of the two intersecting planes A and B, which aredefined by equations (<ref>) and (<ref>).The energy of the upper and lower dispersion bands on these planes are described by solutions (<ref>) and (<ref>).The corresponding eigenfunctions are given through the general formula (<ref>).To implement bound states in the pseudospin-one Hamiltonian, the components of the potential V(x) must be localized on the x-axis. 
To this end, we have chosen these components in the form of rectangles that represent a layer of thickness l = x_2-x_1, where x=x_1 and x=x_2 are arbitrary points. Then one can use the general solution (<ref>) for the interval x_1 ≤ x ≤ x_2, complementing it by the free-particle solution beyond this interval that decreases as |x| →∞ and using the matching conditions at the edges x_1 and x_2. Within this approach, a pair of general equations (<ref>) has been derived for finding bound state energies. The solutions to these equations are conditionally denoted as E^+ =E_b^+ for the first equation (<ref>) and E^- =E_b^- for the second one. The corresponding eigenfunctions ψ^+(x) and ψ^-(x) are illustrated by expressions (<ref>)–(<ref>) and figure <ref>. As demonstrated by figures from <ref> to <ref>, the structure of the bound state spectrum crucially depends on the configuration of the strengths V_11, V_22 and V_33 that determine the rectangular potentials. For simplicity, instead of these three independent strengths as vectors in the ℝ^3-space, we have restricted ourselves to the investigation of the two pencils of straight lines in ℝ^3 defined by equations (<ref>) and (<ref>), where only one strength parameter V is incorporated. Even on these particular sets, a whole variety of bound states has been proven to exist. Based on the asymptotic behavior of the solutions to equations (<ref>) in the limit as |V| →∞, one can single out the four types of bound states, which we call P, D, H and W in the present work. The energies for two of these types (P and H) have already been investigated earlier in paper <cit.>. In the present work, the study of these energies has been supplemented by the solutions with imaginary wave number k. Particularly, for the potentials with all the three strengths V_11, V_22 and V_33 ∝ V, the energy spectrum of the type P consists of two levels and the dependence on V is almost periodic (and exactly periodic in the limit as |V| →∞). Surprisingly, the energy spectrum of the type H consists of an infinite number of levels, resembling the hydrogen atom spectrum (E_n ∝ 1/n^2, n=1,2, …). It should be noticed that a successive cutoff of energy levels with the growth of the strength V is possible for the type H (compare figures <ref> and <ref>). In addition to the types P and H, we have examined the spectrum that consists of two levels for large V (type D) merging into a single level in the limit as |V| →∞. Another behavior of the bound-state energies, which has been observed in the present work, is a successive detachment of energy levels from the thresholds of the upper and lower continua E = ± m with increasing strength V (type W). This behavior resembles the energy spectrum of an ordinary potential well as its depth V tends to infinity at fixed width. The energy levels for this type have been shown to behave as E_n ∝ n^2, n=1,2, ….
Further, in each special case, setting x_1 → -0 and x_2 → +0, we have realized the following three families of one-center point interactions using the asymptotic behavior of V as l → 0: (i) V ∼ g/l (δ-limit),(ii) V ∼ g(m/l^2)^1/3 (l^-2/3-limit) and (iii) V ∼ g/l^2m(l^-2-limit), where g is a dimensionlesscoupling constant.In conclusion, it would be interesting to develop a more general approach forstudying bound states, which avoids the presentation of the three strengthsV_11, V_22 and V_33 through one parameter V. In this way, it could be possible to obtain additional types of bound state spectra. The investigation of the systems with a long-range Coulomb potential, instead of piecewise potentials, is of big interestas well. The study of the scattering problem in the presence of multiple point potentials, including resonance effects as well as bound states in the continuum, is alsoof great interest. Data availability statement The data that support the findings of this study are available upon reasonable request from the authors. AcknowledgmentsWe would like to thank the Armed Forces of Ukraine for providing security to perform this work. A.V.Z. acknowledges financial support from the National Academy of Sciences of Ukraine, Project No. 0123U102283. Y.Z. and V.P.G. acknowledge financial support by the National Research Foundation of Ukraine grant (2020.02/0051) `Topological phases of matter and excitations in Dirac materials, Josephson junctions and magnets'. Finally, we are indebted to the anonymous Referees for the careful reading of this paper, their questions and suggestions, resulting in the significant improvement of the paper. References 99Bradlyn Bradlyn B, Cano J, Wang Z, Vergniory M G, Felser C, Cava R J and Bernevig B A 2016 Beyond Dirac and Weyl fermions: Unconventional quasiparticles in conventional crystals Science 353 558Bercioux2009 Bercioux D,Urban D F, Grabert H and Häsler W 2009 Massless Dirac-Weyl fermions in a T_3 optical lattice Phys. Rev. A 80 063603Raoux2014 Raoux A, Morigi M, Fuchs J-N, Piéchon F and Montambaux G 2014 From dia- to paramagnetic orbital susceptibility of massless fermions Phys. Rev. Lett. 112 026402laf Leykam D, Andreanov A and Flach S 2018 Artificial flat band systems: from lattice models to experiments Adv. Phys.: X 3 1473052Tasaki1998 Tasaki H 1998 From Nagaoka’s ferromagnetism to flat-band ferromagnetism and beyond: An introduction to ferromagnetism in the Hubbard model Prog. Theor. Phys.99 489Cao2018 Cao Y, Fatemi V, Fang Ah, Watanabe K, Taniguchi T, Kaxiras E and Jarillo-Herrero P 2018 Unconventional superconductivity in magic-angle graphene superlattices Nature556 43Andrei2020 Andrei E Y andMacDonald A H 2020 Graphene bilayers with a twist Nature Materials 19 1265Illes2015 Illes E, Carbotte J P and Nicol E J 2015 Hall quantization and optical conductivity evolution with variable Berry phase in the α- T_3 model Phys. Rev. B 92 245410Kovacs2017 Kovacs A D, David G, Dora B and Cserti J 2017 Frequency-dependent magneto-optical conductivity in the generalized α- T_3 model Phys. Rev. B 95 035414Iurov2020 Iurov A, Zhemchuzhna L, Dahal D, Gumbs G and Huang D 2020 Quantum-statistical theory for laser-tuned transport and optical conductivities of dressed electrons in α- T_3 materials Phys. Rev. B 101 035129Biswas2016 Biswas T and Ghosh T K 2016 Dynamics of a quasiparticle in the α- T_3 model: role of pseudospin polarization and transverse magnetic field onzitterbewegung J. Phys.: Condens. 
Mattter 28 495302Islam2017 Islam Firoz S K and Dutta P 2017 Valley-polarized magnetoconductivity and particle-hole symmetry breaking in a periodically modulated α- T_3 lattice Phys. Rev. B 96 045418Oriekhov2020 Oriekhov D O and Gusynin V P 2020 RKKY interaction in a doped pseudospin-1 fermion system at finite temperature Phys. Rev. B 101 235162Roslyak2021 Roslyak O, Gumbs G, Balassis A and Elsayed H 2021 Effect of magnetic field and chemical potential on the RKKY interaction in the α- T_3 lattice Phys. Rev. B 103 075418Oriekhov2019 Gorbar E V, Gusynin V P and Oriekhov D O 2019 Electron states for gapped pseudospin-1 fermions in the field of a charged impurity Phys. Rev. B 99 155124Pottelberge2020 Van Pottelberge R 2020 Comment on `Electron states for gapped pseudospin-1 fermions in the field of a charged impurity'Phys. Rev. B 101 197102Zhang2022 Yi-Cai Zhang 2022 Wave function collapses and 1/n energy spectrum induced by a Coulomb potential in a one-dimensional flat band system Chin. Phys. B 31 050311Zhang2022JPB Yi-Cai Zhang and Guo-Bao Zhu 2022 Infinite bound states and hydrogen atom-like energy spectrum induced by a flat band J. Phys. B: At. Mol. Opt. Phys. 55, 065001Zhang2022PS Yi-Cai Zhang 2022 Infinite bound states and 1/n energy spectrum induced by a Coulomb-like potential of type III in a flat band system Phys. Scr. 97 015401Jakubsky2023_1 Jakubský V andZelaya K 2023 Lieb lattices and pseudospin-1 dynamics under barrier- and well-like electrostatic interactions Physica E: Low-dimensional Systems and Nanostructures 152 115738Jakubsky2023_2 Jakubský V andZelaya K 2023 Landau levels and snake states of pseudo-spin-1 Dirac-like electrons in gapped Lieb latticesJ. Phys.: Condens. Matter 35 025302Piechon2015 Piéchon F,Fuchs J-N, Raoux A and Montambaux G 2015 Tunable orbital susceptibility in α-T_3 tight-binding models J. Phys.: Conf. Ser. 603 012001Demkov1975 Demkov Y N and Ostrovskii V N 1975 Zero-Range Potentials and Their Applications in Atomic Physics (Leningrad: Leningrad University Press)Demkov1988 Demkov Y N and Ostrovskii V N 1988 Zero-Range Potentials and Their Applications in Atomic Physics (New York: Plenum)Albeverio2005 Albeverio S, Gesztesy F,Høegh-Krohn R and Holden H 2005Solvable Models in Quantum Mechanics (With an Appendix by Pavel Exner) 2nd revised edn(Providence: RI: American Mathematical Society: Chelsea Publishing)Albeverio1999 Albeverio S and Kurasov P 1999 Singular Perturbations of Differential Operators: Solvable Schrödinger-Type Operators (Cambridge: Cambridge University Press)ZZ2011 Zolotaryuk A V and Zolotaryuk Y 2011 Controlling a resonant transmission across the δ^'-potential: the inverse problem J. Phys. A: Math. Theor. 44 375305; 2012 Corrigendum: Controlling a resonant transmission across the δ^'-potential: the inverse problem J. Phys. A: Math. Theor. 45 119501ZZ2015 Zolotaryuk A V and Zolotaryuk Y 2015 A zero-thickness limit of multilayer structures: a resonant-tunnelling δ^'-potential J. Phys. A: Math. Theor. 48 035302Gusynin2022 Gusynin V P, Sobol O O, Zolotaryuk A V and Zolotaryuk Y 2022 Bound states of a one-dimensional Dirac equation with multiple delta-potentials Low Temp. Phys. 48 1022Ibarra2023 Ibarra-Reyes M, Pérez-Álvarez R and Rodríguez-Vargas I 2023 Transfer matrix in 1D Dirac-like problemsJ. Phys.: Condens. Mattter 35 395301zpi Zolotaryuk A V, Christiansen P L and Iermakova S V 2006 Scattering properties of point dipole interactions J. Phys. A: Math. Gen.39 9329Golovaty2009Golovaty Y D and Man'ko S S 2009 Ukr. Math. 
Bull.6 169 (in Ukrainian);e-print arXiv:0909.1034v2 [math.SP]Golovaty2013 Golovaty Y 2013 1D Schrödinger operators with short range interactions: Two-scale regularization of distributional potentials Integr. Equ. Oper. Theor. 75 341ZZ2014 Zolotaryuk A V and Zolotaryuk Y 2014 Intrinsic resonant tunneling properties of the one-dimensional Schrödinger operator with a delta derivative potential Int. J. Mod. Phys. B 28 1350203ZZ2021 Zolotaryuk A V and Zolotaryuk Y 2021 Scattering data and bound states of a squeezed double-layer structure J. Phys. A: Math. Theor. 54 035201Tusek2020 Tušek M 2020 Approximation of one-dimensional relativistic point interactions by regular potentials revised Lett. Math. Phys. 110 2585Seba Šeba P 1986 Some remarks on the δ'-interaction in one dimension Rep. Math. Phys. 24 111 Griffiths Griffiths D j 1993 Boundary conditions at the derivative of a delta function J. Phys. A: Math. Gen. 26 2265Christiansen Christiansen P L, Arnbak N C, Zolotaryuk A V, Ermakov V N and Gaididei Y B 2003 On the existence of resonances in the transmission probability for interactions arising from derivatives of Dirac’s delta function J. Phys. A:Math. Gen. 36 7589
http://arxiv.org/abs/2310.17934v1
{ "authors": [ "A. V. Zolotaryuk", "Y. Zolotaryuk", "V. P. Gusynin" ], "categories": [ "quant-ph", "cond-mat.other" ], "primary_category": "quant-ph", "published": "20231027071758", "title": "Bound states and point interactions of the one-dimensional pseudospin-one Hamiltonian" }
A general learning scheme for classical and quantum Ising machines
Ludwig Schmid^1, Enrico Zardini^2, Davide Pastorello^3,4
[1] Chair for Design Automation, Technical University of Munich, Arcisstrasse 21, Munich, 80333, Bavaria, Germany ([email protected])
[2] University of Trento, Department of Information Engineering and Computer Science, via Sommarive 9, Povo, 38123, Trento, Italy ([email protected])
[3] University of Bologna, Department of Mathematics, Piazza di Porta San Donato 5, 40126 Bologna, Italy
[4] TIFPA-INFN, via Sommarive 14, Povo, 38123, Trento, Italy ([email protected])
==================================================================
Abstract. An Ising machine is any hardware specifically designed for finding the ground state of the Ising model. Relevant examples are coherent Ising machines and quantum annealers. In this paper, we propose a new machine learning model that is based on the Ising structure and can be efficiently trained using gradient descent. We provide a mathematical characterization of the training process, which is based upon optimizing a loss function whose partial derivatives are not explicitly calculated but estimated by the Ising machine itself. Moreover, we present some experimental results on the training and execution of the proposed learning model. These results point out new possibilities offered by Ising machines for different learning tasks. In particular, in the quantum realm, the quantum resources are used for both the execution and the training of the model, providing a promising perspective in quantum machine learning.
Keywords: machine learning, Ising model, quantum annealing, quantum machine learning
§ INTRODUCTION
Machine learning models are algorithms that provide predictions about observed phenomena by extracting information from a set of collected data (the training set). In particular, parametric models capture all relevant information within a finite set of parameters, with the set being independent of the number of training instances <cit.>. A celebrated example is represented by artificial neural networks <cit.>. In the context of quantum computers, a common approach to machine learning is to employ variational quantum circuits, which can be trained by backpropagation as done with classical feedforward neural networks <cit.>. In addition to gate-based quantum computing, quantum annealing has also been considered to develop machine learning algorithms <cit.>. In any case, a crucial point in quantum machine learning is the implementation of quantum procedures for model training as alternatives to classical methods. An example in this sense is the quantum support vector machine, trained by running the HHL quantum algorithm <cit.>, which, however, presents the shortcoming of an impractical implementation on the currently available quantum devices. Therefore, a general challenge in quantum machine learning is to define learning schemes that can be efficiently implemented on quantum machines of the Noisy Intermediate-Scale Quantum (NISQ) era <cit.>. This is the motivation behind the present proposal of a learning model for quantum annealers in which the quantum resources are used both in the model execution and in the training process. The obtained theoretical and experimental results apply also to classical implementations of the model.
Indeed, the key aspect of the training and execution of the proposed learning mechanism is the computation of the ground state of the Ising model, which can, in principle, be solved using classical or quantum procedures. An Ising machine can be considered a specific-purpose computer designed to return the absolute or approximate ground state of the Ising model. The latter is described by the energy function of a spin glass system under the action of an external field, namely, 𝖤(z)=∑_i=1^N θ_i z_i+∑_(i,j)Γ_ij z_i z_j, with z ∈{-1,1}^N, θ_i∈ℝ, and Γ_ij∈ℝ, where the sum ∑_(i,j) is taken over the pairs of connected spins, counting each pair only once. The ground state is the spin configuration z^*∈{-1,1}^N that minimizes the function (<ref>). Therefore, in practice, an Ising machine solves a combinatorial optimization problem that can be represented as a quadratic unconstrained binary optimization (QUBO) problem, which is an NP-hard problem, by means of the change of variables x_i=(z_i+1)/2∈{0,1}. In particular, an Ising machine can be an analog computer that evolves toward the Ising ground state due to a physical process like thermal or quantum annealing. Alternatively, it can also be implemented on a digital computer in terms of simulated annealing. Ising machines are conceptually related to Boltzmann machines in the sense that they are both defined in terms of the Ising model, with couplings among spins and the action of an external field. In the case of a Boltzmann machine, the coefficients θ and Γ of the energy function (<ref>) are tuned so that, by sampling the spin configuration over the state of the system at thermal equilibrium (at a finite temperature T), a probability distribution resembling an input distribution defined on the training set <cit.> is generated. In detail, the output distribution of a Boltzmann machine is given by p_T(z)=Z^-1 exp[-𝖤(z)/k_BT], where Z:=∑_z exp[-𝖤(z)/k_BT] is the partition function and k_B is the Boltzmann constant. Usually, only a subset of spins is sampled, the so-called visible nodes, and the output distribution is given by the marginal distribution of (<ref>). Instead, in the ideal case, the output of an Ising machine is deterministic and corresponds to the absolute minimum of (<ref>). However, in a realistic scenario in which the Ising machine operates by thermal annealing, the output is probabilistic and distributed according to (<ref>) with a value of T as low as possible.
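For concreteness, the energy function and the spin-to-binary change of variables can be written down in a few lines of Python. This is a minimal illustrative sketch; the array conventions (an upper-triangular coupling matrix and NumPy vectors for θ and z) are assumptions made here rather than prescriptions of the text.

import numpy as np

def ising_energy(z, theta, Gamma):
    """E(z) = sum_i theta_i z_i + sum_(i,j) Gamma_ij z_i z_j for z in {-1, 1}^N.

    Gamma is assumed strictly upper triangular so that each connected pair (i, j)
    is counted only once, as in the definition above.
    """
    z = np.asarray(z, dtype=float)
    return float(theta @ z + z @ Gamma @ z)

def spins_to_binary(z):
    """Change of variables x_i = (z_i + 1) / 2 mapping {-1, 1} to {0, 1} (QUBO form)."""
    return (np.asarray(z) + 1) // 2

# Tiny example with N = 3 spins.
theta = np.array([0.5, -1.0, 0.2])
Gamma = np.zeros((3, 3))
Gamma[0, 1], Gamma[1, 2] = -0.8, 0.3      # only the pairs (0, 1) and (1, 2) are coupled
z = np.array([1, -1, 1])
print(ising_energy(z, theta, Gamma), spins_to_binary(z))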
The update step (δΓ_ij) is given by <cit.>: δΓ_ij=-η(⟨ z_iz_j⟩ - ∑_ v p_data( v)⟨ z_iz_j⟩_ v) i,j=1,...,N,where η>0 is the learning rate (user-specified), the sum is taken over the visible nodes v, p_data is the input distribution, ⟨ ⟩ is the Boltzmann average, and ⟨ ⟩_ v is the Boltzmann average with clamped visible nodes. In other words, both the training and the execution of a Boltzmann machine are performed by sampling the units of the network at thermal equilibrium. A quantum version of the Boltzmann machine has also been proposed <cit.>, and the simulations have shown that the presence of a transverse field Hamiltonian improves the training process with respect to the classical model, generating distributions that are closer to the input one in terms of the Kullback-Liebler divergence. This paper adopts a similar viewpoint for training an Ising machine. After defining a parametric predictive model based on the ground state of the Ising model, we prove that it can be trained by gradient descent of a mean squared error loss function, executing the model itself to obtain the gradient estimates. In particular, the structure of the model does not require that the Ising machine returns the true ground state with infinite precision, and a suboptimal output works for training and executing the predictive model. In addition, our results apply to both classical and quantum machines. However, in the second case, the impact may be more significant since the quantum annealing resources are also exploited for the training process. In this sense, the purpose is similar to that of the parameter-shift rule, which is used in gate-based quantum computing to train a parametric quantum circuit without explicitly calculating the partial derivatives <cit.>. The paper is structured as follows: in <ref>, we introduce generalities and elementary notions about the Ising model and Ising machines, with a particular focus on quantum annealing; <ref> deals with the proposed parametric learning model, to be executed by an Ising machine, and the main theoretical result of the paper, i.e., the proof that the model can be trained by running the Ising machine itself; in <ref>, an empirical evaluation of the proposed machine learning method is provided; in <ref>, we discuss the perspectives of the proposal, and we draw our conclusions on the proposed parametric model. § ISING MACHINESThis section introduces the formal definition of the Ising model and the concept of using specific Ising machines to solve the corresponding groundstate problem. Afterward, we briefly describe the two Ising machines employed in this work, namely simulated and quantum annealing.The Ising model is a mathematical description extensively utilized in the study of ferromagnetism. Renowned for its versatility and simplicity, it stands as a fundamental paradigm in the domain of statistical mechanics <cit.>. In its general formulation, the Ising model is defined on a graph (V,E), wherein each vertex represents a discrete variable z_i∈{-1,1}. These variables correspond to spins, with associated biasesθ_i ∈ denoting the inclination of each spin toward one of the two available values. Furthermore, the weighted edges Γ_ij∈ connecting two spins i and j define the coupling dynamics between the spins, indicating their preference to align or oppose each other in value. This graph structure is illustrated in <ref>. 
The total energy of a spin configuration z ∈{-1,1}^|V| is expressed as 𝖤(θ, Γ, z)=∑_i=1^|V| θ_i z_i+∑_(i,j)∈ E Γ_ij z_i z_j = θ· z + z^T Γ z, where the biases θ_1,...,θ_|V| ∈ ℝ and the couplings Γ_ij ∈ ℝ ∀ (i,j) ∈ E are conveniently consolidated into the vector θ and the matrix Γ (with Γ_ij=0 when (i,j)∉E), respectively. Realistically, the values of the parameters are bounded. Hence, it is possible to assume that biases and couplings take values in compact intervals of ℝ. Within the realm of statistical physics, these quantities are typically referred to as the external magnetic field strength and spin interactions due to their fundamental roles in the physical manifestation of the Ising model. An Ising machine can be defined as a non-von Neumann computer for solving combinatorial optimization problems <cit.>. More precisely, its input is represented by the energy function of the Ising model (<ref>), with biases and coupling terms properly initialized. The machine effectively operates by minimizing the energy function and providing the optimal spin configuration z^* as the output. Actually, the quest to determine the ground state of an Ising model is of significant importance, as any problem within the NP complexity class can be formulated as an Ising problem with only a polynomial increase in complexity <cit.>. An elementary and abstract definition of an Ising machine, motivated by the general approach adopted in this paper, is the following: Given the energy function defined in (<ref>), an (abstract) Ising machine is any map (θ,Γ) ↦ z^* := argmin_z 𝖤(θ, Γ, z). Additionally, we can also consider the minimum value of the energy 𝖤_0(θ, Γ) := 𝖤(θ,Γ, z^*) as the output of an Ising machine. This ground state energy of the Ising model is obtained by substituting the spin configuration z^* into (<ref>). In this context, the Ising machine consistently yields a numerical result with a negative sign. An illustration of an Ising machine that finds the ground state of a small Ising model is shown in <ref>. Relevant examples of Ising machines as specific-purpose hardware devices are quantum annealers <cit.> or coherent Ising machines with optical processors <cit.>. However, an Ising machine can also be simulated on a classical digital computer. In this respect, simulated annealing is a standard approach and addresses the Ising model as a combinatorial optimization problem. In more detail, simulated annealing is a probabilistic metaheuristic inspired by the analogy with the controlled cooling process observed in physical materials <cit.>. The algorithm employs stochastic acceptance criteria, resembling a Boltzmann probability, to navigate the solution space and escape local optima. Over time, usually indicated by a temperature parameter T that mimics the cooling process, less favorable moves are increasingly rejected. In practice, simulated annealing employs random search and local exploration to converge toward near-optimal or optimal solutions. However, although the algorithm is easy to implement and robust from a theoretical point of view, it may present a slow convergence rate <cit.>. A promising alternative path is the development of analog platforms like coherent Ising machines. They represent optical parametric oscillator (OPO) networks in which the collective mode of oscillation beyond a certain threshold corresponds to an optimal solution for a given large-scale Ising model <cit.>.
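A brute-force realization of the abstract Ising machine of this definition, feasible only for a small number of spins, may be sketched as follows. This is illustrative only; the exhaustive search merely stands in for any annealing-based solver.

import itertools
import numpy as np

def brute_force_ising_machine(theta, Gamma):
    """Return (z*, E_0) = (argmin_z E(theta, Gamma, z), min_z E(theta, Gamma, z)).

    Exhaustive search over all 2^|V| spin configurations; meant only to
    illustrate the abstract definition, not to be efficient.
    """
    n = len(theta)
    best_z, best_E = None, np.inf
    for z in itertools.product((-1, 1), repeat=n):
        z = np.array(z, dtype=float)
        E = float(theta @ z + z @ Gamma @ z)
        if E < best_E:
            best_z, best_E = z, E
    return best_z, best_E

theta = np.array([0.2, -0.5, 0.1, 0.4])
Gamma = np.triu(np.random.default_rng(0).normal(size=(4, 4)), k=1)   # random upper-triangular couplings
z_star, E0 = brute_force_ising_machine(theta, Gamma)
print(z_star, E0)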
Nevertheless, in the experimental part we have considered only simulated and quantum annealing. Quantum annealing is a type of heuristic search used to solve optimization problems <cit.>. The procedure is implemented by the time evolution of a quantum system toward the ground state of a problem Hamiltonian. More precisely, let us consider the time-dependent HamiltonianH(t)=γ(t) H_D+H_P t≥ 0,where H_P is the problem Hamiltonian, H_D is the transverse field Hamiltonian, and γ:^+→ is a decreasing function. Roughly speaking, H_D gives the kinetic term inducing the exploration of the solution landscape by means of quantum fluctuations, and γ attenuates the kinetic term driving the system toward the ground state of H_P. Quantum annealing can be physically realized by considering a network of qubits arranged on the vertices of a graph (V,E), with | V|=n and whose edges E represent the couplings among the qubits. In detail, the problem Hamiltonian is defined as the following self-adjoint operator on the n-qubit Hilbert space 𝖧=(ℂ^2)^⊗ n:H_P=∑_i∈ Vθ _i σ_z^(i) +∑_(i,j)∈ EΓ_ijσ_z^(i)σ_z^(j),with real coefficients θ_i, Γ_ij, which are identified again as biases and couplings due to their similar role in the Ising model. In the computational basis, the 2^n× 2^n matrix σ_z^(i) acts locally as the Pauli matrixσ_z=( 1 00 -1 )on the i-th tensor factor and as the 2× 2 identity matrix on the other tensor factors. In fact, the eigenvectors of H_P form the computational basis of 𝖧, and the corresponding eigenvalues are the values of the classical energy function (<ref>). On the other hand, for the transverse field Hamiltonian a typical form, isH_D=∑_i∈ Vθ _i σ_x^(i),where the local operator σ_x^(i) is defined in a similar way to σ_z^(i) in terms of the Pauli matrixσ_x=( 0 11 0 ). H_D does not commute with H_P and provides the unbiased superposition of all the conceivable solutions as the system initial state. Eventually, it is worth highlighting that quantum annealing is related to adiabatic quantum computing (AQC) as the solution of a given problem can be encoded into the ground state of a problem Hamiltonian. However, the two notions do not coincide. Indeed, in quantum annealing, the quantum system is not assumed to be isolated; therefore, it can be characterized by a non-unitary evolution. Another difference is that, in quantum annealing, the entire computation is not required to take place in the instantaneous ground state of the time-varying Hamiltonian like in AQC <cit.>.§ THE PROPOSED MODELThis section formally introduces the proposed parametric model, followed by an in-depth discussion on the training using gradient descent and the estimation of the relevant partial derivatives of a quadratic loss function. The final part presents some practical considerations required to operate and train the model in real-world scenarios. §.§ DefinitionIn the context of supervised learning, the goal of an algorithm is to approximate a function f: X → Y given a training set {(x_1,f(x_1)),...,(x_N,f(x_N))}, which is a collection of elements in the set X with the corresponding values of f. An approximation of f can be obtained through a parametric function after an optimal choice of its parameters, generalizing the information encoded into the training set. In fact, the notion of a parametric model is closely related to the existence of a parametric function that can be used to approximate the target function. Let X and Y be non-empty sets respectively calledinput domain andoutput domain. 
A (deterministic) parametric model is a function x ↦ y=F(x|Γ),  x∈ X, y ∈ Y, with Γ being a set of real parameters. In practice, given a training set of input-output pairs, the task consists in finding the parameters Γ such that the model assigns the correct or approximately correct output, with high probability, to any previously unseen input. The parameters are typically determined by optimizing a loss function such as ℒ(Γ)=1/N∑_i=1^N 𝖽(y_i,F(x_i|Γ)), where 𝖽 is a metric defined over Y, and the procedure is commonly referred to as training. A preliminary depiction of the general problem considered in this paper is the following: given a real-valued function f:X→ℝ, with X⊂ℝ^n and n ∈ℕ, the objective consists in training a predictive model F that approximates the original function f within the supervised learning framework. This function approximation task encompasses a wide range of conventional machine learning endeavors such as regression and classification. In particular, the proposed parametric model is defined over the concept of Ising machines as introduced in <ref>. The input information is encoded into the biases θ of an Ising model, while the adjustable parameters are represented by the couplings Γ of (<ref>). The Ising machine is then used to find the ground state of the Ising model, and the corresponding ground state energy is used as the model output. Note that the ground state energy invariably assumes a negative value, and the magnitude of the input biases significantly influences its absolute magnitude. To account for this, we introduce an ancillary scaling factor denoted as λ and an energy offset indicated as ϵ. This yields the subsequent formulation of the model. Given an Ising machine, an input vector θ=(θ_1, …,θ_n)∈ X⊂ℝ^n, and the parameters {Γ_ij} with i,j = 1 … n (the nonzero Γ_ij are specified by the topology graph of the machine), one can define a parametric model F based on the ground state energy of an Ising model as F(θ|Γ,λ, ϵ) := λ min_z∈{-1,1}^n 𝖤(θ,Γ, z) + ϵ = λ 𝖤_0(θ, Γ) + ϵ, where λ∈ℝ and ϵ∈ℝ are additional tunable parameters that do not influence the Ising model energy. The model definition reveals a general neural approach in the sense that data are represented by the biases of the spins, which can be associated with neurons, and the parameters are the weights attached to the connections between spins (neurons). It is worth noting that, for the model execution, there is no requirement that the Ising machine returns the true ground state. More precisely, the fact that an approximated ground state does not match the exact solution of the combinatorial problem underlying the minimization is not a severe drawback for the learning process. Indeed, assuming that the deviation of the energy output from 𝖤_0 is systematic (e.g., due to the finite precision of the Ising machine), this deviation becomes a characteristic of the model itself, and the training procedure accordingly provides optimized parameters. Despite its simplicity, the model presents interesting training properties that we mathematically characterize in the next section.
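In code, the model reduces to a thin wrapper around whatever Ising machine is available. The following minimal sketch assumes a callable that returns the pair (z^*, 𝖤_0); this interface is a convention chosen here for illustration, not one fixed by the text.

import numpy as np

def model_output(theta, Gamma, lam, eps, ising_machine):
    """F(theta | Gamma, lam, eps) = lam * E_0(theta, Gamma) + eps.

    `ising_machine(theta, Gamma)` is any callable returning (z*, E_0): a
    simulated-annealing routine, a quantum annealer wrapper, or the brute-force
    search sketched earlier. The model does not require the exact ground state.
    """
    _, E0 = ising_machine(theta, Gamma)
    return lam * E0 + eps

# Example (assuming `brute_force_ising_machine` from the previous sketch is in scope):
# y_hat = model_output(theta, Gamma, lam=1.0, eps=0.0, ising_machine=brute_force_ising_machine)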
Given the training set 𝒟 = {(θ^(a), y^(a))}_a=1,...,N, with y^(a)=f(θ^(a)), where f:X→ℝ, with X⊂ℝ^n, is an unknown function to approximate, the model (<ref>) can be trained by minimizing the MSE loss function ℒ(Γ,λ,ϵ)=1/N∑_a=1^N [F(θ^(a)|Γ,λ,ϵ)-y^(a)]^2. Our objective is to address this minimization task employing a gradient descent approach, iteratively updating the parameters Γ, λ, and ϵ by taking steps in the direction opposite to the gradient of the loss function ℒ: δΓ=-η∇_Γℒ,  δλ=-η ∂ℒ/∂λ,  δϵ=-η ∂ℒ/∂ϵ, where η>0 is the learning rate, which controls the optimization step size. Let us remark that each parameter is assumed to take values in a compact interval of ℝ; consequently, the parameter space is a hyperrectangle. On one hand, the partial derivatives of ℒ with respect to λ and ϵ are well-defined and trivial to calculate. On the other hand, the following theorem, which provides the update rules for the optimization of ℒ by gradient descent, implies that the gradient ∇_Γℒ is defined almost everywhere in the parameter hyperrectangle. Let F be the parametric model defined in (<ref>), 𝒟 = {(θ^(a), y^(a))}_a=1,...,N be a training set for F, ℒ be the MSE loss function defined in (<ref>), and η>0 be the learning rate. Then, the partial derivatives of F with respect to the couplings Γ are defined almost everywhere in the parameter space, and the update rules for Γ, λ, ϵ for the gradient descent of ℒ are: Γ_ij^(k+1)=Γ_ij^(k)- η 2λ^(k)/N∑_a=1^N[λ^(k) 𝖤_0(θ^(a), Γ^(k)) + ϵ^(k) -y^(a)] z^*_i z_j^*, λ^(k+1) =λ^(k)- η 2/N∑_a=1^N[λ^(k) 𝖤_0(θ^(a), Γ^(k)) + ϵ^(k) -y^(a)] [ ∑_i=1^n θ^(a)_i z^*_i+ ∑_(i,j)∈ E Γ_ij^(k) z^*_i z^*_j ], ϵ^(k+1) =ϵ^(k)-η 2/N∑_a=1^N[λ^(k) 𝖤_0(θ^(a), Γ^(k)) + ϵ^(k) -y^(a)], where Γ^(k), λ^(k), ϵ^(k) are the values of the parameters within the k-th iteration of the gradient descent, and z^* = argmin_z 𝖤(θ^(a),Γ^(k), z). By direct calculation, the partial derivative of F with respect to Γ_ij is ∂ F(θ|Γ,λ,ϵ)/∂Γ_ij=λ ∂/∂Γ_ij( ∑_i=1^n θ_i z^*_i+∑_(i,j)Γ_ij z^*_i z^*_j ) = λ z_i^* z_j^*, where z_i^* and z_j^* are the i-th and j-th components of z^* = argmin_z 𝖤(θ,Γ, z), respectively. Since the optimal spin configuration z^* also depends on Γ (and θ), we should consider the derivatives ∂ z^*_l/∂Γ_ij for l=1,...,n in the final step outlined in (<ref>). However, it must be noted that the function z^*_l=z^*_l(θ,Γ) is piecewise constant. Hence, its derivative is zero almost everywhere in its domain, and the remaining points, corresponding to spin flips of z^*_l, turn out to be points of non-differentiability of z^*_l(θ,Γ). Substituting (<ref>) into (<ref>), we obtain the following update step (δΓ_ij) for the MSE loss function (<ref>): δΓ_ij=-η ∂ℒ/∂Γ_ij =-η 2/N∑_a=1^N [F(θ^(a)|Γ,λ,ϵ)-y^(a)] ∂ F/∂Γ_ij=-η 2λ/N∑_a=1^N[F(θ^(a)|Γ,λ,ϵ)-y^(a)] z^*_i z_j^*=-η 2λ/N∑_a=1^N[λ 𝖤_0(θ^(a), Γ) + ϵ -y^(a)] z^*_i z_j^*. Therefore, the parameter update rule for the (k+1)-th iteration turns out to be Γ_ij^(k+1)=Γ_ij^(k)- η 2λ^(k)/N∑_a=1^N[λ^(k) 𝖤_0(θ^(a), Γ^(k)) + ϵ^(k) -y^(a)] z^*_i z_j^*, wherein we have omitted the explicit dependence of z_i^* and z_j^* on a and k for the sake of brevity of notation. The update rules for λ and ϵ can be derived analogously. Specifically, the partial derivatives of F with respect to λ and ϵ are ∂ F(θ|Γ,λ,ϵ)/∂λ=∑_i=1^n θ_i z_i^*+∑_(i,j)Γ_ij z^*_i z^*_j,  ∂ F(θ|Γ,λ,ϵ)/∂ϵ=1. Then, the claims (<ref>) and (<ref>) follow. In this way, the model parameters can be optimized for a certain number of steps N_epochs. The complete training process is described as pseudocode in <ref> and illustrated as a flow diagram in <ref>.
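The update rules of the theorem translate directly into a single training step. The sketch below is illustrative; an upper-triangular coupling convention is assumed, and `ising_machine(θ, Γ)` is any callable returning (z^*, 𝖤_0) as in the earlier sketches.

import numpy as np

def training_step(data, Gamma, lam, eps, eta, ising_machine):
    """One gradient-descent step on the MSE loss, following the update rules above.

    `data` is a list of pairs (theta, y); `ising_machine(theta, Gamma)` returns
    (z_star, E0). Gamma, lam and eps are updated simultaneously from the same samples.
    """
    N = len(data)
    dGamma = np.zeros_like(np.asarray(Gamma, dtype=float))
    dlam, deps = 0.0, 0.0
    for theta, y in data:
        z_star, E0 = ising_machine(theta, Gamma)
        residual = lam * E0 + eps - y           # F(theta) - y
        dGamma += (2 * lam / N) * residual * np.outer(z_star, z_star)
        dlam += (2 / N) * residual * E0         # dF/dlam = E_0(theta, Gamma)
        deps += (2 / N) * residual              # dF/deps = 1
    dGamma = np.triu(dGamma, k=1)               # keep only the couplings (i, j) with i < j
    return Gamma - eta * dGamma, lam - eta * dlam, eps - eta * deps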
In particular, for each training step k, the model is evaluated on each (θ^(a),y^(a)) pair in the training set 𝒟 and the parameters are updated according to <ref>. The trained model is defined by the final iteration as F_model(θ) = F(θ|Γ^N_epochs,λ^N_epochs,ϵ^N_epochs) .Therefore, the training process bears similarities to that of a neural network but with a noteworthy distinction. Indeed, in our model, the conventional backpropagation step for calculating the partial derivatives is replaced by the Ising machine computation of _0 and ^*. In particular, we propose the usage of quantum annealing as a well-suited Ising machine, which serves a dual purpose: executing the model according to (<ref>) and facilitating the model training through the iterative assessment of the loss function gradient. In detail, the spin configuration ^*, retrieved from the annealer and representing the ground state of the qubit network, can be used to compute the parameter adjustments according to (<ref>), (<ref>) and (<ref>). Instead, the corresponding energy value is used to compute the model prediction.A model trained in this manner possesses the capability to predict inputs beyond those present in 𝒟. Analogously to other machine learning models, this rests upon the expectation that, if the model is trained on an extensive dataset, it can assimilate and generalize from those examples, ultimately serving as an approximation of the original function within a certain value range. Moreover, although the Ising energy (<ref>) depends only linearly on the input vector θ, determining the minimum energy entails a complex interplay between the input and the model parameters Γ. Consequently, an open theoretical question regarding the class of functions that can be approximated through the proposed methodology arises. In other words, given an Ising model, what is its expressibility in terms of ground state energies by varying only the qubit couplings? From a practical perspective, the limitations of the quantum annealer architecture (number of qubits, topology connectivity, value bounds for θ and Γ) impose additional obvious constraints.§.§ Hidden spinsIn the proposed model, assuming a complete topology graph, the number of tunable parameters Γ_ij scales quadratically with respect to the input dimension n. In practice, the number of model parameters is intrinsically fixed by the input dimensionality, akin to a neural network featuring only input and output layers. In the neural network scenario, to enhance the model expressiveness, the number of parameters is typically augmented by introducing additional hidden layers. In a similar way, we consider additional hidden spins, represented by additional nodes in the topology graph. These additional spins increase the number of couplings and, therefore, the number of parameters of the model. This is accomplished by adding a preprocessing step, h_pre: ^n →^n_total,mapping the original input vector θ from the feature space ^n to a higher-dimensional space characterized by n_total = n + n_hidden dimensions, with n_hidden representing the number of additional hidden spins. An illustration of this preprocessing step and the increase in the number of coupling parameters is given in <ref>.The preprocessing step does not affect the training process. Indeed, the model can still be trained as described in <ref>. Instead, the choice of the preprocessing function exerts a significant influence on the model's performance. 
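For completeness, the overall loop corresponding to the pseudocode referenced above, including an optional preprocessing hook h_pre, might be sketched as follows. This is illustrative only: it reuses the training_step routine shown after the theorem, and the zero initialization of Γ and the identity default for h_pre are assumptions made here.

import numpy as np

def train(data, n_total, lam, eps, eta, n_epochs, ising_machine, h_pre=lambda t: t):
    """Full training loop: preprocess the inputs once, then repeat the gradient step n_epochs times.

    `training_step` is the single-step routine sketched after the theorem above;
    `h_pre` maps raw inputs to vectors of length n_total (identity by default).
    Returns the trained model F_model as a callable.
    """
    data = [(h_pre(np.asarray(theta, dtype=float)), y) for theta, y in data]
    Gamma = np.zeros((n_total, n_total))        # couplings Gamma^(0) initialized to zero
    for _ in range(n_epochs):
        Gamma, lam, eps = training_step(data, Gamma, lam, eps, eta, ising_machine)
    return lambda theta: lam * ising_machine(h_pre(np.asarray(theta, dtype=float)), Gamma)[1] + eps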
For instance, let us consider a trivial preprocessing procedure that appends zero values to the input vector in order to reach the desired dimension. Although this approach would increase the number of model parameters, the hidden spins would be indistinguishable from each other, resulting in a very similar learning behavior and making them redundant. In contrast, initializing the additional dimensions with random values would mitigate this issue, but these values may overshadow the original input, especially if n_hidden≫ n. In this work, we propose and evaluate a first simple scheme to initialize additional spins based on a constant real-valued offset. This offset initialization approach is defined as θ∈^n→ h_offset(θ) = [ θ; θ + 1 · d; ⋮; θ + (l-1) · d ]∈^n_total,where d ∈^n, l ∈ℤ^+, and n_total = ln (i.e., n_total is a multiple of n). This corresponds to a repeated concatenation of the original input θ with an increasing real-valued offset d. § EMPIRICAL EVALUATIONThis section provides an initial proof of concept of the model's capabilities. Indeed, this is neither a benchmarking exercise nor an in-depth analysis of the model's expressiveness but a demonstration of possible use cases and applications of the model. A detailed performance evaluation of the model, entailing the necessary statistical repetitions and the comparison to alternative models, is left for future work. To simplify the usage of the model, a Python package that automates the repeated calls to the Ising machines during the training of the model and also facilitates the cross-usage with other common Python machine learning packages (such as PyTorch) was published on Github <cit.>. As a first experiment, the model has been trained on randomly sampled datasets to demonstrate the trainability of the model itself according to the update rules of <ref>. Then, as real-world demonstrations, the model has been trained for the function approximation task and also as a binary classifier for the bars and stripes dataset. §.§ Experimental setupAs discussed in <ref>, the model supports different Ising machines. In this work, we have considered simulated annealing and quantum annealing, both provided by the D-Wave Ocean Software SDK <cit.>. While the former represents a software implementation of simulated annealing, the latter directly accesses the superconducting annealing hardware supplied by D-Wave. In particular, the Advantage_system5.4 has been used here. More in detail, the quantum annealing hardware in question is characterized by 5760 qubits and is based on the Pegasus topology, with an inter-qubit connectivity of 15. To control the hardware, D-Wave provides the Ocean SDK, which includes multiple software packages facilitating the handling of the annealing hardware. Among them, it is worth mentioning the minorminer package, which has been used to embed the problems into the annealer topology. In practice, to achieve the desired connectivity (all-to-all in this case), multiple physical qubits are chained together to form logical qubits; the drawback lies in the reduced number of available qubits. In particular, in each run, the embedding has been computed once for a fully connected graph of the required size and reused in the subsequent calls to the annealer; for this aim, the FixedEmbeddingComposite class of the Ocean SDK has been employed. 
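The fixed-embedding setup described above might be arranged roughly as follows. This sketch assumes the D-Wave Ocean SDK interfaces (DWaveSampler, FixedEmbeddingComposite, minorminer.find_embedding) as available at the time of writing, together with configured hardware access, and should be checked against the installed SDK version.

# Sketch of the fixed-embedding setup (assumes the D-Wave Ocean SDK is installed and configured).
import itertools
import minorminer
from dwave.system import DWaveSampler, FixedEmbeddingComposite

n_total = 10                                              # number of logical spins
qpu = DWaveSampler()                                      # e.g. an Advantage system
source_edges = list(itertools.combinations(range(n_total), 2))   # fully connected logical graph
embedding = minorminer.find_embedding(source_edges, qpu.edgelist)
sampler = FixedEmbeddingComposite(qpu, embedding)         # reused for every call during training

# h: biases {i: theta_i}, J: couplings {(i, j): Gamma_ij}; parameters mirror the setup described below.
# sampleset = sampler.sample_ising(h, J, num_reads=1, annealing_time=20)
# z_star = [sampleset.first.sample[i] for i in range(n_total)]
# E0 = sampleset.first.energy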
Regarding the actual annealing process, the default setup has been used, namely, automatic rescaling of bias and coupling terms to fit the available hardware ranges, chain strength settings according to uniform_torque_compensation, an annealing time of 20 μ s, and a twelve-point annealing schedule. To account for the high number of calls to the annealing hardware throughout training and save hardware access time, a number of reads (sampling shots) equal to 1 has been used for each annealing process. For more information, refer to Zenodo <cit.>, where the set of notebooks used have been made available.Concerning the model parameters, in all experiments, the couplings Γ_ij^(0) have been initialized to zero and updated according to (<ref>). Instead, λ and ϵ have been kept fixed throughout the training process and considered as hyperparameters to facilitate the learning process. Specifically, the selection of the λ value has been done manually to ensure that the model output was reasonably well-aligned with the range of values of the training data. By contrast, the ϵ value has been set according to the outcomes of a first round of sampling. In detail, the following rule has been used:ϵ = 1/N∑_a=1^N[y^(a) - F(θ^(a)|Γ^(0), λ, 0)] = 1/N∑_a=1^N[y^(a) + λ∑_i=1^n |θ^(a)_i |] ,with the last equivalence being valid only if Γ_ij^(0) = 0 for i,j∈{1,…,n}. §.§ Random dataTo demonstrate the trainability of the model, 30 distinct datasets, each comprising N=20 data points with input dimension n=10, have been considered. In particular, the input and target output values have been randomly sampled from a uniform distribution over the interval [-1,1]. In addition, in this experiment, the simulated annealing algorithm bundled in the Ocean SDK has been employed as the Ising machine for estimating the ground state and the corresponding energy value. Hence, no quantum annealing hardware has been used in this case. The parameters used for simulated annealing can be found directly in the source code at <cit.>. Instead, regarding the parameters of the proposed model, λ has been set to 1, and ϵ has been set according to (<ref>) (taking a different value for each dataset). For the training process, N_epochs = 50 epochs have been executed, with η = 0.2. The MSE loss progression through the training is shown in <ref>, where the error bars represent the standard deviation across the datasets. Although this particular example lacks practical significance, it serves as a simple demonstration that the proposed Ising-machine-based parametric model can be effectively trained by utilizing its own output according to the update rules presented in <ref>. Furthermore, it highlights the fact that the discontinuity observed in the derivative of the optimal spin configuration ^*, as discussed in the proof of <ref>, does not hinder the model's ability to minimize the loss function. In essence, the assumption made in (<ref>) regarding the computation of the partial derivatives proves to be sufficiently accurate. §.§ Function approximationIn this second experiment, datasets comprising N=20 data points sampled from polynomial functions have been considered. Due to the limited quantum annealing time available on the D-Wave hardware, the analysis has been limited to two straightforward cases, and no statistical repetition has been performed. Although this shortage prohibits any general conclusion on the model's performance, it serves as a first demonstration of the possibility of using the model to approximate simple functions. 
Specifically, the following two polynomial functions of first and second degree, respectively, have been considered:f_lin(x)= 2x - 6, f_quad(x)= 1.2 (x - 0.5)^2 -2 .In both cases, the coefficients have been chosen manually and arbitrarily, and the input domain has been restricted to the interval [0,1]. As the input dimensionality is n=1, additional n_hidden hidden spins (see <ref>) have been considered. In particular, two different total sizes n_total={50,150} have been analyzed in order to study the effect of the number of hidden spins on the model learning. Additionally, the spins have been initialized using the offset technique described in <ref>. Regarding the model parameters, fixed values have been manually chosen for the scaling factor λ, whereas the offset ϵ has again been set according to (<ref>). All model parameters used for the two total sizes considered are summarized in <ref>. In this case, simulated and quantum annealing have been employed as Ising machines and compared. The simulated annealing parameters are the same as those used in <ref>.The MSE loss throughout the training epochs for the two functions is shown in <ref>. In the case of the linear function (<ref>), the model demonstrates a significant reduction in the mean squared error (MSE), over nearly three orders of magnitude, after approximately 200 optimization steps. Instead, in the case of the quadratic function (<ref>), the initial loss was already low, indicating that the offset method chosen for the hidden layers was appropriate for this dataset. Nevertheless, the model has managed to decrease the loss by nearly additional three orders of magnitude. It is also worth noting that, in both cases, for equal model sizes, the results achieved using the quantum annealing hardware align closely with those obtained employing the simulated annealing algorithm. Specifically, the fluctuations in the quantum annealing loss are caused by the very low number of reads (1), resulting in non-optimal solutions occasionally returned by the annealer. Finally, the higher number of hidden spins (150) has shown significant advantages only for the linear function.Instead, <ref> displays the output of the trained models compared to the original functions. It is clear that the model has successfully learned to approximate the target functions. Specifically, as expected from the low final loss value, the model closely aligns with the original function in the case of the quadratic function. Instead, in the linear case, the model performance deteriorates significantly toward the interval edges, and the output values exhibit a tendency toward a shape resembling an even-degree polynomial, especially for the case with less hidden spins (n_total=50). This behavior stems from the initialization method chosen for the hidden spins and the symmetry properties of the Ising model. At extreme bias values, located near the interval boundaries, the biases exert a dominant influence on the energy term in Equation (<ref>), causing F(θ) →∞ as |θ|→±∞. Consequently, the behavior resembles that of even polynomials, thus explaining the outliers in Figure <ref>. Using more hidden spins (n_total=150) reduces this effect by providing more trainable parameters to the model. It is also worth mentioning that different initialization methods for the hidden spins (e.g., taking the inverse values) influence this behavior. §.§ Bars and stripesIn this last experiment, the proposed model has been applied to a different machine learning task: binary classification. 
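The datasets for (<ref>) and (<ref>), the offset-padded inputs, and the ϵ-initialization rule can be generated along the following lines. The grid over [0,1], the offset d, and the value of λ are illustrative assumptions and not the settings reported in <ref>.

import numpy as np

def f_lin(x):
    return 2 * x - 6

def f_quad(x):
    return 1.2 * (x - 0.5)**2 - 2

def h_offset(theta, d, l):
    """Repeated concatenation of theta with offsets 0, d, ..., (l-1)*d, so n_total = l * n."""
    theta = np.atleast_1d(np.asarray(theta, dtype=float))
    return np.concatenate([theta + k * d for k in range(l)])

def initial_epsilon(thetas, ys, lam):
    """eps = (1/N) sum_a [ y^(a) + lam * sum_i |theta_i^(a)| ], valid for zero initial couplings."""
    thetas = np.asarray(thetas, dtype=float)
    return float(np.mean(np.asarray(ys, dtype=float) + lam * np.abs(thetas).sum(axis=1)))

N, lam, d, l = 20, 0.1, 0.02, 50                    # d, lam and the grid are arbitrary choices
xs = np.linspace(0.0, 1.0, N)
data_lin = [(h_offset(x, d, l), f_lin(x)) for x in xs]    # n = 1 input spin, n_total = 50
eps = initial_epsilon([t for t, _ in data_lin], [y for _, y in data_lin], lam)
print(data_lin[0][0].shape, eps)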
For this purpose, the well-known bars and stripes (BAS) dataset has been used. In detail, the dataset consists of square matrices with binary entries such that the values in the rows/columns are identical within each row/column; the resulting patterns can be identified as bars/stripes, giving the dataset its name. Actually, the cases in which all entries of the matrix are the same have been left out as the label is not unique. Some examples are shown in <ref>. Regarding the classification task, it consists in assigning a label l ∈{bars, stripes} to each matrix, corresponding to the pattern it represents. In particular, the dataset was created by randomly deciding the label of each data point and randomly assigning one of the two binary values to each row/column. This procedure has been repeated N times, without accounting for duplicates. In order to apply the proposed model to the BAS dataset, the input matrices have been flattened row-wise, and the binary values have been directly provided as input to the model. The binary labels l ∈{bars, stripes} have been encoded into y and decoded from the model output F_model according to y = {[ 0,   l = bars; 10,   l = stripes ].    l_model = {[ bars,   F_model ≤ 5; stripes,   F_model > 5 ]. with the factor 10 being arbitrarily chosen (different values can be used, but the λ and ϵ parameters must be adjusted accordingly). For the training, a randomly generated dataset comprising N=80 data points, with each data point representing a BAS matrix of size 12 × 12, has been used. In particular, the model has been trained for N_epochs = 8 epochs, with η = 0.02, and has been evaluated on a separate test set consisting of another 80 data points. Since no additional hidden spins have been employed, n = n_total = 144 in this case. Concerning λ and ϵ, the former has been manually set to λ = -0.3, while the latter has been set to ϵ = -15.43 according to (<ref>). Due to the large number of spins n_total=144, only the quantum annealing hardware was used to train the model. The results obtained are shown in <ref>. Specifically, <ref>a displays the model output during training for the training set and test set, respectively. The values shown are the average output values across all the data points with the same label, with the corresponding standard deviations indicated by the transparent envelopes. The dotted horizontal line represents the classification threshold from (<ref>). In practice, the average output value for the two labels diverges, approaching 0 and 10, respectively, as the number of epochs increases. This means that the model has learnt to increase the output value for stripe data points and lower it for samples labeled as bars. This generalizes also to the unseen examples of the test set, but the separation between the two classes is more marked for the training set. This effect is also visible in <ref>b, where the MSE loss for the training set and test set is shown. In detail, the training loss decreases monotonically, while the test loss stagnates after a few epochs. This is a typical indicator of model overfitting, which could be addressed in different ways, among which increasing the number of training samples N in order to help the model generalize. A similar conclusion can be drawn considering the accuracy of the model shown in <ref>c. The trained model is able to correctly classify 79 out of 80 training samples, but the accuracy on the test set saturates at only about 75%.
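The BAS data generation and the encoding/decoding rule above can be sketched as follows. The orientation convention (constant rows labeled as bars) and the random seed are assumptions made for illustration.

import numpy as np

def random_bas(size, rng):
    """One random bars-and-stripes matrix and its label ('bars' = constant rows here)."""
    label = rng.choice(["bars", "stripes"])
    values = rng.integers(0, 2, size=size)               # one binary value per row/column
    while values.min() == values.max():                  # exclude all-equal matrices (ambiguous label)
        values = rng.integers(0, 2, size=size)
    mat = np.tile(values[:, None], (1, size)) if label == "bars" else np.tile(values[None, :], (size, 1))
    return mat, label

def encode_label(label):
    return 0.0 if label == "bars" else 10.0              # y = 0 for bars, 10 for stripes

def decode_output(F_model):
    return "bars" if F_model <= 5 else "stripes"         # classification threshold at 5

rng = np.random.default_rng(0)
data = []
for _ in range(80):                                       # N = 80 matrices of size 12 x 12
    mat, label = random_bas(12, rng)
    data.append((mat.flatten(), encode_label(label)))     # row-wise flattening, n = 144
print(data[0][0].shape, data[0][1])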
In conclusion, this experiment has demonstrated the possibility of using the proposed model to address also binary classification tasks by choosing an appropriate encoding-decoding procedure for the model input and output. Indeed, the model has proven to be able to generalize to unseen examples while exhibiting overfitting effects, at least for the chosen dataset. §.§ Choice of hyperparametersSelecting appropriate values for the model's hyperparameters is a common issue in machine learning. Multiple hyperparameters have been manually set in the experiments presented in this work. These include the learning rate η, the number of epochs N_epochs, the problem encoding (see <ref>), the Ising machine parameters like the number of samples per step for simulated annealing or the embedding procedure, the annealing time, and the number of reads for quantum annealing. Choosing appropriate values may reduce, for example, the fluctuations observed in <ref>. The values used here have been selected based on observations resulting from trial and error runs; the analysis of different configurations and a more systematic approach to choosing appropriate values are left for future work.Among the model-related hyperparameters, the choice of the initialization strategy for the additional hidden spins has a significant impact. Specifically, when the input dimension is low, a large number of hidden spins n_hidden≫ n may be necessary in order to have enough trainable model parameters. However, particular care must be put in choosing the corresponding new bias terms. Indeed, in preliminary experiments, it has been observed that initializing the biases in the wrong way may negatively affect the performance to the point that the model is unable to approximate the target function. Finding a suitable ansätze for different tasks is still an open question. § CONCLUSIONIn this paper, we have proposed a novel parametric learning model that leverages the inherent structure of the Ising model for training purposes. We have presented a straightforward optimization procedure based on gradient descent and we have provided the rules for computing all relevant derivatives of the mean squared error loss. Notably, if the Ising machine is realized by a quantum platform, our approach allows for the utilization of quantum resources for both the execution and the training of the model. Experimental results using a D-Wave quantum annealer have demonstrated the successful training of our model on simple proof-of-concept datasets, specifically for linear and quadratic function approximations and binary classification. This novel approach unveils the potential of employing Ising machines, particularly quantum annealers, for general learning tasks. In addition, it raises intriguing theoretical and practical questions from both computer science and physics perspectives. From a theoretical standpoint, questions regarding the expressibility of the Ising model arise, as well as inquiries into the classes of functions that the model can represent. These questions are non-trivial due to the non-linear minimization step involved. From a practical point of view, given the broad definition of the model and its similarity to other classical parametric models, a wide range of machine learning tools and methods can be explored to enhance its training. 
Advanced gradient-based optimizers and general learning techniques such as mini-batching, early stopping, and dropout, among others, offer promising avenues for improvement. In addition to function approximation and binary classification, we aim to investigate the application of the model to other machine learning tasks, especially tasks with a large feature space, which would reduce the need for additional hidden spins.
§ ACKNOWLEDGMENTS.
This work was partially supported by project SERICS (PE00000014) under the MUR National Recovery and Resilience Plan funded by the European Union - NextGenerationEU. In addition, E.Z. was supported by Q@TN, the joint lab between the University of Trento, FBK-Fondazione Bruno Kessler, INFN-National Institute for Nuclear Physics, and CNR-National Research Council. Finally, the authors gratefully acknowledge CINECA for providing computing time on the D-Wave quantum annealer within the project Testing the learning performances of quantum machines, and the Jülich Supercomputing Center for providing computing time on the D-Wave quantum annealer through the Jülich UNified Infrastructure of Quantum computing (JUNIQ).
Mixed pairwise cross intersecting families (I)This work is supported byNSFC (Grant No. 11931002).E-mail addresses: [email protected] (Yang Huang), [email protected] (Yuejian Peng, corresponding author). Yang Huang, Yuejian Peng^† School of Mathematics, Hunan University Changsha, Hunan, 410082, P.R. China2023-10-25 ============================================================================================================================================================================================================== Families 𝒜 and ℬ are cross-intersecting if A∩ B∅ for any A∈𝒜 and B∈ℬ. If n<k+l, all families 𝒜⊆[n] k and ℬ⊆[n] l are cross intersectingand we say that𝒜 and ℬ are cross intersecting freely. An (n, k_1, …, k_t)-cross intersecting system isa set of non-empty pairwise cross-intersecting families ℱ_1⊂[n] k_1, ℱ_2⊂[n] k_2, …, ℱ_t⊂[n] k_twith t≥ 2 and k_1≥ k_2≥⋯≥ k_t. If an (n, k_1, …, k_t)-cross intersecting system contains at least two families which are cross intersecting freely and at least two families which are cross intersecting but not freely, then we say that the cross intersecting system is of mixed type.All previous studies are on non-mixed type, i.e, under the condition thatn ≥ k_1+k_2. In this paper, we study for the first interesting mixed type, an (n, k_1, …, k_t)-cross intersecting system with k_1+k_3≤ n <k_1+k_2, i.e.,families ℱ_i⊆[n] k_i and ℱ_j⊆[n] k_j are cross intersecting freely if and only if {i, j}={1, 2}.Let M(n, k_1, …, k_t) denote the maximum sum of sizes of families in an (n, k_1, …, k_t)-cross intersecting system.We determine M(n, k_1, …, k_t) and characterize allextremal (n, k_1, …, k_t)-cross intersecting systems for k_1+k_3≤ n <k_1+k_2. A result of Kruskal-Katonaallows us to consider only families ℱ_i whose elements are the first |ℱ_i| elements in lexicographic order (we call themL-initial families). Since n <k_1+k_2, ℱ_1⊆[n] k_1 and ℱ_2⊆[n] k_2 are cross intersecting freely. Thus, when we try tobound ∑_i=1^t|ℱ_i| by a function, there are two free variables I_1 (the last element of ℱ_1) and I_2 (the last element of ℱ_2). This causes more difficulty to analyze properties of the corresponding function comparing to non-mixed type problems (single-variable function). To overcome this difficulty, we introduce new concepts `k-partner' and `parity', develop some rules to determine whether two L-initial cross intersecting families are maximal to each other, andprove one crucial property that in an extremal L-initial(n, k_1, …, k_t)-cross intersecting system(ℱ_1, ℱ_2, ⋯, ℱ_t), the last element of ℱ_1 and the last element of ℱ_2 are `parities' (see Section <ref> for the definition) to each other. This discovery allows us to bound ∑_i=1^t|ℱ_i| by a single variable function g(I_2), where I_2 is the last element of ℱ_2. Another crucial and challenge part isto verify that -g(I_2) has unimodality. Comparing to the non-mixed type, we need to overcome more difficulties in showing the unimodality offunction -g(I_2) since there are more terms to be taken care of. We think that the characterization of maximal cross intersectingL-initial families and the unimodality of functions in this paper are interesting in their own, in addition to the extremal result.The most general condition on nis thatn≥ k_1+k_t (If n< k_1+k_t, then ℱ_1 is cross intersectingfreely with any otherℱ_i, andconsequentlyℱ_1=[n] k_1 in an extremal (n, k_1, …, k_t)-cross intersecting system and we can remove ℱ_1 .). 
This paper provides foundation work for the solution to the most general condition n≥ k_1+k_t. Key words: Cross intersecting families; Extremal finite sets2010 Mathematics Subject Classification.05D05, 05C65, 05D15.§ INTRODUCTIONLet [n]={1, 2, …, n}.For 0≤ k ≤ n, let [n] k denote the family of all k-subsets of [n]. A family 𝒜 is k-uniform if 𝒜⊂[n] k. A family 𝒜 isintersecting if A∩ B∅ for any A and B∈𝒜. Many researches in extremal set theory are inspired by the foundationalresult of Erdős–Ko–Rado <cit.> showing that a maximum k-uniform intersecting family is a full star.This result of Erdős–Ko–Rado has many interesting generalizations. Two families 𝒜 and ℬ are cross-intersecting if A∩ B∅ for any A∈𝒜 and B∈ℬ. Note that 𝒜 and 𝒜 arecross-intersecting is equivalent to that 𝒜 is intersecting. We call t (t≥ 2) families𝒜_1, 𝒜_2,…, 𝒜_t pairwise cross-intersecting families if 𝒜_i and 𝒜_j are cross-intersecting when 1≤ i<j ≤ t. Additionally, if 𝒜_j∅ for each j∈ [t], then we say that 𝒜_1, 𝒜_2,…, 𝒜_t are non-empty pairwise cross-intersecting. For given integers n, t, k_t, …, k_t, if families 𝒜_1⊆[n] k_1, …, 𝒜_t⊆[n] k_t are non-empty pairwise cross intersecting and k_1≥ k_2…≥ k_t, then we call (𝒜_1, …, 𝒜_t)an (n, k_1, …, k_t)-cross intersecting system.DefineM(n, k_1, …, k_t) =max{∑_i=1^t|𝒜_i|: (𝒜_1, …, 𝒜_t)is an(n, k_1, …, k_t)-crossintersecting system }.We say that an (n, k_1, …, k_t)-cross intersecting system (𝒜_1, …, 𝒜_t)is extremal if ∑_i=1^t|𝒜_i|=M(n, k_1, …, k_t). In <cit.> (Theorem <ref>), the authors determined M(n, k_1,…, k_t) for n≥ k_1+k_2andcharacterized the extremal families attaining the bound.Let 𝒜_1⊂[n] k_1, 𝒜_2⊂[n] k_2, …, 𝒜_t⊂[n] k_t be non-empty pairwise cross intersecting families with t≥ 2, k_1≥ k_2≥⋯≥ k_t, and n≥ k_1+k_2. Then∑_i=1^t|𝒜_i|≤{n k_1-n-k_t k_1+∑_i=2^tn-k_t k_i-k_t, ∑_i=1^tn-1 k_i-1}.The equality holds if and only if one of the following holds.(i) n k_1-n-k_t k_1+∑_i=2^tn-k_t k_i-k_t>∑_i=1^tn-1 k_i-1, and there is some k_t-element set T⊂ [n] such that 𝒜_1={F∈[n] k_1: F∩ T∅} and 𝒜_j={F∈[n] k_j: T⊂ F} for each j∈ [2, t];(ii)n k_1-n-k_t k_1+∑_i=2^tn-k_t k_i-k_t≤∑_i=1^tn-1 k_i-1, there are some i≠ j such that n>k_i+k_j, and there is some a∈ [n] such that 𝒜_j={F∈[n] k_j: a∈ F} for each j∈ [t];(iii) t=2, n=k_1+k_2, 𝒜_1⊂[n] k_1 and 𝒜_2=[n] k_2∖𝒜_1;(iv) t≥ 3, k_1=k_2=⋯=k_t=k, n=2k, fix some i∈ [t], 𝒜_j=𝒜 for all j∈ [t]∖{i}, where 𝒜 is an intersecting family with size n-1 k-1, and 𝒜_i=[n] k∖𝒜. Theorem <ref> generalized the results of Hilton-Milner in <cit.>, Frankl-Tokushige in <cit.> and Shi-Qian-Frankl in <cit.>. In Theorem <ref>, taking t=2 and k_1=k_2, we obtain the result of Hilton-Milner in <cit.>. Taking t=2, we obtain the result of Frankl-Tokushige in <cit.>. Taking k_1=k_2=…=k_t, we obtain the result of Shi-Qian-Frankl in <cit.>. Note that if n<k+ℓ, any family 𝒜⊆[n] k and any family ℬ⊆[n]ℓ are cross intersecting. In this case, we say that families 𝒜⊆[n] k and ℬ⊆[n]ℓ arecross intersectingfreely. If an (n, k_1, …, k_t)-cross intersecting system contains at least two families which are cross intersecting freely and at least two families which arecross intersecting but not freely, then we say that the cross intersecting system is of mixed type. Otherwise, we say that it is of non-mixed type. All previous studies are of non-mixed type, i.e., n ≥ k_1+k_2.It would be interestingto study for mixed type. The first interesting case is to study an (n, k_1, …, k_t)-cross intersecting system (ℱ_1, …, ℱ_t) fork_1+k_3≤ n <k_1+k_2. 
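As a quick numerical illustration (ours, not part of the original statement; the function names are chosen freely), the two quantities inside the maximum above can be compared for small parameters, assuming Python's math.comb for binomial coefficients; which of the two dominates depends on n and the k_i.

```python
from math import comb

def covering_bound(n, ks):
    """C(n, k1) - C(n-kt, k1) + sum_{i>=2} C(n-kt, ki-kt): the value of the
    construction in case (i), built around a fixed kt-element set T."""
    k1, kt = ks[0], ks[-1]
    return comb(n, k1) - comb(n - kt, k1) + sum(comb(n - kt, ki - kt) for ki in ks[1:])

def star_bound(n, ks):
    """sum_i C(n-1, ki-1): all families consisting of sets through one common element."""
    return sum(comb(n - 1, ki - 1) for ki in ks)

n, ks = 12, [5, 4, 3]   # a regime with n >= k1 + k2, as required by the theorem
print(covering_bound(n, ks), star_bound(n, ks))
print(max(covering_bound(n, ks), star_bound(n, ks)))  # the bound on the sum of sizes
```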
In this case, ℱ_i⊆[n] k_i and ℱ_j⊆[n] k_j are cross intersecting freely if and only if {i, j}={1, 2}. In this paper, we focus on this case, we determine M(n, k_1, …, k_t) and characterizeall extremal (n, k_1, …, k_t)-cross intersecting systems. Let us look at the following (n, k_1, …, k_t)-cross intersecting systems. Let k_1≥ k_2≥…≥ k_t and k_1+k_3≤ n <k_1+k_2. For each i∈ [t],we denote𝒢_i={ G∈[n] k_i: 1∈ G }.Note that∑_i=1^t|𝒢_i|=∑_i=1^tn-1 k_i-1=:λ_1.Let k_1≥ k_2≥…≥ k_t and k_1+k_3≤ n <k_1+k_2.For i=1 or 2, letℋ_i={ H∈[n] k_i: H∩ [k_t]∅},and for i∈[3, t], letℋ_i={ H∈[n] k_i: [k_t]⊆ H}.Note that∑_i=1^t|ℋ_i|=∑_i=1^2(n k_i-n-k_t k_i)+∑_i=3^tn-k_t k_i-k_t=:λ_2. Our main resultis that an extremal(n, k_1, …, k_t)-cross intersecting system must be isomorphic to Construction <ref> or Construction <ref> if k_1+k_3≤ n<k_1+k_2.Let ℱ_1⊂[n] k_1, ℱ_2⊂[n] k_2, …, ℱ_t⊂[n] k_t be non-empty pairwise cross intersecting families with t≥ 3, k_1≥ k_2≥⋯≥ k_t and k_1+k_3≤ n<k_1+k_2. Then∑_i=1^t|ℱ_i|≤{∑_i=1^tn-1 k_i-1, ∑_i=1^2(n k_i-n-k_t k_i)+∑_i=3^tn-k_t k_i-k_t}.The equality holds if and only if (ℱ_1, …, ℱ_t) is isomorphic to (𝒢_1, …, 𝒢_t) in Construction <ref> or (ℋ_1, …, ℋ_t) in Construction <ref>.In provingTheorem <ref> in <cit.>, the authors applied a result of Kruskal-Katona (Theorem <ref>)allowing us to consider only families ℱ_i whose elements are the first |ℱ_i| elements in lexicographic order (we call themL-initial families). We bounded ∑_i=1^t|ℱ_i| by a functionf(R) of the last element R in the lexicographic order of anL-initial family ℱ_1 (R is called the ID ofℱ_1 ),and showed that -f(R) has unimodality.To prove our main result (Theorem <ref>) in this paper, by the result of Kruskal-Katona (Theorem <ref>), we canstill consider onlypairwise cross-intersecting non-empty L-initial families ℱ_1⊂[n] k_1, ℱ_2⊂[n] k_2, …, ℱ_t⊂[n] k_t. However, in this paper, the condition on n is relaxed to k_1+k_3≤ n <k_1+k_2, soℱ_1⊆[n] k_1 and ℱ_2⊆[n] k_2 are cross intersecting freely. When we try tobound ∑_i=1^t|ℱ_i| by a function, there are two free variables I_1 (the ID of ℱ_1) and I_2 (the ID of ℱ_2). This causes more difficulty to analyze properties of the corresponding function, comparing to the problem in <cit.>. To overcome this difficulty,we introduce new concepts `k-partner' and `parity', develop some rules to determine whethera pair of L-initial cross intersecting families are maximal to each other (see precise definition), and prove one crucial property that for an extremal L-initial(n, k_1, …, k_t)-cross intersecting system(ℱ_1, ℱ_2, ⋯, ℱ_t), the ID I_1 of ℱ_1 and the ID I_2 of ℱ_2 are `parities' to each other (Lemma <ref>). This discovery allows us to bound ∑_i=1^t|ℱ_i| by a single variable function g(I_2). Another crucial and challenge part isto verify theunimodality of -g(I_2) (Lemmas <ref>, <ref>and <ref>). Comparing to the function in<cit.>, we need to overcome more difficulties in dealing withfunction -g(I_2) since there are more `mysterious' terms to be taken care of. We take advantage of some properties of function f(R)obtained in <cit.> and come up with some new strategies in estimating the change g(I'_2)-g(I_2) as the ID of ℱ_2 increases from I_2 to I'_2 (Sections <ref> and <ref>).The most general condition on nis thatn≥ k_1+k_t. If n< k_1+k_t, then ℱ_1 is cross intersectingfreely with any otherℱ_i, andconsequentlyℱ_1=[n] k_1 in an extremal (n, k_1, …, k_t)-cross intersecting system and ℱ_1 can be removed.The most general case is more complex and we will deal with it in a forthcoming paper <cit.>. 
The work inthis paper provides important foundationfor the solution to the most general condition n≥ k_1+k_t.To obtain the relationship between the ID of ℱ_1 and the ID of ℱ_2, we build some foundation work in Section <ref>, for example, we come up with new concepts `k-partners' and `parity', and give a necessary and sufficient condition on maximal cross intersectingL-initial families.In Section <ref>, we give the proof of Theorem <ref> by assuming the truth of Lemmas <ref>, <ref>, <ref>and <ref>.In Section <ref>, we list some results obtained for non-mixed typein <cit.> which we will apply. In Sections <ref> and <ref>, we give the proofs of Lemmas <ref>, <ref>, <ref>and <ref>. § PARTNER AND PARITYIn this section, we introduce new concepts `k-partner' and `parity', develop some rules to determine whether a pair ofL-initial cross intersecting families are maximal to each other (precise definitions will be given in this section). We prove some properties which are foundation for the proof of our main results.When we write a set A={a_1, a_2, …, a_s}⊂ [n], we always assume that a_1<a_2<…<a_s throughout the paper. Let max A denote the maximumelement of A, let min A denote the minimum element of A and(A)_i denote the i-th element of A. Let us introduce the lexicographic (lex for short) order of subsets of positive integers. Let A and B be finite subsets of the set of positive integers ℤ_>0. We say that A≺ B if either A⊃ B or min(A∖ B) < min(B∖ A). In particular, A≺ A. Let A and B in [n] k with A⪵ B. We write A<B if there is no other C∈ [n] k such that A⪵ C⪵ B.Let ℒ(n, r, k) denote the first r subsets in [n] k in the lex order. Given a set R, we denote ℒ(n, R, k)={F∈[n] k: F≺ R}. Whenever the underlying set [n] is clear, we shall ignore it and write ℒ(R, k), ℒ(r, k) for short.Let ℱ⊂[n] k be a family, we say ℱ is L-initial if ℱ=ℒ(R, k) for some k-set R.We call R theID of ℱ. The well-known Kruskal-Katona theorem <cit.> will allow us to consider only L-initial families. An equivalent formulation of this result was given in <cit.> as follows. For 𝒜⊂[n] k and ℬ⊂[n] l, if 𝒜 and ℬ are cross intersecting, then ℒ(|𝒜|, k) and ℒ(|ℬ|, l) are cross intersecting as well.In <cit.>, we proved the following important result. <cit.> Let a, b, n are positive integers and a+b≤ n. For P⊂ [n] with |P|≤ a, let Q be the partner of P. Then ℒ(Q, b) is the maximum L-initial b-uniform family that are cross intersecting to ℒ(P, a). In <cit.>,we worked on non-mixed type: Let t≥ 2, k_1≥ k_2≥⋯≥ k_t and n≥ k_1+k_2and families 𝒜_1⊂[n] k_1, 𝒜_2⊂[n] k_2, …, 𝒜_t⊂[n] k_t be non-empty pairwise cross-intersecting (not freely). Let R be the ID of 𝒜_1, and Tbe the partner of R. In the proof of Theorem <ref>, one important ingredient is that by Proposition <ref>, ∑_i=1^t|𝒜_i| can be bounded by a function of R as following. f(R)=∑_j=1^t|𝒜_j|≤ |ℒ(R, k_1)| +∑_j=2^t |ℒ(T, k_j)| .By Theorem <ref>,to prove the quantitative part of Theorem <ref> we mayalso assume that ℱ_i is L-initial, that is, ℱ_i=ℒ(|ℱ_i|, k_i) for each i∈ [t]. In the rest of this chapter, we assume that (ℱ_1, …, ℱ_t) is an extremal non-empty (n, k_1, …, k_t)-cross intersecting system with t≥ 3, k_1+k_3≤ n< k_1+k_2, and ℱ_j is L-initial for each j∈ [t] with ID I_j. However,the condition on n is relaxed to k_1+k_3≤ n <k_1+k_2, soℱ_1⊆[n] k_1 and ℱ_2⊆[n] k_2 are cross intersecting freely. When we try tobound ∑_i=1^t|ℱ_i| by a function, there are two free variables I_1 (the ID of ℱ_1) and I_2 (the ID of ℱ_2). 
This causes more difficulty to analyze properties of the corresponding function, comparing to the problem in <cit.>. To overcome this difficulty,we introduce new concepts 'k-partner' and 'parity', develop some rules to determine whethera pair of L-initial cross intersecting families are maximal to each other (see precise definition). Let ℱ⊆[n] f and𝒢⊆[n] g be cross intersecting families. We say that (ℱ, 𝒢) is maximal or ℱ and 𝒢 are maximal cross intersecting familiesif whenever ℱ'⊆[n] f and𝒢'⊆[n] g are cross intersecting with ℱ⊆ℱ' and 𝒢⊆𝒢', then ℱ=ℱ' and 𝒢=𝒢'. Let F and G be two subsets of [n]. Wesay (F, G) is maximal if there are two L-initial families ℱ⊆[n] |F| and 𝒢⊆[n] |G| with IDs F and G respectively such that (ℱ, 𝒢) is maximal. We say two families 𝒜_1 and 𝒜_2 are maximal pair families if |𝒜_1|=|𝒜_2| and for every A_1∈𝒜_1, there is a unique A_2∈𝒜_2 such that (A_1, A_2) is maximal.Let F={x_1, x_2, …, x_k}⊆ [n]. We denoteℓ(F)= max{x: [n-x+1, n]⊆ F},if max F=n;0,if max F<n.Let F⊆ [n] be a set. We denoteF^t= ∅,if ℓ(F)=0;[n-ℓ(F)+1, n],if ℓ(F)≥ 1.Let F and H be two subsets of [n] with size |F|=f and |H|=h. We say that F and H strongly intersect at their last element if there is an element q such that F∩ H={q} and F∪ H=[q]. We also say F is the partner of H, or H is the partner of F. Let k≤ n-f be an integer, we define the k-partner K of F as follows. For k=h, let K=H.If k>h, then let K= H∪{n-k+h+1, …, n}. We can see that |K|=k. Indeed, since F and H intersect at their last element, n'=max H=f+h-1<n-k+h+1, so |K|=| H∪{n-k+h+1, …, n}|=k.If k< h, then let K be the last k-set in [n] k such that K ≺ H, in other words, there is no k-set K' satisfying K⪵ K'⪵ H. By the definition of k-partner, we have the following remark.Let F⊆ [n] with |F|=f and k≤ n-f. Suppose thatH is the partner of F, and K is the k-partner of F, then we have ℒ(H, k)=ℒ(K, k).Let F⊆ [n] with |F|=f and k≤ n-f. Then the k-partner of F is the same as the k-partner of F∖ F^t.If ℓ(F)=0, then we are fine. Suppose ℓ(F)>0 and F={x_1, …, x_y}∪{n-ℓ(F)+1, …, n}. Let H and H' be the partners of F and F∖{n-ℓ(F)+1, …, n} respectively. Then |H|>n-f, consequently k<|H|,H=H∩ [x_y-1]∪ [x_y+1, n-ℓ(F)]∪{n} and H'=H∩ [x_y-1]∪{x_y}. Suppose that K and K' are the k-partners of F and F∖{n-ℓ(F)+1, …, n} respectively. By the definition of k-partner, we can see that if k≤ |H∩ [x_y-1]|,then K=K', as desired; if k= |H'|, then K'=H' and K=H∩ [x_y-1]∪{x_y}=H', as desired; if k>|H'|, then K'=H'∪ [n-k+|H'|+1, n] and K=H∩ [x_y-1]∪{x_y}∪ [n-k+|H'|+1, n]=K', as desired. By Remark <ref> and Proposition <ref>, we have the following fact.Let a, b, n be positive integers and n≥ a+b. ForA⊂ [n] with |A|=a, let B be the b-partner of A, then ℒ(B, b) is the maximum L-initial b-uniform family that are cross intersecting to ℒ(A, a). We also say that ℒ(B, b) is maximal to ℒ(A, a), or say B is maximal to A.Note thatfamilies ℒ(A, a) and ℒ(B, b) which mentioned in Fact <ref> may not be maximal cross intersecting, since we don't know whether ℒ(A, a) is maximal to ℒ(B, b). For example, let n=9, a=3, b=4 and A={2, 4, 7}. Then the b-partner of A is {1, 3, 4, 9}. Although ℒ({1, 3, 4, 9}, 4) is maximal to ℒ({2, 4, 7}, 3), ℒ({1, 3, 4, 9}, 4) and ℒ({2, 4, 7}, 3) are not maximal cross intersecting families since ℒ({2, 4, 7}, 3)⊊ℒ({2, 4, 9}, 3), and ℒ({2, 4, 9}, 3) and ℒ({1, 3, 4, 9}, 4) are cross intersecting families. 
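For readers who wish to experiment with these notions, the following small Python sketch (an illustration only, not part of the formal development; the function names are ours and the k-partner is found by brute force, which is feasible only for small ground sets) implements the lex order ≺, L-initial families, the partner, and the k-partner, and reproduces the example above in which the 4-partner of {2, 4, 7} on the ground set [9] is {1, 3, 4, 9}.

```python
from itertools import combinations

def prec(A, B):
    """Lex order on finite sets: A ≺ B iff A ⊇ B or min(A\\B) < min(B\\A)."""
    A, B = set(A), set(B)
    if B <= A:                       # includes A = B, since A ≺ A
        return True
    a = min(A - B) if A - B else float("inf")
    b = min(B - A) if B - A else float("inf")
    return a < b

def L_initial(n, R, k):
    """ℒ(R, k): all k-subsets of [n] preceding R in lex order."""
    return [set(F) for F in combinations(range(1, n + 1), k) if prec(F, R)]

def partner(F):
    """The partner H of F: F ∩ H = {max F} and F ∪ H = [max F]."""
    q = max(F)
    return (set(range(1, q + 1)) - set(F)) | {q}

def k_partner(n, F, k):
    """The k-partner of F, following the case analysis in the definition above."""
    H = partner(F)
    if k >= len(H):                  # pad with the largest elements of [n]
        return H | set(range(n - k + len(H) + 1, n + 1))
    last = None                      # k < |H|: the last k-set preceding H
    for K in combinations(range(1, n + 1), k):   # generated in lex order
        if prec(K, H):
            last = set(K)
    return last

print(sorted(partner({2, 4, 7})))           # [1, 3, 5, 6, 7]
print(sorted(k_partner(9, {2, 4, 7}, 4)))   # [1, 3, 4, 9], as in the example above
print(len(L_initial(9, {2, 4, 7}, 3)))      # size of ℒ({2, 4, 7}, 3) on [9]
```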
Frankl-Kupavskii <cit.> gave a sufficient condition for a pair of maximal crossintersecting families, and a necessary condition for a pair of maximal crossintersecting families as stated below.Let a, b∈ℤ_>0, a+b≤ n. Let P and Q be non-empty subsets of [n] with |P|≤ a and |Q|≤ b. If Q is the partner of P, then ℒ(P, a) and ℒ(Q, b) are maximal cross intersecting families. Inversely, if ℒ(A, a) and ℒ(B, a) are maximal cross intersecting families, let j be the smallest element of A∩ B, P=A∩ [j] and Q=B∩ [j]. Then ℒ(P, a)=ℒ(A, a), ℒ(Q, b)=ℒ(B, b) and P, Q satisfy the following conditions:|P|≤ a, |Q|≤ b, and Q is the partner of P. Based on Proposition <ref>, we point out a necessary and sufficient condition for a pair of maximal cross intersecting families in terms of their IDs.Let A and B be nonempty subsets of [n] with |A|+|B|≤ n. Let A'=A∖ A^t and B'=B∖ B^t. Then (A, B) is maximal if and only if A' is the partner of B'.Let |A|=a and |B|=b. From the definitions of A' and B', we can see that ℒ(A, a)=ℒ(A', a) and ℒ(B, b)=ℒ(B', b). First we show the sufficiency.Suppose that A' is the partner of B'. Since |A'|≤ |A| and |B'|≤ |B|, by Proposition <ref>, ℒ(A', a) and ℒ(B', b) are maximal cross intersecting families. Thus ℒ(A, a) and ℒ(B, b) are maximal cross intersecting families, in other words, (A, B) is maximal. Next, we show the necessity. Suppose that (A, B) is maximal. Let j be the smallest element of A∩ B, P=A∩ [j] and Q=B∩ [j]. By Proposition <ref>,ℒ(A, a)=ℒ(P, a), ℒ(B, b)=ℒ(Q, b); |P|≤ a, |Q|≤ b andP is the partner of Q. Since ℒ(A, a)=ℒ(P, a) and P⊆ A, then we A=P∪{n-a+|P|+1, …, n}. Similarly,B=Q∪{n-b+|Q|+1, …, n}. By the definitions of A' and B', we have A'=P and B'=Q. SinceP is the partner of Q, A' is the partner of B'.By Fact <ref> and the definition of the k-partner, we have the following property.Let a, b, k, n be integers withn≥max{a+b, a+k}. Let A be an a-subset of [n].Suppose that K is the k-partner of A∖ A^t and there exists a b-set Bsuch that (A, B) is maximal and let b'=|B∖ B^t|. Then (A, K) is maximal if and only if k≥ b'.Let a, b, n be positive integers and n≥ a+b. Suppose that A is an a-subset of [n], and B is the b-partner of A. Let A' be thea-partner of B, then (A', B) is maximal. Moreover, if A' A, then A⪵ A'.If (A, B) is maximal, then A is the a-partner of B and A'=A, we are fine. Suppose that (A, B) is not maximal. By Fact <ref>,B is maximal to A, so A is not maximal to B. Let A' be the a-partner of B. By Fact <ref> again,A' is maximal to B and A⪵ A'. Since B is maximal to A, for any b-set B' satisfyingB⪵ B', we haveℒ(B', b) and ℒ(A', a) are not cross intersecting. So B is maximal to A'. Hence (A', B) is maximal. Let h_1≤ h_2 be positive integers, H_1 and H_2 be subsets of [n] with sizes h_1 and h_2 respectively. We say H_1 is the h_1-parity of H_2, or H_2 is the h_2-parity of H_1 if H_1∖ H_1^ t=H_2∖ H_2^ t and ℓ(H_2)-ℓ(H_1)=h_2-h_1.Let d≤ f≤ h be positive integers and F⊆ [n] with |F|=f. Then F has an h-parity if and only if h-f≤ n-ℓ(F)-max(F∖ F^ t)-1, andF has an d-parity if and only if d≥ f-ℓ(F).Note thatfor a given integer k and a subset A⊆ [n], if A has a k-parity, then it has the unique one. The following fact is derived from the above definition directly.Let h_1≤ h_2≤ h_3 and H_i be an h_i-set for i∈ [3]. If H_1 is the h_1-parity of H_2 and H_2 is the h_2-parity of H_3, then H_1 is the h_1-parity of H_3. Also, if H_3 is the h_3-parity of H_1 and H_2 is the h_2-parity of H_1, then H_3 is the h_3-parity of H_2. Let a, b, k, n be positive integers and n≥max{a+k, b+k}. 
Let Aand B be two subsets of [n] with sizes |A|=a and |B|=b. Suppose that K_a and K_b are the k-partners of A and B respectively. If A≺ B, then K_b≺ K_a. In particular, if A is the a-parity of B, then K_b= K_a.Let h_1≤ h_2. For two families ℋ_1⊆[n] h_1 and ℋ_2⊆[n] h_2, we say that ℋ_1 is the h_1-parity of ℋ_2, or ℋ_2 is the h_2-parity of ℋ_1 if(i) for any H_1∈ℋ_1, the h_2-parity of H_1 exists and must be in ℋ_2;(ii) for any H_2∈ℋ_2, either H_2 has no h_1-parity or it's h_1-parity is in ℋ_1.Let f, g, h, n be positive integers with f≥ g and n≥ f+h. Letℱ={ F∈[n] f:there existsH∈[n] h such that(F, H)is maximal },𝒢={ G∈[n] g:there existsH∈[n] h such that(G, H)is maximal }.Let ℋ_ℱ⊆[n] h and ℋ_𝒢⊆[n] h be the families such that ℱ and ℋ_ℱ are maximal pair families and 𝒢 and ℋ_𝒢 aremaximal pair families. Then ℱ is the f-parity of 𝒢; ℋ_𝒢⊆ℋ_ℱ; and for any G∈𝒢, let F∈ℱ be the f-parity of G and H∈ℋ such that (F, H) is maximal, then (G, H) is maximal.If f=g, then ℱ=𝒢 and ℋ_𝒢=ℋ_ℱ. So we may assume that f=g+s for some s≥ 1. For any G∈𝒢, there is the unique H∈ℋ such that (G, H) is maximal. Let G'=G∖ G^ t=G∖{n-ℓ(G)+1, …, n} and H'=H∖ H^ t=H∖{n-ℓ(H)+1, …, n}. Then, by Fact <ref>, G' and H' are partners of each other . So max G'≤ g+h-1≤ f+h-s-1. Let F=G'∪{n-ℓ(G)+1-s, …, n}. Then G⊆ F, |F|=f and ℓ(F)-ℓ(G)=f-g=s. Then, by Definition <ref>, F is the f-parity of G.Let F'=F∖ F^ t. So F'=G'. Furthermore, F' and H' are partners of each other. By Fact <ref> again, we can see that (F, H) is maximal. So F∈ℱ, and ℱ is the f-parity of 𝒢, as desired. For any G∈𝒢,let F be the f-parity of G and H∈ℋ be the set such that (F, H) is maximal. By Fact <ref>,(G, H) is maximal, as desired. And this implies ℋ_𝒢⊆ℋ_ℱ.The proof is complete. Let a, b, c, n be positive integers, a≥ b and n≥ a+c, and let C be a c-subset of [n]. Suppose that A is the a-partner of C and B is the b-partner of C, then B≺ A or A is the a-parity of B.If a=b, then A=B, we are done. Assume a>b. Let T be the partner of C and let |T|=c'. If c'≤ b, then c'<a.By the definitions of a-partner and b-partner and n≥ a+c> b+c, we can see that A is the a-parity of B. If b<c'≤ a, then we have min B∖ A < min A∖ B, so B≺ A. At last,suppose b<a<c'. Let C'={x_1, …, x_b, …,x_a}⊆ C be the first a elements of C. If x_i+1=x_i+1 for all i∈[b, a-1], then A is the a-parity of B. Otherwise, we have B≺ A.§ PROOFS OF THEOREM <REF>Recall that k_1+k_3≤ n <k_1+k_2,(ℱ_1, …, ℱ_t) is anextremal (n, k_1, …, k_t)-cross intersecting system, each ℱ_i is L-initial, and I_1, I_2, …, I_t are the IDs of ℱ_1, ℱ_2, …, ℱ_t respectivelythroughout the paper.From Constructions <ref> and <ref>,we have∑_i=1^t|ℱ_i|≥{λ_1, λ_2}. We are going to prove that ∑_i=1^t|ℱ_i|≤{λ_1, λ_2}. We may assume that k_t≥ 2.Weconsider the case k_t=1. Suppose that |ℱ_t|=s. Then ℱ_t={{1}, …, {s}}. Since (ℱ_1, …, ℱ_t) is a cross intersecting system, thenfor any i∈ [t-1] and F∈ℱ_i, we have [s]⊆ F. Thus,∑_i=1^t|ℱ_i|=∑_i=1^t-1n-s k_i-s+s≤λ_1.So we may assume that k_t≥ 2.From now on, k_t≥ 2. For the case k_1=k_2, the authors in <cit.> (Corollary 1.12)have already given a positive answer. Although we can prove for this case in this paper, for simplicity,weassume that k_1>k_2. In <cit.>, the authors proved the following result.<cit.> Let n, t≥ 2, k_1, k_2, …, k_t be positive integers and d_1, d_2, …, d_t be positive numbers. Let 𝒜_1⊂[n] k_1, 𝒜_2⊂[n] k_2, …, 𝒜_t⊂[n] k_t be non-emptycross-intersecting families with |𝒜_i|≥n-1 k_i-1 for some i∈ [t]. Let m_i be the minimum integer among k_j, where j∈ [t]∖{i}. 
If n≥ k_i+k_j for all j∈ [t]∖{i}, then∑_1=j^td_j|𝒜_j|≤max{d_in k_i-d_in-m_i k_i+∑_j=1, j i^td_jn-m_i k_j-m_i, ∑_j=1^td_jn-1 k_j-1}.The equality holds if and only if one of the following holds.(1) If d_in k_i-d_in-k_t k_i+∑_j i^td_jn-m_i k_j-m_i≥∑_j=1^td_jn-1 k_j-1, thenthere is some m_i-element set T⊂ [n] such that 𝒜_i={F∈[n] k_i: F∩ T∅} and 𝒜_j={F∈[n] k_j: T⊂ F} for each j∈ [t]∖{i}.(2) If d_in k_i-d_in-k_t k_i+∑_j i^td_jn-m_i k_j-m_i≤∑_j=1^td_jn-1 k_j-1, then there is some a∈ [n] such that 𝒜_j={F∈[n] k_j: a∈ F} for each j∈ [t]. (3) If t=2 and n=k_i+k_3-i.If d_i≤ d_3-i,then 𝒜_3-i⊆[n] k_3-i with |𝒜_3-i|=n-1 k_3-i-1 and 𝒜_i=[n] k_i∖𝒜_3-i. (4) If n=k_i+k_j holds for every j∈ [t]∖{i} and ∑_j id_j=d_i, then𝒜_j=𝒜 for all j∈ [t]∖{i}, where 𝒜⊆[n] k is an intersecting family with size |𝒜|=n-1 k-1, and 𝒜_i=[n] k_i∖𝒜. |ℱ_1|≥n-1 k_1-1 and |ℱ_2|≥n-1 k_2-1, in other words, {1, n-k_1+2, …, n}≺ I_1 and {1, n-k_2+2, …, n}≺ I_2.If|ℱ_i|< n-1 k_i-1 for each i∈ [t], then ∑_i=1^t|ℱ_i|< λ_1, a contradiction to (<ref>). So there is i∈ [t] such that |ℱ_i|≥n-1 k_i-1.Suppose that i∈ [3, t] firstly. Since k_1≥ k_2, n≥ k_1+k_3 and therefore n≥ k_i+k_j for all j∈ [t]∖{i}. Let m_i=min{k_j, j∈ [t]∖{i}}.Taking d_1=d_2=…=d_t in Theorem <ref>, we obtain∑_j=1^t|ℱ_j|≤{∑_j=1^tn-1 k_j-1, n k_i-n-k_t k_i+∑_j=1,j i^tn-m_i k_j-m_i}.In <cit.> (Proposition 2.20 <cit.>), we have shown that n k_i-n-k_t k_i+∑_j=1,j i^tn-m_i k_j-m_i≤n k_1-n-k_t k_1+∑_j=2^tn-k_t k_j-k_t.So∑_j=1^t|ℱ_j| ≤{∑_j=1^tn-1 k_j-1,n k_1-n-k_t k_1+∑_j=2^tn-k_t k_j-k_t}≤{λ_1, λ_2}, where the last inequality holds by k_t≥ 2 and n k_2-n-k_t k_2>n-k_t k_2-k_t. Thus if max{λ_1, λ_2}=λ_2, then the last inequality holds strictly, it makes a a contradiction to (<ref>).Suppose that max{λ_1, λ_2}=λ_2. Then Claim <ref> follows from Theorem <ref> (see the item (2) of Theorem <ref>).So i∈ [2].Without loss of generality, let i=1. Since n<k_1+k_2, thenany two families𝒢⊆[n] k_1 andℋ⊆[n] k_2 are cross intersecting freely. Since |ℱ_1|≥n-1 k_1-1 and ℱ_1 is L-initial, {1, n-k_1+2, …, n}≺ I_1. Note that n> k_1+k_j for each j∈ [3, t] and ℱ_1 is cross intersecting with ℱ_j but not freely for each j∈ [3, t],every member of ℱ_j contains 1. Since (ℱ_1, …, ℱ_t) is extremal, all k_2-subsets contianing 1 are contained in ℱ_2, so {1, n-k_2+2, …, n}≺ I_2, this implies |ℱ_2|≥n-1 k_2-1. This completes the proof of Claim <ref>. The following observation is simple and frequently used in this paper.Let d≥ 2 be an integer, we consider d L-initial families ℒ(A_1, a_1), …, ℒ(A_d, a_d). For i∈ [d] and let S⊆ [d]∖{i} be the set of all j∈ [d]∖{i} satisfying that n≥ a_i+a_j. Suppose that {1, n-a_i+2, …, n}≼A_i. If ℒ(A_i, a_i) and ℒ(A_j, a_j) are cross intersecting for each j∈ S, then ℒ(A_j, a_j), j∈ S are pairwise cross intersecting families since for any j∈ S, every member of ℒ(A_j, a_j) contains 1. Combining Claim <ref>, Proposition <ref> and (ℱ_1, …, ℱ_t) is extremal, we have that for i∈ [3, t], I_i is k_i-partner of I_1 (if I_2≺ I_1) or I_2 (if I_1≺ I_2). Since (ℱ_1, …, ℱ_t) is extremal, I_i (ID of ℱ_i) must be contained inℛ_i, which are defined as below.ℛ_1={R∈[n] k_1:{1, n-k_1+2, …, n}≺ R ≺{k_t, n-k_1+2, …, n}}. ℛ_2={ R∈[n] k_2:{1, n-k_2+2, …, n}≺ R ≺{k_t, n-k_2+2, …, n}}. 
For i∈[3, t-1], let ℛ_i ={ R∈[n] k_i: [k_t]∪[ n-k_i+k_t+1, n] ≺ R ≺{1, n-k_i+2, …, n}}, ℛ_t ={ R∈[n] k_t:{1, 2, …, k_t}≺ R ≺{1, n-k_t+2, …, n}}.For a family 𝒜_1⊆[n] k_1 with ID A_1, let m(n, A_1)= max{∑_j∈ [t]∖{1}|𝒜_j|: 𝒜_j⊆[n] k_j is L-initial and(𝒜_1, …, 𝒜_t)is an(n, k_1, …, k_t)-cross intersecting system, wherek_1+k_3≤ n<k_1+k_2}.For L-initial cross intersecting families 𝒜_1⊆[n] k_1 and 𝒜_2⊆[n] k_2 with IDs A_1 and A_2 respectively,letm(n, A_1, A_2)= max{∑_j∈ [t]∖{1, 2}|𝒜_j|: 𝒜_j⊆[n] k_j is L-initial and(𝒜_1, …, 𝒜_t)is an(n, k_1, …, k_t)-cross intersecting system, wherek_1+k_3≤ n<k_1+k_2}. If I_1={1, n-k_1+2, …, n} or I_1={k_t, n-k_1+2, …, n} (or I_2={1, n-k_2+2, …, n} or I_2={k_t, n-k_2+2, …, n}), then ∑_i=1^t|ℱ_i|= {λ_1, λ_2}.The proofs for I_1 and I_2 are similar, so we only provefor I_1. Suppose that I_1={1, n-k_1+2, …, n} firstly. Since {1, n-k_2+2, …, n}≺ I_2, I_1≺ I_2. ByFact <ref>, we have m(n, I_1)=M(n, k_2, k_3, …, k_t).Since (ℱ_1, …, ℱ_t) is extremal,∑_i=1^t|ℱ_i|=n-1 k_1-1+m(n, I_1)          (<ref>)=n-1 k_1-1+M(n, k_2, k_3, …, k_t)      Theorem <ref>=n-1 k_1-1+{∑_i=2^tn-1 k_i-1,n k_2-n-k_t k_2+∑_i=3^tn-k_t k_i-k_t}.Since k_t≥ 2,n-1 k_1-1+n k_2-n-k_t k_2+∑_i=3^tn-k_t k_i-k_t<λ_2.By (<ref>), ∑_i=1^t|ℱ_i|=λ_1, as desired.Next we consider I_1={k_t, n-k_1+2, …, n}. For each i∈ [3, t], since ℱ_i and ℱ_1 are cross intersecting and n> k_1+k_3≥ k_1+k_i,every element of ℱ_i must contain [k_t].Since ℱ_i and ℱ_1 are cross intersecting for every i∈ [3, t], k_2+k_3≤ n<k_1+k_2 and (ℱ_1, …, ℱ_t) is extremal,we can see that I_2={k_t, n-k_2+2, …, n}, that is the last element of ℛ_2. Therefore, ∑_i=1^t|ℱ_i|=λ_2, as desired.By Proposition <ref>, the proof of the quantitative part of Theorem <ref> will follow from the following proposition.If {1, n-k_1+2, …, n}⪵ I_1⪵{k_t, n-k_1+2, …, n}and {1, n-k_2+2, …, n}⪵ I_2⪵{k_t, n-k_2+2, …, n}, then ∑_i=1^t|ℱ_i|<max{λ_1, λ_2}.In order to prove Proposition <ref>, we need some preparations.Let ℱ_2,3⊆ℛ_2 and ℱ_3^2⊆ℛ_3 be such that ℱ_2,3 and ℱ_3^2 are maximal pair families.The followinglemma whose proof will be given in Section <ref> is crucial.It tells us that for an extremal (n, k_1, …, k_t)-cross intersecting system (ℱ_1, …, ℱ_t) with k_1+k_3≤ n <k_1+k_2, the ID I_1 of ℱ_1 is the parity of the ID I_2 of ℱ_2. Let (ℱ_1, …, ℱ_t) be anextremal L-initial (n, k_1, …, k_t)-cross intersecting system with IDs I_1, I_2, …, I_tof ℱ_1, ℱ_2, …, ℱ_t respectively.Then I_1 and I_2satisfyI_2∈ℱ_2, 3andI_1is the k_1-parity of I_2.Recall that k_1>k_2≥…≥ k_t, and k_1+k_3≤ n <k_1+k_2.Let G_2∈ℱ_2, 3. Let G_1 be the k_1-parity of G_2. Then from Fact <ref>, we have the following claim.For any j∈ [3, t], G_1 and G_2 have the same k_j-partner.Fixing any G_2∈ℱ_2, 3, let G_1 be the k_1-parity of G_2, and for any i∈ [3, t], letT_i by the k_i-partner of G_2. By Claim <ref> and Remark <ref>, we can see thatℒ(T_3, k_3),…, ℒ(T_t, k_t) are pariwise cross-intersecting. Furthermore, combiningFact <ref> and Remark <ref>,we conclude thatm(n, G_1, G_2) =∑_i=3^t|ℒ(T_i, k_i)|.Now we are ready to express M(n, k_1, …, k_t) in terms of a function of G_2. Let G_2∈ℱ_2, 3,let G_1 be the k_1-parity of G_2 and for any i∈ [3, t], letT_i be the k_i-partner of G_2. Defineg(G_2)=∑_i=1^2|ℒ(G_i, k_i)|+∑_i=3^t|ℒ(T_i, k_i)|.Then by Lemma <ref> and (<ref>),M(n, k_1, …, k_t)=max_G_2∈ℱ_2, 3{g(G_2)}.For each j∈ [k_2-1], let ℛ_2, j={R∈ℛ_2: [n-j+1, n]⊂ R} and ℛ_2(j)={R∖ [n-j+1, n]: R∈ℛ_2, j}. Denote ℱ_2, 3(j)={R∖ [n-j+1, n]: R∈ℱ_2, 3∩ℛ_2, j}. 
For any j∈ [k_2-1] and any R∈ℱ_2, 3∩ℛ_2, j, we define g(R∖ [n-j+1, n])=g(R).We will prove several key lemmasto show the `local unimodality' of g(G_2) in Section <ref>. Before stating these crucial lemmas, we need to introduce some definitions. Let 𝒜⊂[n] k be a family and c∈ [k]. We say that 𝒜 is c-sequential if there are A⊂ [n] with |A|=k-c and a≥max A (For a set A⊂ [n],denote max A=max{x: x∈ A} and min A=min{x: x∈ A}) such that 𝒜={A⊔{a+1, …, a+c}, A⊔{a+2, …, a+c+1}, …, A⊔{b-c+1, …, b}}, then we say that𝒜 is c-sequential, write A_1c≺ A_2c≺⋯c≺A_b-a-c+1, where A_1=A⊔{a+1, …, a+c}, A_2=A⊔{a+2, …, a+c+1},…, A_b-a-c+1=A⊔{b-c+1, …, b},we also say that A_i and A_j are c-sequential.In particular, if l_2=l_1+1, we write A_l_1c≺A_l_2; if max A_l_2=n, write A_l_1c⟶A_l_2. Note that if |𝒜|=1, then 𝒜 is c-sequential for any c∈ [k]. Let ℱ be a family and F_1, F_2∈ℱ. If F_1⪵ F_2 and there is no F'∈ℱ such that F_1⪵ F' ⪵ F_2, then we say that F_1<F_2 in ℱ, or F_1<F_2 simply if there is no confusion.We will prove the following crucial lemmas in Sections <ref> and <ref>. They show that function g(R) has local unimodality.For any j∈ [0, k_2-1], let 1≤ c≤ k_2-j and F_2, G_2, H_2∈ℱ_2, 3(j) with F_2c≺G_2c≺H_2. Then g(G_2)≥ g(F_2) implies g(H_2)>g(G_2).Let 4≤ j≤ k_2+1 and F_2, G_2, H_2∈ℱ_2, 3 with [2, j]=F_2∖ [n-ℓ(F_2)+1, n], [2, j-1]=G_2∖ [n-ℓ(G_2)+1, n] and [2, j-2]=H_2∖ [n-ℓ(H_2)+1, n].Then g([2, j-1])≥ g([2, j]) implies g([2, j-2])>g([2, j-1]).Let k_t+2≤ j≤ k_t+k_2-1 and F_2, G_2, H_2∈ℱ_2, 3 with [k_t, j]=F_2∖ [n-ℓ(F_2)+1, n], [k_t, j-1]=G_2∖ [n-ℓ(G_2)+1, n] and [k_t, j-2]=H_2∖ [n-ℓ(H_2)+1, n]. Then g([k_t, j-1])≥ g([k_t, j]) implies g([k_t, j-2])>g([k_t, j-1]). Let A be a k_2-subset of [n] with ℓ(A)=p. Suppose A={x_1, …, x_k_2-p, n-p+1, …, n} and A'=A∖{x_k_2-p}∪[n-p, n]. If A∈ℱ_2, 3, then A'∈ℱ_2, 3.Let T and T' be the partners of {x_1, …, x_k_2-p}=A∖ A^ t and {x_1, …, x_k_2-p-1}=A'∖ A'^ t respectively. Since A∈ℱ_2, 3, |T|≤ k_3. Thus, |T'|≤ |T|≤ k_3. Let B=T'∪{n-k_3+|T'|+1, …, n} if |T'|<k_3, otherwise, let B=T'. Then B is the k_3-partner of {x_1, …, x_k_2-p-1}. Fact <ref> implies that (A', B) is maximal, thus A'∈ℱ_2, 3. Let A be a k_2-subset of [n] with ℓ(A)=p≥ 1. Suppose that A={x_1, …, x_k_2-p, n-p+1, …, n} and A'=A∖{n-p+1}∪{x_k_2-p+1}∪ [n-p+2, n] (if p=1, then [n-p+2, n]=∅). If A∈ℱ_2, 3 and {1, n-k_2+2, …, n}⪵ A, then A'∈ℱ_2, 3.Let T and T' be the partners of {x_1, …, x_k_2-p}=A∖ A^ t and {x_1, …, x_k_2-p,x_k_2-p+1}=A'∖ A'^ t respectively. Then |T|=|T'| and max T'-max T=1. Since A∈ℱ_2, 3, by Definition <ref>, there is B∈ℱ_3^2 such that (A, B) is maximal. By Fact <ref>, T=B∖{n-ℓ(B)+1, …, n}, B=T∪{n-k_3+|T|+1, …, n} and max T<n-k_3+|T|. We claim that max T<n-k_3+|T|-1. Since otherwise max T=n-k_3+|T|-1, this implies that |{x_1, …, x_k_2-p}|+|T|=n-k_3+|T| and then|A|+|B|≥ |{x_1, …, x_k_2-p}|+1+ |T|+|{n-k_3+|T|+1, …, n}|=n+1,this is a contradiction to n≥ k_2+k_3 (=|A|+|B|). Let B'=T'∪{n-k_3+|T'|+1, …, n} if |T'|<k_3, and B'=T' otherwise. Thus,max T'=max T+1<n-k_3+|T|= n-k_3+|T'|.Therefore, T'=B'∖{n-ℓ(B')+1, …, n}. Trivially, {x_1, …, x_k_2-p, x_k_2-p+1}=A'∖{n-ℓ(A')+1, …, n}. By Fact <ref> again, (A', B') is maximal and since {1, n-k_2+2, …, n}⪵ A, we have A'∈ℱ_2, 3, as desired.Let ℱ_1,3⊆ℛ_1 and ℱ_3^1⊆ℛ_3 such that ℱ_1,3 and ℱ_3^1 are maximal pair families. By Proposition <ref>, ℱ_3^2⊆ℱ_3^1. 
Let ℱ'_1,3 be the subfamily of ℱ_1,3 such that ℱ'_1,3 and ℱ_3^2 are maximal pair families.Since k_3≥ k_t≥ 2 and n≥ k_1+k_3, by Fact <ref>, it is easy to see that {1, n-k_2+1,…, n}, {2, …, k_2+1}, {k_t, k_t+1, …, k_t+k_2-1}, {k_t, n-k_2+1,…, n}are in ℱ_2, 3. As a consequence of Claim <ref> and Claim <ref>, we have the following observation. {2, …, k_2, n}, {2, …, k_2-1, n-1, n}, …, {2, n-k_2+2, …, n}, {k_t, k_t+1,…, k_t+k_2-2, n},{k_t, k_t+1, …, k_t+k_2-3, n-1, n}, …, {k_t, n-k_2+2, …, n} are in ℱ_2, 3 as well. ℱ'_1, 3 contains {1, n-k_1+2, …, n},{2, …, k_1+1}, {2, …, k_1, n}, {2,…, k_1-1, n-1, n}, …, {2, n-k_1+2, …, n}, {k_t, k_t+1, …, k_t+k_1-1},{k_t, k_t+1, …, k_t+k_1-2, n},{k_t, k_t+1, …, k_t+k_1-3, n-1, n}, …, {k_t, n-k_1+2, …, n}. Note that ℱ'_1, 3⊆ℒ({k_t, n-k_1+2, …, n}, k_1) since we require that (n, k_1, …, k_t)-cross intersecting systems is non-empty. Assuming that Lemmas <ref>, <ref>, <ref> and <ref> hold,we are going to complete the proof of Proposition <ref>. We will proveLemmas <ref>, <ref>, <ref> and <ref> in Sections <ref>, <ref> and <ref>.For a family 𝒢⊆ℱ_2, 3, denote g(𝒢)=max{g(G): G∈𝒢}. By Observation<ref> and Lemma <ref>, we haveg(ℱ_2, 3)=max{g([2, k_2+1]), g([k_t, k_t+1,k_t+k_2-1]), g(ℱ_2, 3(1))}. ApplyingLemma <ref> and Observation<ref> repeatedly, we haveg(ℱ_2, 3(1))=max{g([2, k_2]), g([k_t, k_t+k_2-2]), g(ℱ_2, 3(2))},g(ℱ_2, 3(2))=max{g([2, k_2-1]), g([k_t, k+k_2-3]), g(ℱ_2, 3(3))},⋮g(ℱ_2, 3(k_2-1))<max{g({1}),g({k_t})}.By Lemma <ref>, we havemax{g([2, k_2+1]), g([2,k_2]), …, g({2, 3}), g({2})}=max{g([2,k_2+1]), g({2})},andg({2})<max{g({1}),g({k_t})}.By Lemma <ref>, we havemax{g([k_t, k_t+k_2-1]), g([k_t, k_t+k_2-2]), …, g({k_t, k_t+1})}=max{g([k_t, k_t+k_2-1]),g({k_t, k_t+1}) },andg({k_t, k_t+1})<max{g([k_t, k_t+k_2-1]),g({k_t}) }.In Section <ref>, we will prove the following proposition.g([2, k_2+1]) <max{g({1}), g([2, k_2])}g([k_t, k_t+k_2-1]) <max{ g({k_t-1}), g([k_t, k_t+k_2-2])}.Combining (<ref>), (<ref>) and (<ref>), we haveg([2, k_2+1])<max{g({1}), g({k_t})}.Combining (<ref>), (<ref>) and (<ref>), we haveg([k_t, k_t+k_2-1])<max{ g({k_t}),g({k_t-1})}≤max{g({1}), g({k_t})}. Combining (<ref>), (<ref>), (<ref>), (<ref>), (<ref>), (<ref>),(<ref>) and (<ref>), we haveg(ℱ_2, 3)< max{ g({1}),g({k_t})}.Recall (<ref>), we obtainM(n, k_1, …, k_t) =max_G_2∈ℱ_2, 3g(G_2)<max{ g({1}),g({k_t})}.This completes the proof of Proposition <ref>.We apply Propositions <ref> and <ref> to prove Theorem <ref>.Propositions <ref> and <ref> imply the quantitative part of Theorem <ref>. Now we show that extremal (n, k_1, …, k_t)-cross intersecting systems with k_1+k_3≤ n <k_1+k_2 must be isomorphic to Constructions <ref> or <ref>. By Claim <ref>, Propositions <ref> and <ref>, and Theorem <ref>, we conclude that: if (ℱ_1, …, ℱ_t) is an (n, k_1, …, k_t)-cross intersecting system with ∑_i=1^t|ℱ_i|=max{λ_1, λ_2}, then either for each i∈ [t], |ℱ_i|=n-1 k_i-1 or |ℱ_1|=n k_1-n-k_t k_1, |ℱ_2|=n k_2-n-k_t k_2 and |ℱ_i|=n-k_t k_i-k_t holds for each i∈ [3, t]. If the previous happens, then (ℱ_1, ℱ_3,…, ℱ_t) is an (n, k_1, k_3, …, k_t)-cross intersecting system with ∑_i=1, i 2^t|ℱ_i|=∑_i=1, i 2^tn-1 k_i-1and(ℱ_2, ℱ_3,…, ℱ_t) is an (n, k_2, k_3, …, k_t)-cross intersecting system with ∑_i=2^t|ℱ_i|=∑_i=2^tn-1 k_i-1. By Theorem <ref>,(ℱ_1, …, ℱ_t) is isomorphic to (𝒢_1, …, 𝒢_t) which is defined in Construction <ref>. 
If the later happens, then (ℱ_1, ℱ_3,…, ℱ_t) is an (n, k_1, k_3, …, k_t)-cross intersecting system with ∑_i=1, i 2^t|ℱ_i|=n k_1-n-k_t k_1+∑_i=3^tn-k_t k_i-k_tand(ℱ_2, ℱ_3,…, ℱ_t) is an (n, k_2, k_3, …, k_t)-cross intersecting system with ∑_i=2^t|ℱ_i|=n k_2-n-k_t k_2+∑_i=3^tn-k_t k_i-k_t. By Theorem <ref>,(ℱ_1, …, ℱ_t) is isomorphic to (ℋ_1, …, ℋ_t) which is defined in Construction <ref>.This completes the proof of Theorem <ref>.We owe the proofs of Lemmas <ref>, <ref>, <ref>, <ref> and Proposition <ref>. Before giving their proofs, we list some results obtained in <cit.> for non-mixed type in the next section.Figure 1 is the flow chart of the proofs ofthe main theorem and lemmas. § RESULTS OF NON-MIXED TYPEIn <cit.>,we worked on non-mixed type: Let t≥ 2, k_1≥ k_2≥⋯≥ k_t and n≥ k_1+k_2and families 𝒜_1⊂[n] k_1, 𝒜_2⊂[n] k_2, …, 𝒜_t⊂[n] k_t be non-empty pairwise cross-intersecting (not freely). Let R be the ID of 𝒜_1, and Tbe the partner of R. In the proof of Theorem <ref>, one important ingredient is that by Proposition <ref>, ∑_i=1^t|𝒜_i| can be bounded by a function of R as in the following lemma.<cit.> Letk_1≥ k_2≥…≥ k_t, n≥ k_1+k_2 and (𝒜_1, 𝒜_2, …, 𝒜_t) be a non-empty L-initial (n, k_1, k_2, …, k_t)-cross intersecting system with |𝒜_1|≥n-1 k_1-1. Let R be the ID of 𝒜_1 and T be the partner of R. Then∑_j=1^t|𝒜_j|≤ |ℒ(R, k_1)| +∑_j=2^t |ℒ(T, k_j)| =: f(R). Another crucial part in the proof of Theorem <ref> is to showlocal unimodality of f(R). Let R and R'bek_1-subsets of [n] satisfying R≺ R'. In order to analyze f(R')-f(R), two related functions are defined in <cit.> as follows. Let t≥ 2, k_1, k_2, …, k_t be positive integers with k_1≥ k_2≥…≥ k_t and n≥ k_1+k_2. Let R and R'bek_1-subsets of [n] satisfying R≺ R' and let T and T' be the partners of R and R' respectively. Wedefineα(R, R'):=|ℒ(R', k_1)|-|ℒ(R, k_1)|,β(R, R'):=∑_j=2^t(|ℒ(T, k_j)|-|ℒ(T', k_j)|). It is easy to see thatf(R')-f(R)=α(R, R')-β(R, R').Letℛ_1={ R∈[n] k_1:{1, n-k_1+2, …, n}≺ R≺{k_t, n-k_1+2, …, n}}.In order to show local unimodality of f(R), weproved thefollowingresults in<cit.>. These results give us some foundation in showing local unimodality of g(G_2) (recall (<ref>)).<cit.> Let F<G∈ℛ_1 and max G=q. Thenβ(F, G)=∑_j=2^tn-q k_j-(q-k_1). If n= k_1+k_j holds for any j∈ [t]∖{1}, then β(F, G)=∑_j=2^t 1=t-1; otherwise, we have β(F, G)≥ 0 and β(F, G) decrease as q increase. <cit.> Let c∈ [k_1] and F, G, F', G'∈ℛ_1. If F, G are c-sequential, F', G' are c-sequential and max F=max F', max G=max G', then α (F, G)=α (F', G') and β (F, G)=β (F', G').For k∈ [k_1-1], letℛ_1, k={R∈ℛ_1: [n-k+1, n]⊂ R}, and ℛ_1(k)={R∖ [n-k+1, n]: R∈ℛ_1, k}. In addition, we will write ℛ_1(0)=ℛ_1.When we consider ℛ_i(k), we regard the ground set as [n-k]. For R∈ℛ_i(k), we write f(R) simply, in fact, f(R)=f(R∪ [n-k+1, n]). <cit.> Let 1≤ j≤ k_1-1 and 1≤ d≤ k_1-j. Let F, H, F', H'∈ℛ_1(j) and F, H are d-sequential, F', H' are d-sequential. If max F=max F', then α(F, H)=α(F', H') and β(F, H)=β(F', H'). The following two lemmas confirm that f(R) has local unimodality. <cit.> Suppose that if t=2, then n>k_1+k_t. For any j∈ [0, k_1-1], let 1≤ c ≤ k_1-j and F, G, H be contained in ℛ_1(j) with Fc≺Gc≺H. If f(G)≥ f(F), then f(H)> f(G). <cit.> Suppose that if t=2, then n>k_1+k_t. Let 1≤ m ≤ k_t and m+1 ≤ j ≤ m+k_1-1. If f_1([m, j])≤ f_1([m, j-1]), then f([m, j-1])< f([m, j-2]).§ VERIFY PARITY: PROOF OF LEMMA <REF> In this section,we will give the proofof Lemma <ref>. The proof is divided intotwo cases.Case 1: I_1≺ I_2. 
By Fact <ref> , Fact <ref>, Remark <ref> and (ℱ_1, …, ℱ_t) is extremal, we have that for each i∈ [3, t], I_i is the k_i-partner of I_2. By Fact <ref>, for all i∈ [4, t], we haveI_i≺ I_3orI_3is the k_3-parity ofI_i .(I_2, I_3) is maximal. Assume on the contrary that (I_2, I_3) is not maximal. Let F_2, i be the k_2-partner of I_i for each i∈ [3, t], by Fact <ref>, (I_2, i, I_i) is maximal. Since (I_2, I_3) is not maximal, I_2⪵ I_2, 3. By (<ref>) and Fact <ref>, we have I_2, 3≺ I_2, j for all j∈ [4, t]. This implies that for i∈ [3, t], ℒ(I_2, 3, k_2) and ℒ(I_i, k_i) are cross intersecting. Further more, (ℱ_1, ℒ(I_2, 3, k_2), ℱ_3, …, ℱ_t ) is an (n, k_1, …, k_t)-cross intersecting system. However, I_2⪵ I_2, 3 implies ℱ_2 ⫋ℒ(I_2, 3, k_2), this is a contradiction to the assumption that (ℱ_1, ℱ_2, …, ℱ_t) is extremal.Claim <ref> tells us that I_2 ∈ℱ_2, 3. By Proposition <ref>, ℱ_1, 3 is the k_1-parity of ℱ_2, 3. Then there exists I'_1∈ℱ_1, 3 such that I'_1 is the k_1-parity of I_2. If I_1=I'_1, then I_1 and I_2 satisfy (<ref>), as desired. So we may assumeI_1 I'_1. By Proposition <ref>, (I'_1, I_3) is maximal. Since ℱ_1 and ℱ_3 are cross intersecting, I_1⪵ I'_1. Let I'_i be the k_i-partner of I'_1 for each i∈ [3, t]. It follows from Fact <ref> that I'_i=I_i, i∈ [3, t]. Therefore, (ℒ(I'_1, k_1), ℱ_2,…, ℱ_t ) is an (n, k_1, …, k_t)-cross intersecting system. However, I_1⪵ I'_1 implies ℱ_1 ⫋ℒ(I'_1, k_1), this is a contradiction to the assumption that (ℱ_1, ℱ_2, …, ℱ_t) is extremal.Case 2: I_2⪵ I_1.Using a similar argument to Case 1, we conclude the following three properties:(a) I_i is the k_i-partner of I_1 for each i∈ [3, t];(b) for all i∈ [4, t], I_i≺ I_3 or I_3 is the k_3-parity ofI_i;(c) (I_1, I_3) is maximal and I_1 ∈ℱ_1, 3.If (I_2, I_3) is maximal, then I_2 ∈ℱ_2, 3, and Proposition <ref> implies that I_1 is the k_1-parity of I_2. However, I_2⪵ I_1 implies thatI_1 is not the k_1-parity of I_2, a contradiction. Therefore (I_2, I_3) is not maximal. Since (ℱ_1, ℱ_2, …, ℱ_t) is extremal, combining (b), Fact <ref> and Fact <ref>, we have that I_2 is the k_2-partner of I_3. Therefore, Fact <ref> implies that the k_3-partner I'_3 of I_2 is such that (I_2, I'_3) is maximal. So I_2∈ℱ_2, 3. By Proposition <ref>, there exists I'_1∈ℱ_1, 3 such that I'_1 is the k_1-parity of I_2. For i∈ [3, t], let I'_i be the k_i-partner of I_2. It follows from Facts <ref> and<ref> that I'_i is maximal to I'_1 for all i∈ [3, t]. Combining with Remark <ref>, we conclude that (ℒ(I'_1, k_1), ℱ_2, ℒ(I'_3, k_3), …, ℒ(I'_t, k_t)) is an (n, k_1, …, k_t)-cross intersecting system. Since (ℱ_1, …, ℱ_t) is extremal,|ℒ(I'_1, k_1)|+|ℱ_2|+∑_i=3^t|ℒ(I'_i, k_i)|≤∑_i=1^t|ℱ_i|.This implies that∑_i=1, i 2^t|ℒ(I'_i, k_i)|≤∑_i=1, i 2^t|ℱ_i|. Let H_1=I_1∖ I_1^ t andg=|H_1|. Recall that I_2 is the k_2-partner of I_3, (I_2, I_3) is not maximal and (I_1, I_3) is maximal. It follows from Facts <ref> and <ref> thatk_2< g.Let H_2={x_1, …, x_i, x_i+1, …, x_k_2} be the first k_2 elements of H_1.Let i∈ [0, k_2-1] be thesubscript such that x_i+2≤ x_i+1 and x_j+1=x_j+1 for all j∈ [i+1, k_2-1], where i=0 if x_j+1=x_j+1 for all j∈ [k_2-1]. Since I_2 is the k_2-partner of I_3 and (I_1, I_3) is maximal, we can see that if i<k_2-1, then I_2={x_1, …, x_i, x_i+1-1, n-k_2+i+2, …, n},if i=k_2-1, then I_2={x_1, …, x_k_2-1, x_k_2-1}. Since I'_1 is the k_1-parity of I_2 and k_1>k_2, we get ℓ(I'_1)>0 andI'_1={x_1, …, x_i, x_i+1-1, n-k_1+i+2, …, n},where i≤ k_2-1. I'_1⪵ I_1⪵ I”_1:={x_1, …,x_i+1, n-k_1+i+2,…, n}.Obviously, I'_1⪵ I_1. We are going to prove I_1⪵ I”_1. 
Notice that {x_1, …, x_i, x_i+1}⊂ I_1 and |I”_1|=k_1, these imply I_1 ≺ I”_1. If I_1=I”_1, then H_1={x_1, …, x_i+1}. So g=i+1≤ k_2 since i≤ k_2-1, this is a contradiction to k_2<g. Since (ℱ_1, ℱ_2, …, ℱ_t) is an (n, k_1, k_2, …, k_t)-cross intersecting system with k_1+k_3≤ n<k_1+k_2, (ℱ_1, ℱ_3, …, ℱ_t) is an (n, k_1, k_3, …, k_t)-cross intersecting system with n≥ k_1+k_3. Denotem'=max{∑_i=1^t|𝒢_i|:(𝒢_1, 𝒢_3, …, 𝒢_t)is an L-initial(n, k_1, k_3, …, k_t)-cross intersectingsystem with n≥ k_1+k_3, the ID G_1of 𝒢_1satisfiesI'_1≺ G_1≺ I”_1 }. Suppose that (ℒ(G_1, k_1), ℒ(G_3, k_3), …, ℒ(G_t, k_t)) is an (n, k_1, …, k_t)-cross intersecting system with n≥ k_1+k_3 such that m' =∑_i=1, i 2^t|𝒢_i| and I'_1≺ G_1≺ I”_1.Since I'_1 is the k_1-parity of I_2 andI'_1≺ G_1≺ I”_1, we have that either I_2 ≺ G_1 (if G_1 I'_1) or G_1 is the k_1-parity of I_2 (if G_1= I'_1). Thus, (ℒ(G_1, k_1), ℱ_2,ℒ(G_3, k_3), …, ℒ(G_t, k_t)) is an (n, k_1, …, k_t)-cross intersecting system as well. Since (ℱ_1, ℱ_2, …, ℱ_t) is extremal,we have ∑_i=1, i 2^t|ℱ_i|≥∑_i=1, i 2^t|𝒢_i|=m'. Therefore,m'=∑_i=1, i 2^t|ℱ_i|.To complete the proof of Lemma <ref>, we only need to prove the following claim since it makes a contradiction to (<ref>) and Claim <ref>. Let (𝒢_1, 𝒢_3, …, 𝒢_t) be an L-initial (n, k_1, k_3, …, k_t)-cross intersecting system with n≥ k_1+k_3 andG_1 be the ID of 𝒢_1 satisfying I'_1≺ G_1≺ I”_1.Then ∑_i=1, i 2^t|𝒢_i|=m' if and only ifG_1=I'_1 orG_1=I”_1. For each i∈ [3, t], let T_i be the k_i-partner of G_1. By Remark <ref> and Lemma <ref>, we have∑_i=1, i 2^t|𝒢_i|≤|ℒ(G_1, k_1)|+∑_i=3^t|ℒ(T_i, k_i)|=:f_1(G_1).By the definition of m',m'=max{f_1(R): R ∈[n] k_1 andI'_1≺ R≺ I”_1}. Denoteℱ={F∈[n] k_1: I'_1 ≺ F ≺ I”_1}.Then m'=f_1(ℱ) (recall that f_1(ℱ)=max{f_1(F): F∈ℱ}). For each j∈ [k_1], denoteℱ(j)={F∖ [n-j+1, n]: F∈ℱ, [n-j+1, n]⊆ F}. To prove Claim <ref> is equivalent to provethe following claim.f_1(ℱ)=max{f_1(I'_1), f_1(I”_1)}, and for any F∈ℱ with I'_1⪵ F ⪵ I”_1, we have f_1(F)<f_1(ℱ).By the definitions of I'_1 and I”_1,we can see that ℓ(I'_1)=ℓ(I”_1)=:k>0, ℱ(k)={I'_1∖ [n-ℓ(I'_1)+1, n],I”_1∖ [n-ℓ(I”_1)+1, n]} and for each F∈ℱ, ℓ(F)≤ k. DenoteA_0 ={x_1, …, x_i, x_i+1, x_i+1+1, …, x_i+1+k_1-i-1},A_1 ={x_1, …, x_i, x_i+1, x_i+1+1,…, x_i+1+k_1-i-2}∪{n},⋮A_k-1 ={x_1, …, x_i, x_i+1, x_i+1+1}∪ [n-k_1+i+3, n]. Then A_0 is the k_1-set with F'_1<A_0. Applying Lemma <ref> repeatedly, we obtainf_1(ℱ) =max{f_1(A_0), f_1(ℱ(1))},f_1(ℱ(1)) =max{f_1(A_1), f_1(ℱ(2))},⋮f_1(ℱ(k-1)) =max{f_1(A_k-1), f_1(ℱ(k))},f_1(ℱ(k)) =max{f_1(F'_1), f_1(F”_1)}.Thusf_1(ℱ)=max{f_1(A_0), f_1(A_1), …, f_1(A_k-1), f_1(I'_1), f_1(I”_1)}.Since k_1>k_2 and n≥ k_2+k_t,n> k_1+k_t. Then by Lemma <ref> again, we can see that for any F∈ℱ∖{A_0, A_1, …, A_k-1, I'_1, I”_1}, we have f_1(F)<f_1(ℱ). Combing with thefollowing Claims <ref> and <ref>, we will get Claim <ref>. max{f_1(A_0), f_1(A_1), …, f_1(A_k-1), f_1(I”_1)}=max{f_1(A_0), f_1(I”_1)}.If k=1, then we are fine. We may assume that k≥ 2. Let I”_1=A_k. To prove Claim <ref>, it is sufficient to prove that for any j∈ [0, k-2 ], if f_1(A_j)≤ f_1(A_j+1), then f_1(A_j+1)<f_1(A_j+2). Let j∈ [0, k-2]. Clearly, ℓ(A_j+1)=ℓ(A_j)+1≥ 1 and ℓ(A_j+2)=ℓ(A_j+1)+1. Let A'_j and A'_j+1 be the k_1-sets such that A_j<A'_j and A_j+1<A'_j+1. Then A'_jℓ(A_j+1)⟶A_j+1. Let J be the k_1-set such thatA'_j+1ℓ(A_i+1)⟶J. Then A'_j, A_j+1 are ℓ(A_i+1)-sequential and A'_j+1, J are ℓ(A_i+1)-sequential. Clearly, max A'_j=max A'_j+1 and max A_j+1=max J=n. 
Applying Lemma <ref>, we haveα(A'_j, A_j+1)=α(A'_j+1, J) and β(A'_j, A_j+1)=β(A'_j+1, J).If ℓ(A'_j)≥ 1, then A_j<A'_j=A_j+1<A_j+2=A'_j+1. In this case, α(A_j+1, A_j+2)=1. ByProposition <ref> and n>k_1+k_t (since k_1>k_2 and n≥ k_2+k_t), we have β(A_j+1, A_j+2)=0. Sof(A_j+1)< f(A_j+2), as desired. Next we may assume that ℓ(A'_j)=0. Clearly, α(A_j, A'_j)=α(A_j+1, A'_j+1)=1. By Proposition <ref> and max A'_j=max A'_j+1, we have β(A_j, A'_j)=β(A_j+1, A'_j+1). Combining with (<ref>), we getα(A_j, A_j+1)=α(A_j+1, J) and β(A_j, A_j+1)=β(A_j+1, J).Thus f(J)≥ f(A_j+1) since f(A_j)≤ f(A_j+1). Note that ℓ(A_j+1)=ℓ(J) and A_j+1∖ A_j+1^ t∈ℛ_1(ℓ(A_j+1)), so J∖ J^ t∈ℛ_1(ℓ(A_j+1)) and A_j+1∖ A_j+1^ t1≺J∖ J^ t. Since ℓ(A_j+2)=ℓ(A_j+1)+1, A_j+2∖ [n-ℓ(A_j+1)+1, n]∈ℛ_1(ℓ(A_j+1)). And in ℛ_1(ℓ(A_j+1)),we haveA_j+1∖ A_j+1^ t1≺J∖ J^ t1⟶A_j+2∖ [n-ℓ(A_j+1)+1, n].Recall that n>k_1+k_t. By Lemma <ref> and f(J)≥ f(A_j+1), we obtain f(A_j+2)>f(J)≥ f(A_j+1), as required.If f_1(A_0)≥ f_1(I'_1), then f_1(A_0)< f_1(I”_1). Note that I'_1<A_0 in ℛ_1. Then α(I'_1, A_0)=1. Since f_1(A_0)≥ f_1(I'_1), α(I'_1, A_0)≥β(I'_1, A_0), so β(I'_1, A_0)≤ 1. Consider the family ℱ':={F: |F|=k_1, A_0≺ F≺ I”_1}. We can see that for each F∈ℱ', max F≥max A_0. By Proposition <ref> and n>k_1+k_t (since k_1>k_2 and n≥ k_2+k_t), for each two k_1-sets F and G with A_0 ≺ F<G≺ I”_1, we have that β(F, G)<β(I'_1, A_0)≤ 1 and α(F, G)=1. Thus, f_1(I”_1)>f_1(A_0). This completes the proof of Claim <ref> Since we have shown Claims <ref> and <ref>,the proof of Claim <ref> is complete.Since the proof of Claim <ref> is complete,Claim <ref> follows as noted before.Since the proof of Claim <ref> is complete, theproof of Lemma <ref> is complete.§ VERIFY UNIMODALITY: PROOF OF LEMMA <REF>In this section, we are going to prove a more general result Lemma <ref> which will be applied for the most general case n≥ k_1+k_t in <cit.>. Lemma <ref> will follow from Lemma <ref>immediately. Beforestating the result, we need to make some preparations. Suppose that t≥ 2 is a positive integer, s∈ [t-1], k_1≥ k_2≥…≥ k_t and k_1+k_s+1≤ n <k_s-1+k_s. We define ℛ_i for every i∈ [t] as follows. For i∈ [s], letℛ_i={R∈[n] k_i: {1, n-k_i+2, …, n}≺ R ≺{k_t, n-k_i+2, …, n}}.For i∈ [s+1, t], letℛ_i={ R∈[n] k_i: [k_t]⊆ R }.In Section <ref>, we defined notations ℛ_i, 1≤ i ≤ t. Throughout the paper except in this section, ℛ_i follows from the definitions in Section <ref>. ℛ_i in this section follows from the above definition. When s=2, they are consistent.Let R and T be two subsets of [n] with different sizes. We write R∼< T if R≺ T and there is no other set R' such that|R'|=|R| and R⪵ R' ≺ T.By the definition of parity,we have the following simple remark.For any R∈ℛ_1 and i∈ [2, s], R has a k_i-parity if and only if |R∖ R^ t|≤ k_i.From the above observation, for any R_1∈ℛ_1 and i∈ [2, s], we define the corresponding k_i-set of R_1 as follows.Let R_1∈ℛ_1 and i∈ [2, s]. If R has a k_i-parity, then let R_i be the k_i-parity of R_1; otherwise, let R_i be the k_i-set such that R_i∼< R_1. We call R_i the corresponding k_i-set of R_1.Let R_1∈ℛ_1. For each i∈ [2, s], let R_i bethe corresponding k_i-set of R_1as in Definition <ref> and for each i∈ [s+1, t], let R_i be the k_i-partner of R_1. We denotef(R_1)=∑_i=1^t |ℒ(R_i, k_t)|. For each j∈ [k_1-1], let ℛ_1, j ={R∈ℛ_1: [n-j+1, n]⊆ R}, ℛ_1(j) ={R∖ [n-j+1, n]: R∈ℛ_1, j}.For any j∈ [k_1-1] and any R∈ℛ_1, j, we define f(R∖ [n-j+1, n])=f(R). 
For example, f({1})=f({1, n-k_1+2, …, n}).Let R_1, R'_1∈ℛ_1 with R_1≺ R'_1 and for any i∈ [s+1, t], R_i, R'_i be the k_i-partners of R_1, R'_1 respectively, and for each i∈ [2, s], let R_i, R'_i be the corresponding k_i-sets of R_1, R'_1 respectivelyas in Definition <ref>.We define the following functions.For i∈ [s], letα_i(R_1, R'_1) =|ℒ(R'_i, k_1)|-|ℒ(R_i, k_1)|, γ(R_1, R'_1) =∑_i=1^sα_i(R_1, R'_1), δ(R_1, R'_1) =∑_i=s+1^t(|ℒ(R_i, k_i)|-|ℒ(R'_i, k_i)|). For any j∈ [k_1-1] and any R_1, R'_1 ∈ℛ_1, j with R_1≺ R'_1, we define α_i(R_1∖ [n-j+1, n], R'_1∖ [n-j+1, n])=α_i(R_1, R'_1) for each i∈ [s], γ(R_1∖ [n-j+1, n], R'_1∖ [n-j+1, n])=γ(R_1, R'_1) and δ(R_1∖ [n-j+1, n], R'_1∖ [n-j+1, n])=δ(R_1, R'_1). From the above definition, we have f(R'_1)-f(R_1)=γ(R_1, R'_1)-δ(R_1, R'_1). Suppose that A_1, B_1, C_1∈ℛ_1 with A_1≺ B_1≺ C_1. Then for any i∈ [s], α_i(A_1, C_1)=α_i(A_1, B_1)+α_i(B_1, C_1), and γ(A_1, C_1)=γ(A_1, B_1)+γ(B_1, C_1),δ(A_1, C_1)=δ(A_1, B_1)+δ(B_1, C_1).We will prove the following more general lemma, which implies Lemma <ref>. Let k∈ [0, k_1-1] and F_1, G_1, H_1 ∈ℛ_1(k) with F_1c≺ G_1 c≺ H_1 for some c∈ [k_1]. If f(F_1)≤ f(G_1), then f(G_1)<f(H_1). Let us explain why Lemma <ref> implies Lemma <ref>. Take s=2 (see Definition <ref>). Take F_2, G_2, H_2 ∈ℱ_2, 3(j) satisfying the condition F_2 c≺G_2c≺H_2 (see Lemma <ref>). Let F'_2=F_2∪ [n-j+1, n], G'_2=G_2∪ [n-j+1, n] andH'_2=H_2∪ [n-j+1, n]. Then F'_2, G'_2, H'_2 ∈ℱ_2, 3 and g(F_2)=g(F'_2), g(G_2)=g(G'_2) and g(H_2)=g(H'_2) (see Definition <ref>). Let F'_1, G'_1, H'_1 be the k_1-parities of F'_2, G'_2, H'_2 respectively. (Proposition <ref>) guarantees the k_1-parities of F'_2, G'_2, H'_2 exist.) Take k=j+k_1-k_2 in Lemma <ref>. Let F_1=F'_1∖ [n-k+1, n], G_1=G'_1∖ [n-k+1, n] and H_1=H'_1∖ [n-k+1, n]. Then F_1, G_1, H_1 ∈ℛ_1(k) with F_1c≺G_1c≺H_1 and f(F_1)=g(F'_1), f(G_1)=f(G'_1) and f(H_1)=f(H'_1) (see Definition <ref>). By Fact <ref>, for each i∈ [3, t], F'_1 and F'_2 have the same k_i-partner, G'_1 and G'_2 have the same k_i-partner and H'_1 and H'_2 have the same k_i-partner. Therefore, g(F_2)=g(F'_2)=f(F'_1)=f(F_1), g(G_2)=g(G'_2)=f(G'_1)=f(G_1) and g(H_2)=g(H'_2)=f(H'_1)=f(H_1). Now applying Lemma <ref>, we have that if f(F_1)=f(F'_1)=g(F_2)≤ g(G_2)=f(G'_1)=f(G_1), then g(G_2)=f(G'_1)=f(G_1)<f(H_1)=f(H'_1)=g(H_2). Before proving Lemma <ref>, we need to make some preparations. Denotes'={i: i∈ [s], k_i=k_1}.The following remark isuseful.When k_1=k_2=…=k_s,the authors have proved the truth of Theorem <ref> in <cit.>, see Corollary 1.12 in <cit.> (taking k=k_1=…=k_s in Corollary 1.12). So we may assume that s'<s. So n>k_s'+k_s+1. LetR_1, R'_1∈ℛ_1 with R_1<R'_1 and R=R'_1∖R'_1^ t Then for each i∈ [s], we haveα_i(R_1, R'_1)=ℓ(R'_1) k_i-|R|.In particular, α_i(R_1, R'_1)=0 if and only if ℓ(R'_1)< k_1-k_i. Furthermore, if ℓ(R'_1)=0, then γ(R_1, R'_1)=s'.Clearly, for i∈ [s'], α_i(R_1, R'_1)=1=ℓ(R'_1) k_i-|R|. We next consider for i∈[s'+1, s]. In this case, k_i<k_1. Let R_i and R'_i be the corresponding k_i-sets of R_1 and R'_1 respectivelyas in Definition <ref>.Then R_i≺ R'_i, moreover α_i(R_1, R'_1) ≥ 0 andα_i(R_1, R'_1)=0 if and only if R_i=R'_i clearly. R_i=R'_i implies R'_i∼<R'_1, furthermore, ℓ(R'_1)< k_1-k_i. Thenwe can see that if ℓ(R'_1)=0, then α_i(R_1, R'_1)=0=ℓ(R'_1) k_i-|R|. Therefore, γ(R_1, R'_1)=s', as required. We next assume that R'_i is the k_i-parirty of R'_1. So ℓ(R'_1)≥ 1. In this case, since R_1<R'_1, ℓ(R_1)=ℓ(R'_1)-1. If R_i∼<R_1, then ℓ(R'_1)= k_i-|R| and α_i(R_1, R'_1)=1=ℓ(R'_1) k_i-|R|, as required. 
Last, weassume that R_i is the k_i-parirty of R_1. So ℓ(R_1)≥ 1. Let k_1-k_i=k. In this case we haveR'_1 =R∪ [n-ℓ(R'_1)+1, n];R_1 =R∪{n-ℓ(R'_1)}∪ [n-ℓ(R'_1)+2, n];R_i =R∪{n-ℓ(R'_1)} if ℓ(R_i)=0,R_i =R∪{n-ℓ(R'_1)}∪ [n-ℓ(R'_1)+2+k, n]if ℓ(R_i)≥ 1;R'_i =R∪ [n-ℓ(R'_1)+1+k, n].Thenα_i(R_1, R'_1)=|ℒ(R'_i, k_i)|-|ℒ(R_i, k_i)|=ℓ(R'_1) k_i-|R|,as required.From Definitions <ref> and <ref>, we have the following observation.For any R_1, R_1' ∈ℛ_1 with R_1≺ R'_1, we have that α_i(R_1, R'_1)=α(R_1, R'_1) holds for each i∈ [s'], and δ(R_1, R'_1)=β(R_1, R'_1).Let j∈ [0, k_1-1], d∈ [k_1-j] and A_1, B_1, C_1, D_1∈ℛ_1(j). Suppose that A_1, B_1 are d-sequential andC_1, D_1 are d-sequential with max A_1=max C_1 and max B_1=max D_1. Then γ(A_1, B_1)=γ(C_1, D_1) and δ(A_1, B_1)=δ(C_1, D_1). In particular, if A_1d⟶ B_1, C_1d⟶ D_1 and max A_1=max C_1, then γ(A_1, B_1)=γ(C_1, D_1) and δ(A_1, B_1)=δ(C_1, D_1).By the definitions of A_1, B_1, C_1, D_1, from Lemma <ref> and Remark <ref>, we haveδ(A_1, B_1)=δ(C_1, D_1) and α_i(A_1, B_1)=α_i(C_1, D_1) holds for each i∈ [s'].Next, we aim to show that for each i∈ [s'+1, s], α_i(A_1, B_1)=α_i(C_1, D_1). Let i∈ [s'+1, s]. Denot𝒜 ={R: A_1∪ [n-j+1, n]≺ R≺ B_1∪ [n-j+1, n]and|R|=k_1}, ℬ ={T: C_1∪ [n-j+1, n]≺ T≺ D_1∪ [n-j+1, n] and|T|=k_1}.Since α_1(A_1, B_1)=α_1(C_1, D_1), |𝒜|=|ℬ|=:h. Let 𝒜={R_1, R_2, …, R_h} and ℬ={T_1, T_2, …, T_h}, where R_1 ≺ R_2 ≺…≺ R_h and T_1 ≺ T_2 ≺…≺ T_h. For any j∈ [h], we have ℓ(R_j)=ℓ(T_j) and |R_j∖ R_j^ t|=|T_j∖ T_j^ t|. Thus, by Claim <ref>, for any j∈ [h-1], α_i(R_j, R_j+1)=α_i(T_j, T_j+1). Furthermore, by Remark <ref>, we conclude that α_i(A_1, B_1)=α_i(C_1, D_1).Let A_1, B_1, C_1∈ℛ_1 and a be an integer. SupposeA_1∖ A_1^ t=A∪{a, a+1}, B_1=A∪{a}∪ [n-ℓ(B_1)+1, n] and C_1=A∪{a+1}∪ [n-ℓ(B_1)+1, n] for someA. Then δ(A_1, B_1)=δ(B_1, C_1). If a+1≤ n-ℓ(A_1)-2, then γ(A_1, B_1)=γ(B_1, C_1). If a+1= n-ℓ(A_1)-1, then γ(A_1, B_1)≤γ(B_1, C_1), equality holds if and only if C_1 does not have k_s'+1-parity (recall that s' is the integer such that k_1=…=k_s'>k_s'+1).Let A'_1 and B'_1 be the k_1-sets such that A_1<A'_1 and B_1<B'_1. Then max A'_1=max B'_1, and A'_1, B_1 are (ℓ(A_1)+1)-sequential, B'_1, C_1 are (ℓ(A_1)+1)-sequential. Note that max B_1=max C_1=n, thenapplying Lemma <ref> and Reamrk <ref>, we have that α_i(A'_1, B_1)=α_i(B'_1, C_1) holds for each i∈ [s'] and δ(A'_1, B_1)=δ(B'_1, C_1). Clearly, α_1(A_1, A'_1)=α_1(B_1, B'_1)=1 holds for each i∈ [s'] and using Proposition <ref> and Remark <ref>, we have δ(A_1, A'_1)=δ(B_1, B'_1). Therefore, by Remark <ref>, we have δ(A_1, B_1)=δ(B_1, C_1). Clearly, for each i∈ [s'],α_i(A_1, B_1)=α_i(A_1, A'_1)+α_i(A'_1, B_1)=α_i(B_1, B'_1)+α_i(A'_1, C_1)=α_i(B_1, C_1).Since A_1∖ A_1^ t=A∪{a, a+1}, a+1≤ n-ℓ(A_1)-1. By the definitions of A_1, B_1, andC_1, if a+1≤ n-ℓ(A_1)-2, then ℓ(B_1)=ℓ(C_1)=ℓ(A_1)+1, and if a+1= n-ℓ(A_1)-1, then ℓ(B_1)=ℓ(A_1)+1 and ℓ(C_1)=ℓ(B_1)+1.Using Claim <ref>, for each j∈ [s'+1, s], we have α_j(A'_1, B_1)=α_j(B'_1, C_1).If the previous case happens, then ℓ(A'_1)=ℓ(B'_1)=0, so Claim <ref> gives α_i(A_1, A'_1)=α_i(B_1, B'_1)=0 for each i∈ [s'+1, s]. Combing this with (<ref>), (<ref>) and Remark <ref>, we getγ(A_1, B_1) =∑_i=1^sα_i(A_1, B_1)=∑_i=1^s'α_i(A_1, B_1)+∑_i=s'+1^s(α_i(A_1, A'_1)+α_i(A'_1, B_1))=∑_i=1^s'α_1(B_1, C_1)+∑_i=s'+1^s(α_i(B_1, B'_1)+α_i(B'_1, C_1))=γ(B_1, C_1),as desired. 
If the later case happens, then A_1<B_1<C_1, by Claim <ref>, since ℓ(B_1)=ℓ(A_1)+1 and ℓ(C_1)=ℓ(B_1)+1, then for each j∈ [s'+1, s], we have α_j(A_1, B_1)= α_j(B_1, C_1) if k_j<|C_1∖ C_1^ t|, i.e., C_1 dest not have k_j-parity; and α_j(A_1, B_1)<α_j(B_1, C_1) if k_j≥ |C_1∖ C_1^ t|, i.e., C_1 has k_j-parity. Note that k_s'+1≥…≥ k_s, so if C_1 does not have k_s'+1-parity, then it does not have any k_j-parity for j∈ [s'+1, s].Therefore, ∑_j=1^s α_j(A_1, B_1)≤∑_j=1^s α_j(B_1, C_1), and the equality holds if and only if C_1 has k_s'+1-parity, that is, γ(A_1, B_1)≤γ(B_1, C_1), and the equality holds if and only if C_1 doesnot have k_s'+1-parity, as desired.Let A_1, B_1, C_1, D_1 ∈ℛ_1, a be an integer, A_1∖ A_1^ t=A∪{a, a+1}, B_1∖ B_1^ t=A∪{a, a+2}, C_1∖ C_1^ t=A∪{a} and D_1∖ D_1^ t=A∪{a+1, a+2} for some subset A. Then γ(A_1, B_1)=γ(C_1, D_1) and δ(A_1, B_1)=δ(C_1, D_1).Clearly, ℓ(A_1)=ℓ(B_1)=ℓ(D_1)=ℓ(C_1)-1.The proof is divided into two cases.(i) ℓ(A_1)=0. In this case, A_1<B_1, C_1<D_1 and max B_1=max D_1. By Proposition <ref> and Remark <ref>, we have δ(A_1, B_1)=δ(C_1, D_1), as required. By Cliam <ref> and ℓ(B_1)=ℓ(D_1)=0, we have γ(A_1, B_1)=γ(C_1, D_1)=1, as desired.(ii) ℓ(A_1)≥ 1. Let A'_1 and C'_1 be the k_1-sets such that A_1<A'_1 and C_1<C'_1. Then A'_1ℓ(A_1)⟶B_1, C'_1ℓ(A_1)⟶D_1 and max A'_1=max C'_1. By Claim <ref>, γ(A'_1, B_1)=γ(C'_1, D_1) and δ(A'_1, B_1)=δ(C'_1, D_1). Since B_1∖ B_1^ t=A∪{a, a+2}, a+1≤ n-ℓ(A_1)-2. Then ℓ(A'_1)=ℓ(C'_1)=0. By Claim <ref> again, γ(A_1, A'_1)=γ(C_1, C'_1)=1. By Proposition <ref> and Remark <ref>, we have δ(A_1, A'_1)=δ(C_1, C'_1). Thus, γ(A_1, B_1)=γ(C_1, D_1) and δ(A_1, B_1)=δ(C_1, D_1), as required. We are going to prove Lemma <ref> by induction on k. §.§The Base Case We are going to show that Lemma <ref> holds for k=0. Assume that F_1, G_1, H_1∈ℛ_1 with F_1 c≺G_1c≺H_1 for some c∈ [k_1] and f(F_1)≤ f(G_1), we are going to show that f(G_1)<f(H_1). Since F_1, G_1, H_1 ∈ℛ_1 with F_1c≺ G_1 c≺ H_1, max F_1≤ n-2, max G_1≤ n-1 and max H_1≤ n. Let G'_1 and H'_1 be the k_1-sets such that G'_1<G_1 and H'_1<H_1. we first showthe following observation. Let F_m be the k_1-set such that F_1c⟶F_m, where m is the number of k_1-subsets R satisfying F_1≺ R ≺ F_m. If δ(G'_1, G_1)<s', then f(R), where R∈ℱ, increases onℱ, in particular, f(G_1)<f(H_1).We may assume that ℱ={F_i: 1≤ i ≤ m} and F_1<F_2<…<F_m. Then max F_2=max G_1. Since F_1<F_2 and G'_1<G_1, by Proposition <ref> and Remark <ref>, δ(F_1, F_2)=δ(G'_1, G_1)<s' (assumption). Since max F_i≥ F_2 for any i∈ [2, m], then by Proposition <ref> and Remark <ref>, δ(F_i, F_i+1)<s' holds for all i∈ [m-1]. On the other hand, γ(F_i, F_i+1)≥ s' for all i∈ [m-1], thus, f(R), where R∈ℱ, increases onℱ. Clearly, G_1, H_1∈ℱ and G_1⪵ H_1, therefore, f(G_1)<f(H_1), as required. By Claim <ref>, next we may assume thatδ(G'_1, G_1)≥ s'.Note thatf(G_1) =f(G'_1)+γ(G'_1, G_1)-δ(G'_1, G_1),f(H_1) =f(H'_1)+γ(H'_1, H_1)-δ(H'_1, H_1).Since ℓ(G_1)=0, by Claim <ref>, for each i∈ [s'+1, s], we haveα_i(G'_1, G_1)=0≤α_i(H'_1, H_1).Clearly, for i∈ [s'], α_i(G'_1, G_1)=α_i(H'_1, H_1)=1.So γ(G'_1, G_1)≤γ(H'_1, H_1). Note that max G_1<max H_1, combining Proposition <ref>, Remark <ref>, Remark <ref> and (<ref>), we obtain δ(G'_1, G_1)> δ(H'_1, H_1). Combining with equalities (<ref>) and (<ref>),to show f(G_1)<f(H_1), it is sufficient to show the following claim. f(G'_1)≤ f(H'_1).If c=1, then F_1=G'_1 and G_1=H'_1. Since f(F_1)≤ f(G_1), f(G'_1)≤ f(H'_1), as desired. Thus, we have provedfor c=1.Next, we considerc≥ 2. 
In this case, F_1c-1⟶G'_1 and G_1c-1⟶H'_1. Let F'_1 be the k_1-set such that F_1c-1≺F'_1. Therefore, max F'_1=max G_1 and F'_1c-1⟶G'_1. Note that max G'_1=max H'_1=n. By Lemma <ref> and Remark <ref>, we have α_1(F'_1, G'_1)=α_1(G_1, H'_1) and δ(F'_1, G'_1)=δ(G_1, H'_1). Let F”_1 be the k_1-set such that F”_1<F'_1. Thus, F”_1, G'_1 and H'_1 satisfy the condition of Claim <ref>. So we conclude that γ(F”_1, G'_1)≤γ(G'_1, H'_1) and δ(F”_1, G'_1)=δ(G'_1, H'_1). Note thatf(G'_1)=f(F”_1)+γ(F”_1, G'_1)-δ(F”_1, G'_1),f(H'_1)=f(G'_1)+γ(G'_1, H'_1)-δ(G'_1, H'_1).Thus, to show f(G'_1)≤ f(H'_1) is sufficientto show the following claim.f(F”_1)≤ f(G'_1).Suppose on the contrary that f(F”_1)>f(G'_1). Since α_i(G'_1, G_1)=1 holds for each i∈ [s'] and α_j(G'_1, G_1)=0 holds for each j∈ [s'+1, s] (see (<ref>)), γ(G'_1, G_1)=s'. Since δ(G'_1, G_1)≥ s' and f(G_1)=f(G'_1)+γ(G'_1, G_1)-δ(G'_1, G_1), f(G'_1)≥ f(G_1). Therefore, f(G'_1)≥ f(F_1) since f(G_1)≥ f(F_1). Hence, f(F”_1)>f(F_1) since f(F”_1)>f(G'_1). This implies F_1 F”_1. Therefore, c≥ 3 and F_1c-2⟶F”_1. Since F_1c≺G_1, we may assume that there exists some (k_1-c)-set F and some integer x such that F_1=F∪ [x+1, x+c] and max F≤ x if F∅. Then F”_1=F∪{x+1, x+2}∪ [n-c+3, n].The forthcoming claim will be used to make a contradiction to f(F”_1)>f(G'_1), thereby ending the proof of Claim <ref>. We will explain this after stating the claim. Let d be a positive integer, B_1=B∪ [y+1, y+d] for some set B with max B≤ y and y+d<n. Suppose i∈ [d] and C_1 is the k_1-set such thatB_1i⟶C_1. Let p=n-y-d. We define k_1-sets D_1, D_2, …, D_p as follows. If i=1, then let D_1=B_1 and D_j=B∪ [y+1, y+d-1] ∪{y+d+j-1} for each j∈ [2, p], when d=1,we regard [y+1, y+d-1] as an empty set. If i≥ 2, then let D_j=B∪ [y+1, y+d-i] ∪{y+d+j-i}∪ [n-i+2, n] for each j∈ [ p],when d=i,we regard [y+1, y+d-i] as an empty set. If f(B_1)≤ f(C_1), then for each j∈ [p], f(D_j)≤ f(C_1). To finish the proof of Claim <ref>, we only need to prove Claim <ref>. Assume that Claim <ref> holds, taking B_1=F_1, y=x, d=c and i=c-1 in Claim <ref>, we can see that C_1=G'_1 and D_1=F”_1. Since f(F_1)≤ f(G'_1), then Claim <ref> gives f(F”_1)≤ f(G'_1), which is a contradiction to the assumption that f(F”_1)> f(G'_1). This completes the proof of Claim <ref> by assuming the truth of Claim <ref>.We are going to prove Claim <ref> by induction on i.We first consider the case i=1. In this case, B_1=D_11≺D_21≺…1≺D_p1≺C_1. If p=1, then we have nothing to say. We may assume p≥ 2. Iff(D_2)> f(C_1), then f(D_1)< f(D_2) since f(B_1)=f(D_1)≤ f(C_1). Note that we have already proved Lemma <ref> when c=1. By taking c=1 in Lemma <ref>, we have f(D_2)<f(D_3)<…<f(D_p)<f(C_1), a contradiction. Using the same argument, we can see that for each j∈ [2, p], f(D_j)<f(C_1), as required.We next consider the case i≥ 2 and assume that Claim <ref> holds for i-1. If f(D_1)≥ f(C_1), then f(D_1)≥ f(B_1) since f(B_1)≤ f(C_1). Note that B_1i-1⟶D_1. By replacing C_1 with D_1 and i with i-1 in Claim <ref>, we define E_1, E_2, …, E_p as definitions of D_1, D_2, …, D_p. Then by induction hypothesis, we obtainf(E_j)≤ f(D_1) holds for each j∈ [p]. For convenience, we denote C_1=D_p+1. We have the following claim.For each j∈ [p], δ(E_j, D_1)=δ(D_j, D_j+1).For each j∈ [p-1], γ(E_j, D_1)=γ(D_j, D_j+1) andγ(E_p, D_1)≤γ(D_p, D_p+1).Note that E_p<D_1, D_p<D_p+1, ℓ(D_1)=ℓ(D_p+1)-1 and |D_1∖ D_1^ t|=|D_p+1∖ D_p+1^ t|+1. By Claim <ref>, for each j∈ [s'+1, s], α_j(E_p, D_1)≤α_j(D_p, D_p+1). 
Trivially, for each j∈ [s'], α_i(E_p, D_1)≤α_i(D_p, D_p+1)=1, therefore, γ(E_p, D_1)≤γ(D_p, D_p+1), as required.For each j∈ [2, p], let F_j and H_jbe the k_1-sets such that D_j-1<F_j andE_j-1<H_j. Thus, for each j∈ [2, p], F_j i-1⟶D_j and we have B_1 i-1≺ H_2 i-1≺ H_3 i-1≺…i-1≺ H_p i-1≺ D_1 and B_1 i≺ F_2 i≺ F_3 i≺…i≺ F_p i≺ C_1. Then for each j∈ [2, p], we have max H_j=max F_j. By Claim <ref>,γ(H_j, D_1)=γ(F_j, D_j) and δ(H_j, D_1)=δ(F_j, D_j). Note that for each j∈ [2, p], D_j-1<F_j and E_j-1< H_j, hence Claim <ref> gives γ(D_j-1, F_j)=γ(E_j-1,H_j), Proposition <ref> and Remark <ref> give δ(D_j-1, F_j)=δ(E_j-1,H_j). So for each j∈[2, p], we have γ(E_j-1, D_1) =γ(E_j-1, H_j)+γ(H_j, D_1)=γ(D_j-1, F_j)+γ(F_j, D_j)=γ(D_j-1, D_j),andδ(E_j-1, D_1) =δ(E_j-1, H_j)+δ(H_j, D_1)=δ(D_j-1, F_j)+δ(F_j, D_j)=δ(D_j-1, D_j).Since E_p<D_1, D_p<D_p+1 and max D_1=max D_p+1=max C_1=n. By Proposition <ref> and Remark <ref>, δ(E_p, D_1)=β(E_p, D_1)=β(D_p, D_p+1)=δ(D_p, D_p+1). This completes the proof of Claim <ref>. Let us continue the proof of Claim <ref>. Note that for each j∈ [p], f(D_j+1)=f(D_j)+γ(D_j, D_j+1)-δ(D_j, D_j+1) and f(D_1)=f(E_j)+γ(E_j, D_1)-δ(E_j, D_1). By Claim <ref> and (<ref>), we conclude thatf(D_1)≤ f(D_2)≤…≤ f(D_p+1)=f(C_1).This completes the proof of Claim <ref>.Since we have shown that Claim <ref> holds,Claim <ref> holds.This completes the base case ofLemma <ref>. We next to consider the induction step. §.§The Induction Step Recall that ℛ_1, k=:{R∈ℝ_1: [n-k+1, n]⊂ R}, ℛ_1(k)=:{R∖ [n-k+1, n]: R∈ℛ_1, k} for k∈ [k_1-1]. The authors have shown the following result in <cit.>.<cit.> Let j∈ [0, k_1-j], F_1<F'_1, G_1<G'_1 in ℛ_1(j), andmax F'_1=max G'_1. Then α(F_1, F'_1)=α(G_1, G'_1) and β(F_1, F'_1)=β(G_1, G'_1). We have the following claim.Let j∈ [0, k_1-1], F_1<F'_1, G_1<G'_1 and F_1 ≺ F'_1 ≺ G_1 ≺ G'_1 in ℛ_1(j), andmax F'_1=max G'_1 with ℓ(F'_1)=ℓ(G'_1) in ℛ_1(j). Then γ(F_1, F'_1)=γ(G_1, G'_1) and δ(F_1, F'_1)=δ(G_1, G'_1).If j=0, then we are fine (recall Claim <ref> and Proposition <ref>). Assume j≥ 1. Let A_1=F_1∪ [n-j+1, n], A'_1=F'_1∪ [n-j+1, n], B_1=G_1∪ [n-j+1, n] and B'_1=G'_1∪ [n-j+1, n].Let A”_1 and B”_1 be thek_1-sets such that A_1<A”_1 and B_1<B”_1. Then by the definitions of F_1, F'_1, G_1, G'_1, we have max A”_1=max B”_1, A”_1j⟶A'_1 and B”_1j⟶B'_1. By Claim <ref>, γ(A”_1, A'_1)=γ(B”_1, B'_1) and δ(A”_1, A”_1)=δ(B”_1, B'_1). By Claim <ref> and ℓ(F'_1)=ℓ(G'_1) in ℛ_1(j), γ(A_1, A'_1)=γ(B_1, B”_1) and δ(A_1, A”_1)=δ(B_1, B”_1). Thus, γ(A_1, A'_1)=γ(B_1, B'_1) and δ(A_1, A'_1)=δ(B_1, B'_1), that is γ(F_1, F'_1)=γ(G_1, G'_1) and δ(F_1, F'_1)=δ(G_1, G'_1),as desired.We are ready to givethe induction step of Lemma <ref>. By induction on k. We have shown that it holds for k=0 in Section 6.1. Suppose it holds for k∈[0, k_1-2], we are going to prove it holds for k+1. Let F_1, G_1, H_1 ∈ℛ_1(k+1) with F_1c≺G_1c≺H_1 and f(G_1)≥ f(F_1), i.e., γ(F_1, G_1) ≥δ(F_1, G_1). We are going to apply induction hypothesis to show f(H_1)>f(G_1), i.e., γ(G_1, H_1)>δ(G_1, H_1). Let F'_1=F_1∪{max F+1}, G'_1=G_1∪{max G_1+1} and H'_1=H_1∪{max H_1+1}. Then F'_1, G'_1, H'_1∈ℛ_1(k). Moreover, F'_1c+1≺G'_1c+1≺H'_1 inℛ_1(k).Let A, B, C, D, E be the (k_1-k)-sets satisfying C<G'_1<D, E<H'_1, F'_1c≺A and F'_1<B in ℛ_1(k). Let F_1=F_1⊔{n-k}, G_1=G_1⊔{n-k}, H_1=H_1⊔{n-k}. Then F_1, G_1, H_1∈ℛ_1(k). We can see that if c≥ 2, thenF'_1<B1⟶F_1⪵ Ac⟶ C<G'_1<D1⟶G_1andG'_1c⟶E<H'_1.If c=1, thenF'_1<A=B1⟶F_1=C<G'_1<D1⟶G_1andG'_11⟶E<H'_1. γ(A, C)>δ(A, C).Suppose on the contrary that γ(A, C)≤δ(A, C). 
We first consider the case c≥ 2. By (<ref>),γ(F_1, G_1)=γ(F_1, A)+γ(A, C)+γ(C, G'_1)+γ(G'_1, G_1),δ(F_1, G_1)=δ(F_1, A)+δ(A, C)+δ(C, G'_1)+δ(G'_1, G_1).Note that f(F_1)=f(F_1) and f(G_1)=f(G_1), therefore, γ(F_1, G_1)≥δ(F_1, G_1) implies γ(F_1, G_1)≥δ(F_1, G_1). Since γ(A, C)≤δ(A, C), thenγ(F_1, A)+γ(C, G'_1)+γ(G'_1, G_1)≥δ(F_1, A)+δ(C, G'_1)+δ(G'_1, G_1).Note that max B=max G'_1 with ℓ(B)=ℓ(G'_1)=0 in ℛ_1(j) (i.e., max B=max G'_1<n-k). By Claim <ref>, we have δ(F'_1, B)=δ(C, G'_1) and γ(F'_1, B)=γ(C, G'_1). Note that B1⟶F_1, G'_11⟶G_1, max B=max G'_1 and maxF_1=maxG_1, it follows from Claim <ref> that γ(B, F_1)=γ(G'_1, G_1) and δ(B, F_1)=δ(G'_1, G_1).Thenγ(F_1, A)+γ(C, G'_1)+γ(G'_1, G_1)=γ(F_1, A)+γ(F'_1, B)+γ(B, F_1)=γ(F'_1, A).Similarly, we haveδ(F_1, A)+δ(C, G'_1)+δ(G'_1, G_1)=δ(F'_1, A).So inequality (<ref>) gives γ(F'_1, A)≥δ(F'_1, A). Note that F'_1c≺A, Ac⟶C in ℛ_1(k) for c∈ [k_i-k], by induction hypothesis, γ(A, C)>δ(A, C). A contradiction to our assumption.We next consider the case c=1.By (<ref>), we haveγ(F_1, G_1)=γ(C, G'_1)+γ(G'_1, G_1),δ(F_1, G_1)=δ(C, G'_1)+δ(G'_1, G_1).Note that γ(F_1, G_1)≥δ(F_1, G_1) implies γ(F_1, G_1)≥δ(F_1, G_1) andγ(C, G'_1)+γ(G'_1, G_1)≥δ(C, G'_1)+δ(G'_1, G_1).Note that max B=max G'_1 and ℓ(B)=ℓ(G'_1)=0 in ℛ_1(k). By Claim <ref>, we have δ(F'_1, B)=δ(C, G'_1) and γ(F'_1, B)=γ(C, G'_1). Note that B1⟶F_1, G'_11⟶G_1, max B=max G'_1 and maxF_1=maxG_1, it follows from Claim <ref> that γ(B, F_1)=γ(G'_1, G_1) and δ(B, F_1)=δ(G'_1, G_1).Thenγ(C, G'_1)+γ(G'_1, G_1)=γ(F'_1, B)+γ(B, F_1)=γ(F'_1, F_1). Similarly, we haveδ(C, G'_1)+δ(G'_1, G_1)=δ(F'_1, F_1).So inequality (<ref>) gives γ(F'_1, F_1)≥δ(F'_1, F_1).In view of (<ref>) thatγ(F'_1, A)=γ(F'_1, F_1)-γ(A, C)andδ(F'_1, A)=δ(F'_1, F_1)-δ(A, C).Since γ(A, C)≤δ(A, C) and γ(F'_1, F_1)≥δ(F'_1, F_1), we get γ(F'_1, A)≥δ(F'_1, A). Note that F'_1c≺A, Ac⟶C ∈ℛ_1(k),where c=1∈ [k_1-k], by induction hypothesis, γ(A, C)>δ(A, C). A contradiction to our assumption. This completes the proof of Claim <ref>. By (<ref>) and (<ref>), we have G'_1c⟶E, Ac⟶ C,max G'_1=max A, max E=max C in ℛ_1(k). By Claim <ref> and Claim <ref>, we getγ(G'_1, E)>δ(G'_1, E).γ(D, H'_1)>δ(D, H'_1).Since G'_1<D, E<H'_1and G'_1≺ D ≺ E ≺ H'_1 in ℛ_1(k),we will meet the following two cases: ℓ(D)=ℓ(H'_1) and ℓ(D)<ℓ(H'_1). We first consider the case ℓ(D)=ℓ(H'_1). Then by Claim <ref>, we have γ(G'_1, D)=γ(E, H'_1) and δ(G'_1, D)=δ(E, H'_1). Therefore,γ(D, H'_1) =γ(G'_1, E)-γ(G'_1, D)+γ(E, H'_1)=γ(G'_1, E)>δ(G'_1, E)=δ(G'_1, E)-δ(G'_1, D)+δ(E, H'_1)=δ(D, H'_1),as desired. Next we assume that ℓ(D)<ℓ(H'_1).If c=1, then G'_1<D=E<H'_1. By Claim <ref>, γ(D, H'_1)≥γ(G'_1, D)(<ref>)>δ(G'_1, E)=δ(D, H'_1),where the last equality holds by Proposition <ref>, as required. If c≥ 2, then G'_1<D=G_1⪵ E<H'_1, max D=max E=n-k and ℓ(E)>ℓ(D)≥ 0. By Claim <ref> and Remark <ref>,γ(D, H'_1)≥γ(G'_1, E)(<ref>)>δ(G'_1, E)=δ(D, H'_1),as required.Consequently, f(D)<f(H'_1) following from Claim <ref>. Recall that D1⟶G_1 and H'_11⟶H_1. Hence, f(G_1)<f(H_1) by applying Claim <ref>. This implies γ(G_1, H_1)>δ(G_1, H_1), as desired.Thiscompletes the proof ofLemma <ref>. § VERIFY UNIMODALITY: THE PROOFS OF LEMMAS <REF>, <REF> AND PROPOSITION <REF> Lemmas <ref> and <ref>willfollow from the following lemma. Let B_0={b_1, …, b_x}∪ [y, y+k] with b_x<y-1, k≥ 1 and y+k<n. For i∈ [k], let B_i={b_1, …, b_x}∪ [y, y+k-i]∪ [n-i+1, n]. Suppose thatB_i, B_i+1, B_i+2∈ℛ_1 for some i∈ [0, k-2] and f(B_i)≤ f(B_i+1), then f(B_i+1)< f(B_i+2). Let usexplain why Lemma <ref> implies Lemmas <ref> and <ref>. 
Let F_2, G_2, H_2 be as in Lemma <ref> or Lemma <ref>.Let F_1, G_1, H_1 be the k_1-parities of F_2, G_2, H_2 respectively (Proposition <ref> guarantees that F_1, G_1, H_1 exists). By Fact <ref>, for each i∈ [3, t], F_1 and F_2 have the same k_i-partner. Take s=2 (see Definition <ref>), we have g(F_2)=f(F_1), g(G_2)=f(G_1) and g(H_2)=f(H_1). We may take B_i=F_1, B_i+1=G_1 and B_i+2=H_1. So g(F_2)≤ g(G_2) impliesf(B_i)=f(F_1)=g(F_2)≤ g(G_2)=f(G_1)=f(B_i+1).Then by Lemma <ref>, we have f(B_i+2)>f(B_i+1). Sog(H_2)=f(H_1)=f(B_i+2)>f(B_i+1)=f(G_1)=g(G_2).Clearly, ℓ(B_i+1)=ℓ(B_i)+1≥ 1 and ℓ(B_i+2)=ℓ(B_i+1)+1. Let B'_i and B'_i+1 be the k_1-sets such that B_i<B'_i and B_i+1<B'_i+1. Then B'_iℓ(B_i+1)⟶B_i+1.Let J be the k_1-setsuch that B'_i+1ℓ(B_i+1)⟶J.Then B'_i, B_i+1 are ℓ(B_i+1)-sequential and B'_i+1, J are ℓ(B_i+1)-sequential. Clearly, max B'_i=max B'_i+1 and max B_i+1=max J=n.By Claim <ref>,γ(B'_i, B_i+1)=γ(B'_i+1, J) and δ(B'_i, B_i+1)=δ(B'_i+1, J). If ℓ(B'_i)≥ 1, then B_i<B'_i=B_i+1<B_i+2=B'_i+1. By Proposition <ref>, δ(B_i, B_i+1)=δ(B_i+1, B_i+2).By Claim <ref>,we have γ(B_i, B_i+1)≤γ(B_i+1, B_i+2). So f(B_i+1)-f(B_i)=γ(B_i, B_i+1)-δ(B_i, B_i+1)≤γ(B_i+1, B_i+2)-δ(B_i+1, B_i+2)=f(B_i+2)-f(B_i+2). So if f(B_i)< f(B_i+1), then f(B_i+1)< f(B_i+2), as desired. We next assume thatf(B_i)= f(B_i+1). Then δ(B_i, B_i+1)=γ(B_i, B_i+1)≥ s'. Ifγ(B_i, B_i+1)=s', then δ(B_i, B_i+1)=s'. Since B_i<B_i+1 and max B_i+1=n, δ(B_i, B_i+1)=β(B_i, B_i+1)=0 or 1, and δ(B_i, B_i+1)=β(B_i, B_i+1)=s' if and only if n=k_1+k_t=k_2+k_t-1=…=k_s'+k_t-s'+1 (see Proposition <ref>). This is a contradiction to n>k_1+k_t (since s'<s and n≥ k_s+k_t). So γ(B_i, B_i+1)>s'. Consequently, there exists j∈ [s'+1, s] such that α_j(B_i, B_i+1)>0. Let j be any integer that satisfies the above condition. By Claim <ref>, ℓ(G_i+1)≥ k_1-k_j. Since ℓ(B_i+2)=ℓ(B_i+1)+1, ℓ(G_i+1)> k_1-k_j.By Claim <ref> again, α_j(B_i+1, B_i+2)>α_j(B_i, B_i+1).By the arbitrariness of j and the definitions of γ(B_i, B_i+1) and γ(B_i+1, B_i) (see Definition <ref>), we conclude thatγ(B_i+1, B_i)<γ(B_i, B_i+1). Combining with δ(B_i, B_i+1)=δ(B_i+1, B_i+2), we havef(B_i+1)-f(B_i)=γ(B_i, B_i+1)-δ(B_i, B_i+1)< γ(B_i+1, B_i+2)-δ(B_i+1, B_i+2)=f(B_i+2)-f(B_i+2). So if f(B_i)= f(B_i+1), then f(B_i+1)< f(B_i+2), as desired.Next weassume that ℓ(B'_i)=0. By Claim <ref>, for each j∈ [s'+1, s], α_j(B_i, B'_i)=α_j(B_i+1, B'_i+1)=0. Clearly, for for each j∈ [s'], α_j(B_i, B'_i)=α_j(B_i+1, B'_i+1)=1 and then γ(B_i, B'_i)=γ(B_i+1, B'_i+1). Combining with (<ref>), we getγ(B_i, B_i+1)=γ(B_i+1, J) and δ(B_i, B_i+1)=δ(B_i+1, J).Thus f(J)≥ f(B_i+1) since f(B_i)≤ f(B_i+1). Note that ℓ(B_i+1)=ℓ(J) and B_i+1∖ B_i+1^ t∈ℛ_1(ℓ(B_i+1)), so J∖ J^ t∈ℛ_1(ℓ(B_i+1)) and B_i+1∖ B_i+1^ t1≺J∖ J^ t. Since ℓ(B_i+2)=ℓ(B_i+1)+1, B_i+2∖ [n-ℓ(B_i+1)+1, n]∈ℛ_1(ℓ(B_i+1)). And in ℛ_1(ℓ(B_i+1)),we haveB_i+1∖ B_i+1^ t1≺J∖ J^ t1⟶B_i+2∖ [n-ℓ(B_i+1)+1, n].By Lemma <ref> and f(J)≥ f(B_i+1), we obtain f(B_i+2)>f(J)≥ f(B_i+1), as required. The proofs ofequalities (<ref>) and (<ref>) are quite similar, we prove the previous one only.Let G_2=[2, k_2+1] andG_1 be the k_1-parity of G_2. Since k_1>k_2, ℓ(G_1)≥ 1. So G_1=[2, k_2+1]∪ [n-k_1+k_2+1, n]. Let A be the k_1-parity of [2, k_2]∪{n}. So A=[2, k_2]∪ [n-k_1+k_2, n]. To prove (<ref>), it is equivalent toprove thatf(G_1)<max{f({1}), f(A)}. Let k=k_1-k_2. Note that G_1∖ G_1^ t, {1, n-k_1+2, …, n}∖ [n-k_1+k_2+1, n] and A∖ [n-k_1+k_2+1, n] are contained in ℛ_1(k) and{1, n-k_1+2, …, n}∖ [n-k_1+k_2+1, n]<G_1∖ G_1^ t1⟶A∖ [n-k_1+k_2+1, n]in ℛ_1(k). 
Let C_1=G_1∖ G_1^ t∪{max (G_1∖ G_1^ t)+1 }∪ [n-ℓ(G_1)+2, n] (if ℓ(G_1)=2, then [n-ℓ(G_1)+2, n]=∅). Since ℓ(G_1)≥ 1, |C_1|=k_1.Let B∈ℛ_1(k) be the set such that G_1∖ G_1^ t<B and B_1=B∪ [n-ℓ(G_1)+1, n]. Clearly, {1, n-k_1+2, …, n}⪵ C_1 ⪵ G_1 ⪵ B_1 ≺ A. We may also assume that f(G_1)≥ f(C_1) since otherwise, g(ℱ_2, 3)>g(G_1)=f(G_1), we are done. Based on these, we may apply Lemma <ref> to C_1, G_1, B_1. Since f(G_1)≥ f(C_1), Lemma <ref> gives f(G_1)<f(B_1). Consequently,by Lemma <ref>, f(A)>f(B)>f(G_1), as desired. § ABOUT K_1+K_Γ+1≤ N<K_Γ-1+K_Γ Let ℱ_1⊂[n] k_1, ℱ_2⊂[n] k_2, …, ℱ_t⊂[n] k_t be non-empty pairwise cross intersecting families. Suppose t≥ 3, γ∈ [2, t-1], k_1≥ k_2≥⋯≥ k_t and k_1+k_γ+1≤ n<k_γ-1+k_γ. Then∑_i=1^t|ℱ_i|≤{∑_i=1^tn-1 k_i-1, ∑_i=1^γ(n k_i-n-k_t k_i)+∑_i=γ+1^tn-k_t k_i-k_t}.Note that if γ=2, then we are done by Theorem <ref>. So we may assume that γ≥ 3. We will prove Theorem <ref> by induction on γ. Assume that it holds for γ-1. Let us introduce the forth non-empty (n, k_1, …, k_t)-cross intersecting system.Choose a k_t-set T⊆ [n]. For i∈ [γ], we denoteℰ_i={ E∈[n] k_i: E∩ T},and for i∈ [γ+1, t], letℰ_i={ E∈[n] k_i: T⊆ E}.Denote λ_1=∑_i=1^tn-1 k_i-1 and λ_2=∑_i=1^γ{n k_i-n-k_t k_i}+∑_i=γ+1^tn-k_t k_i-k_t. Since k_1+k_γ+1≤ n<k_γ-1+k-γ, we can see that Construction <ref> and Construction <ref>are non-empty (n, k_1, …, k_t)-cross intersecting systems with sums of their sizes are λ_1 and λ_2 respectively. Let (ℱ_1, …, ℱ_t) be an extremal L-initial non-empty (n, k_1, …, k_t)-cross intersecting system with ID F_i of ℱ_i, i∈ [t] respectively.So ∑_i=1^t|ℱ_i|≥max{λ_1, λ_2}.By a similar argument to the proof of Theorem <ref>, we may assume that |ℱ_i|≥n-1 k_i-1 for each i∈ [γ]. If there is i∈ [γ] such that F_i={1, n-k_i+2, …, n} or {k_t, n-k_i+2, …, n}, then ∑_i=1^t|ℱ_i|= {λ_1, λ_2}.Using the induction hypothesis for γ-1, the proof of the above proposition is similar to the proof of Proposition <ref>. We omit it.By Proposition <ref>, we may assume that for each i∈ [γ], we have {1, n-k_i+2, …, n}⪵ F_i⪵{k_t, n-k_i+2, …, n} thoughout the rest of the paper. There is an extremal L-initial non-empty (n, k_1, …, k_t)-cross intersecting system, say,(𝒢_1, …, 𝒢_t) such that the IDs G_i of 𝒢_i, i∈ [γ] satisfyingG_γ∈ℱ_γ, γ+1andG_iis the k_i-parity of G_γ,for eachi∈ [γ-1]. We may assume that this lemma holds for γ-1. Let F_i, i∈ [γ], be the set satisfies F_i≺ F_j for all j∈ [t]∖{i}, and let F_s, s∈ [γ], be the set satisfies F_j≺ F_s for all j∈ [t]∖{s}. For j∈ [γ+1, t] let T_j be the k_j-partner of F_s.Recall that k_1+k_γ+1≤ n< k_γ-1+k_γ and (ℱ_1, …, ℱ_t) is exremal. We can use the similar argumentof Case 1 in the proof of Lemma <ref> to confirm that for j∈[s-1], we have F_j is the k_j-parity of F_s. Also, we can use the similar argumentof Case 2 in the proof of Lemma <ref> to confirm that for j∈[s+1, γ], we have F_s is the k_s-parity of F_j.Case1 and Case2 in the proof Lemma <ref> also gives F_s∈ℱ_s, γ+1.By Fact <ref>, F_j is the k_j-parity of F_γ. And Proposition <ref> gives F_γ∈ℱ_γ, γ+1, we are done. Using a similar argument to the proof of Theorem <ref>, combining with Proposition <ref>, we obtain ∑_i=1^t|ℱ_i|=max{λ_1, λ_2}. 
This complete the proof of Theorem <ref>.We may assume that for all i∈ [s+1], |ℱ_i|≥n-1 k_i-1.We define t subfamilies as follows: For i∈ [s+1],ℝ_i={R∈n k_i:{1, n-k_i+2, …, n}≺ R ≺{k_t, n-k_i+2, …, n}}, for i∈[s+2, t-1], letℝ_i={ R∈n k_i: {1, …, k_t, n-k_i+k_t+1, …, n}≺ R ≺{1, n-k_i+2, …, n}}and letℝ_t={ R∈n k_t: {1, 2, …, k_t}≺ R ≺{1, n-k_t+2, …, n}}.Note that (ℱ_1, …, ℱ_t) is extremal, then by Claim <ref>, F_i∈ℝ_i.Let S={x_1, …, x_d}⊆ [t] and fix 𝒜_x⊆[n] k_x for each x∈ S. If these given 𝒜_xs are L-initial pairwise cross intersecting with IDs A_x for all x∈ S, then we denote m(n, A_x_1,…, A_x_d) by the maximum value of ∑_j∈ [t]∖ S|𝒜_j|, where 𝒜_j⊆[n] k_j is L-initial and (𝒜_1, …, 𝒜_t) is a non-empty (n, k_1, …, k_t)-cross intersecting system. From Fact <ref>, we san see that n<k_s+k_λ_s and λ_s≥ s+1. So n<k_s+k_s+1 and families ℱ_1, ℱ_2, …, ℱ_s+1 are pairwise cross-intersecting freely. That is for any given G_1∈ℝ_1, …, G_s+1∈ℝ_s+1, s+1 L-initialfamilieswhich have IDs G_1, …, G_s+1 respectively, are cross intersecting. Since n≥max_i=1^s+1{ k_i+k_λ_i+1},for these fixed s+1 sets G_1, …, G_s+1, there is the unique number m(n, G_1, …, G_s+1). Denote f(G_1, …, G_s+1)=∑_i=1^s+1|ℒ(G_i, k_i)|+m(n, G_1,…, G_s+1). So by Claim<ref>, M(n, k_1, …, k_t)=max{f(G_1, …, G_s+1): G_i ∈ℝ_i}. So to determine M(n, k_1, …, k_t), we next to consider which G_is, i∈ [s] can achieve the maximum value. We first introduce the following three claims in <cit.>.Let g_1, …, g_d be positive integers, h_1, …, h_d be positive constants and N≥ g_i+g_j for all j∈ [d]∖{i}. Suppose that ℬ_1⊆[N] g_1, ℬ_2⊆[N] g_2, …, ℬ_t⊆[N] g_t are L-initial pairwise cross-intersecting families. Let R be the ID of ℬ_i with {1, N-g_i+2, …, N}≺ R ≺{g_t, N-g_i+2, …, N} and T be the partner of R.Then∑_j=1^th_j|ℬ_j|≤ h_i|ℒ(R, g_i)|+∑_j ih_j|ℒ(T, g_j)|=:f_i(R).Let k, a, b, N and c be positives integers. Suppose thata≤ b∈ [N] with b-a+1=c, A⊆ [N] with max A<a and H ≺ G ∈[N] k. IfH=A⊔ [a, b] and G=A⊔ [a+1, b+1], we write Hc≺ G.In <cit.> and <cit.>, the authors first proposed the concept of "local convexity" on L-initial cross-intersecting families, more precisely on f_i(R) as was defined in (<ref>). This property roughly consists of the forthcoming two claims. Tointroduce them, we needthe following definitions.Let N, d, i,g_1, …, g_d, h_1, …, h_d and f_i(R) are from Corollary <ref>. Let m_i=min{g_j: j∈ [d]∖ i}. Denoteℝ'_i={R∈[N] g_i: {1, N-g_i+2, …, N}≺ R ≺{m_i, N-g_i+2, …, N}}. For k∈ [g_i-1], let ℛ_i, k={R∈ℝ'_i: [n-k+1, n]⊂ R, }, and ℛ_i(k)={R∖ [n-k+1, n]: R∈ℛ_i, k}. In addition, we will write ℛ_i(0)=ℝ'_i. When we consider f_i(R), R∈ℛ_i, k, we simply write f_i(R∖[n-k+1, n]) etc. In particular, f_i({1}) is indeed f_i({1, n-g_i+1, n-g_i+2, …, n}), and f_i({m_i}) is indeed f_i({m_i, n-g_i+1, n-g_i+2, …, n}).For any j∈ [0, g_i-1], let 1≤ c≤ g_i-j and F, G, H∈ℛ_i(j) with Fc≺Gc≺H. Assume that n>g_1+g_2 or d>2. Then f_i(G)≥ f_i(F) implies f_i(H)>f_i(G).Let m_i+1≤ j≤ m_i+k_i-1. Assume that n>k_i+k_j for some j∈ [t]∖{i} or t>2. If f_i({m_i, m_i+1, …, j})≤ f_i({m_i, m_i+1, …, j-1}), then f_i({m_i, m_i+1, …, j-1})<f_i({m_i, m_i+1, …, j-2}). Notice thatthe cross-intersecting system in <cit.> and <cit.> is of non-mix type, that isthere are no two families of it are cross-intersecting freely, so f_i(R) only has one variable R.The cross-intersecting system we consider is of mix type, from(<ref>), f(G_1, …, G_s+1) is multivariate. This makes us can not apply the previous results directly. 
We need the following key lemma.For j∈ [s+1] and ℓ∈ [s+2, t], let ℱ_j,ℓ⊆ℝ_j and ℱ_ℓ^j⊆[n] k_ℓ such that ℱ_j,ℓ and ℱ_ℓ^j are maximal pair families.Let k_1≥ k_2≥⋯≥ k_t be positive integers and k_1+k_t≤ n<k_1+k_2 with the corresponding s_1, …, s_r and λ_1, …, λ_s+1 as given in Fact <ref>. Suppose that (ℱ_1, …, ℱ_t) is an extremal non-empty (n, k_1, …, k_t)-cross intersecting system and ℱ_i is L-initial for each i∈ [t] with ID F_i∈ℝ_i. If for each i∈ [s+1],{1, n-k_i+2, …, n}⪵ F_i ⪵{k_t, n-k_i+2, …, n}, then there is an extremal L-initial non-empty (n, k_1, …, k_t)-cross intersecting system, say,(𝒢_1, …, 𝒢_t) such that the IDs G_i∈ℝ_i, i∈[s+1], of 𝒢_i satisfyingG_s+1∈ℱ_s+1, λ_1+1andG_iis the k_i-parity of G_s+1 for eachi∈ [s]. By Lemma <ref> and (<ref>), we obtainM(n, k_1, …, k_t) =max_G_s+1∈ℱ_s+1, λ_1+1{f(G_1, …, G_s+1):G_iis thek_i -parityofG_s+1, i∈[s]}. Recall that k_1≥…≥ k_t, max_i=1^s+1{ k_i+k_λ_i+1}≤ n < min_i=1^s { k_i+k_λ_i}, s_0=0, λ_s_0=t and ℓ_j=∑_i=0^j s_i, j∈{0, 1, … r}.Let G_s+1∈ℱ_s+1, λ_1+1. For i∈ [s+1], let G_i be the k_i-parity of G_s+1. Then from Fact <ref>, we have the following claim.Let x∈ [r].For i∈ [ℓ_x-1, s+1] and j∈ [λ_ℓ_x+1, λ_ℓ_x-1], G_is have the same k_j-partner, moreover, the same one is exactly the k_j-partner of G_s+1. For j∈ [s+2, t], we denoteT_j by the k_j-partner of G_s+1.Then by Fact <ref>, we conclude thatm(n, G_1, …, G_s+1) =∑_j=s+2^t|ℒ(T_j, k_j)|.As always, we remind readers to recall Fact <ref> again. Forj∈ [s+2, t], setting d_j as follows.d_j=1/(s+1-ℓ_x-1),x∈ [r],j∈ [λ_ℓ_x+1, λ_ℓ_x-1].Then combining (<ref>), (<ref>) andReamrk <ref>, we have the following claim.Let G_s+1∈ℱ_s+1, λ_1+1. For i∈ [s], let G_i be the k_i-parity of G_s+1 and for j∈ [s+2, t], letT_j be the k_j-partner of G_s+1. Thenf(G_1, …, G_s+1) =∑_i=1^s+1|ℒ(G_i, k_i)|+∑_j=s+2^t|ℒ(T_j, k_j)|,=∑_x=1^r ∑_i=ℓ_x-1+1^ℓ_x{|ℒ(G_i, k_i)|+ ∑_j∈λ_i+1^t d_j|ℒ(T_j, k_j)| }. By Remark <ref>, we replace T the the partner of R, in the above corollary by the g_j-partner of R for j∈ [d]∖{i}, (<ref>) still holds. So we have the following claim. In fact, we do this change for the convenience of the application in thispaper and there is no formal distinction between these two claims. Let g_1, …, g_d be positive integers, h_1, …, h_d be positive constants and N≥ g_i+g_j for all j∈ [d]∖{i}. Suppose that ℬ_1⊆[N] g_1, ℬ_2⊆[N] g_2, …, ℬ_t⊆[N] g_t are L-initial pairwise cross-intersecting families. Let R be the ID of ℬ_i with {1, N-g_i+2, …, N}≺ R ≺{g_t, N-g_i+2, …, N} and T be the g_j-partner of R for j∈ [d]∖{i}.Then∑_j=1^th_j|ℬ_j|≤ h_i|ℒ(R, g_i)|+∑_j ih_j|ℒ(T, g_j)|=:f_i(R).If there exists i∈ [s+1] such that F_i={1, n-k_i+2, …, n} or F_i={k_t, n-k_i+2, …, n}, then∑_i=1^t|ℱ_i|=max{m_0, …, m_r}.If we meet case (iv), then there exists γ∈ [r-1] such that i=ℓ_γ; if we meet case (v), thenthere exists γ'∈ [r-1] such thatj=ℓ_γ'+1. Since the proofs of these three cases are similar and (vi)'s is a bit more complicated, we take (vi) as an example.Ifj=i+1, then we are done. Other wise, we may assume that for each x∈ [i+1, j-1],{1, n-k_x+2, …, n}⪵ F_x ⪵{k_t, n-k_x+2, …, n}. According to (ii) in Fact <ref>,for j∈ [0, r-1], we have λ_ℓ_j+1=…=λ_ℓ_j+1, so we may assume that if F_i'={1, n-k_i'+2, …, n}for some i'∈{λ_ℓ_j+1, …, λ_ℓ_j+1}, then F_j'={1, n-k_j'+2, …, n}for all j'∈{λ_ℓ_j+1, …, λ_ℓ_j+1}, and if F_i'={k_t, n-k_i'+2, …, n}for some i'∈{λ_ℓ_j+1, …, λ_ℓ_j+1}, then F_j'={k_t, n-k_j'+2, …, n}for all j'∈{λ_ℓ_j+1, …, λ_ℓ_j+1}. Thus, there exist γ, γ'∈ [r-1] such that i=ℓ_γand j=ℓ_γ'+1. 
Notice that in this case, for each j∈ [λ_i+1, t], F_j is the first set in ℛ_j, in other words, |ℱ_j| is the smallest among all possible values. On the other hand, since max_i'=1^s+1{ k_i'+k_λ_i'+1}≤ n < min_i'=1^s { k_i'+k_λ_i'},for each j'∈ [s+2, λ_j-1], F_j' is the last set in ℛ_iȷ', in other words, |ℱ_j'|=n-1 k_j'-1. Thus, we only need to consider F_x, x∈ [i+1, j-1] satisfying {1, n-k_x+2, …, n}⪵ F_x ⪵{k_t, n-k_x+2, …, n}. Then using the similar argument as case (iii), and by (ℱ_1, …, ℱ_s) is extremal, we have the following claim. There is anL-initial non-empty (n, k_i+1, …, k_j-1, k_λ_j-1+1, …, k_λ_i)-cross intersecting system, say,(𝒢_i+1, …, 𝒢_j-1, 𝒢_λ_j-1+1, …, 𝒢_λ_i) such that∑_x=i+1^j-1|𝒢_x|+∑_x=λ_j-1+1^λ_i|𝒢_x|=∑_x=i+1^j-1|ℱ_i|+∑_x=λ_j-1+1^λ_i|ℱ_x|andthe IDs G_x of 𝒢_x, x∈[i+1, j-1],satisfyingG_j-1∈ℱ_j-1, λ_i+1+1andG_xis the k_x-parity of G_j-1 for eachx∈ [i+1, j-2].Denote X=[i+1, j-1]∪ [λ_j-1+1, λ_i]. Then∑_x∈ X|ℱ_x|=max{∑_x∈ Xn-1 k_x-1, ∑_x=i+1^j-1(n k_x-n-k_t k_x)+ ∑_x=λ_j-1+1^λ_in-k_t k_x-k_t}. Thus, when (iv), (v) or (vi) happens, we have∑_i=1^t|ℱ_i|=max{m_1, …, m_r-1}Combining with results for cases (i)-(iii), we obtian∑_i=1^t|ℱ_i|=max{m_0, …, m_r}.This complete the proof of Theorem <ref>. § ABOUT MAX{K_2+K_3, K_1+K_Λ +1}≤ N < K_1+K_ΛIn this section, we continue to study M(n, k_1, …, k_t) further, extending the known results in this regime. We consider the case that k_1≥ k_2≥…≥ k_t, λ∈ [2, t-1], and max{k_2+k_3, k_1+k_λ +1}≤ n < k_1+k_λ.We denoteλ_3=n k_1-n-k_t k_1+∑_i=2^λn-1 k_i-1+∑_i=λ+1^tn-k_t k_i-k_t.Our main result is the following theorem. Suppose that k_1≥ k_2≥…≥ k_t, λ∈ [2, t-1], and max{k_2+k_3, k_1+k_λ +1}≤ n < k_1+k_λ. If ℱ_1⊆[n] k_1, ℱ_2⊆[n] k_2, …, ℱ_t⊆[n] k_t are non-empty pairwise cross intersecting families, then ∑_i=1^t|ℱ_i|≤{λ_1, λ_2, λ_3}. § PROOF FOR THEOREM <REF>In this section, we assume that (ℱ_1, …, ℱ_t) is an L-initialextremal non-empty (n, k_1,…, k_t)-cross intersecting system with k_1≥ k_2≥…≥ k_t, λ∈ [2, t-1], max{k_2+k_3, k_1+k_λ +1}≤ n < k_1+k_λ, and the ID of ℱ_j is F_j for each j∈ [t]. And the range of n implies k_1>k_2. Note that if λ =2, then k_1+k_3≤ n<k_1+k_2, andwe are done by Theorem <ref>. So we may assume that λ∈[3, t-1]. If λ=3 and n=k_2+k_3 or λ=t-1 and n=k_1+k_λ+1, then ∑_i=1^t|ℱ_i|≤max{λ_1, λ_2}. To prove the above proposition, we need the following fact. Let n=g+h,𝒢 be a g-uniform L-initial familyandℋbe an h-uniform L-initial family over [n]. Suppose that 𝒢 and ℋ are non-empty cross intersecting families with the maximum possible sum of their sizes, then 𝒢=ℒ(G, g) for any {1, …, g}≺ G ≺{h, n-g+2, …, n}.We only consider the case that λ=3 and n=k_2+k_3, since the other case is quite similar. Let (𝒯_1, 𝒯_2, 𝒯_4, …, 𝒯_t) be an extremal non-empty L-initial (n, k_1, k_2, k_4, …, k_t)-cross intersecting system and T' be thek_3-partner of the ID of 𝒯_2. By Fact <ref>, we get∑_i=1^t|ℱ_i|≤∑_i=1, i 3^t|𝒯_i|+|ℒ(T, k_3)|=max{λ_1, λ_2}.The last equality holds by n≥ k_1+k_λ+1 and Theorem <ref>. By Proposition <ref>, we may assume that: if λ=3, then n>k_2+k_3; if λ=t-1, then n>k_1+k_λ+1.Let us introduce the third non-empty (n, k_1, …, k_t)-cross intersecting system. We denote 𝒥_1={ J∈[n] k_1: J∩ [k_t]∅}. For i∈ [λ+1, t], let𝒥_i={ J∈[n] k_i: [k_t]⊆ J },and for i∈ [2, λ],let𝒥_i={ J∈[n] k_i: 1∈ J }.Since max{k_2+k_3, k_1+k_λ +1}≤ n < k_1+k_λ, (𝒥_1, …, 𝒥_t) is a non-empty (n, k_1, …, k_t)-cross intersecting system and ∑_i=1^t|𝒥_i|=λ_3. 
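As a quick numerical illustration (not part of the proof), the candidate bounds can be evaluated directly from their binomial expressions. The short Python sketch below computes λ_1 and λ_3 for one admissible choice of parameters; the values n=9, (k_1,…,k_4)=(7,3,3,2), λ=3 are an arbitrary illustrative assumption satisfying max{k_2+k_3, k_1+k_λ+1}≤ n<k_1+k_λ, and λ_2, whose definition appears elsewhere in the paper, is omitted from the sketch.

from math import comb

# Illustrative parameters (an assumption for this sketch only):
# k_1 >= ... >= k_t, lambda in [2, t-1], max{k_2+k_3, k_1+k_{lambda+1}} <= n < k_1+k_lambda
ks = [7, 3, 3, 2]          # (k_1, ..., k_t)
n, lam, t = 9, 3, len(ks)
kt = ks[-1]

# lambda_1 = sum_{i=1}^{t} C(n-1, k_i-1)
lambda_1 = sum(comb(n - 1, k - 1) for k in ks)

# lambda_3 = C(n,k_1) - C(n-k_t,k_1) + sum_{i=2}^{lambda} C(n-1,k_i-1)
#            + sum_{i=lambda+1}^{t} C(n-k_t, k_i-k_t)
lambda_3 = (comb(n, ks[0]) - comb(n - kt, ks[0])
            + sum(comb(n - 1, ks[i] - 1) for i in range(1, lam))
            + sum(comb(n - kt, ks[i] - kt) for i in range(lam, t)))

print(lambda_1, lambda_3)

Such a check only evaluates the sizes of the constructions; the extremality itself is established by the arguments of this section.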
Trivially, the previous two constructions, Construction <ref> and Construction <ref>, are also non-empty (n, k_1, …, k_t)-cross intersecting system. Since (ℱ_1, …, ℱ_t) is extremal, we obtain∑_i=1^t|ℱ_i|≥max{λ_1, λ_2, λ_3}. Applying a similar argument which used in the proof of Theorem <ref>, we have the following claim, whose proof is given in Appendix.We may assume that |ℱ_1|≥n-1 k_1-1 and |ℱ_i|≥n-1 k_i-1for some i∈ [2, λ].For j∈ [2, λ], we denoteλ_2^j=∑_s=1,j(n k_s-n-k_t k_s)+∑_s=2, s j^tn-k_t k_s-k_t. Since k_2≥ k_3≥…≥ k_λ and n≥ k_2+k_3,Claim 2.19 in <cit.> gives λ_2=λ_2^2=max{λ_2^j: j∈ [2, λ]}. By (<ref>) and (<ref>), to show Theorem <ref>, it is sufficient to show the following result. Suppose thatmax{k_2+k_3, k_1+k_λ +1}≤ n < k_1+k_λ, and (𝒢_1, …, 𝒢_t) is anL-initial non-empty (n, k_1, …, k_t) with |𝒢_1|≥n-1 k_1-1 and |𝒢_i|≥n-1 k_i-1for some i∈ [2, λ]. Then∑_j=1^t|𝒢_i|≤max{λ_1, λ_2^i, λ_3 }. We point out that for all i∈ [2, λ], the proofs of Theorem <ref>are quite similar. Forconvenience, we shall deal with the case i=2 only. In other words, we may assume that |ℱ_1|≥n-1 k_1-1 and |ℱ_2|≥n-1 k_2-1 throughout the rest of the paper.An argument similar to the one used in <cit.> shows that by Lemma <ref> and Lemma <ref>, we have the following corollary.Let ℓ_1≥ℓ_2≥…≥ℓ_s'≥…ℓ_s and n≥ℓ_1 + ℓ_2. Suppose that (𝒜_1, …, 𝒜_s) be an L-initial non-empty (n, ℓ_1, …, ℓ_s)-cross intersecting system. If A, the ID of 𝒜_1,satisfies {1, n-k_1+2, …, n}≺ A ≺{ℓ_s, n-k_1+2, …, n}, then∑_i=1^s'|𝒜_i|≤max{∑_i=1^s'n-1ℓ_i-1, nℓ_1-n-ℓ_sℓ_1+∑_i=2^s'n-ℓ_sℓ_i-ℓ_s}. Further more, the upper bound is achievable.If F_1={1, n-k_1+2, …, n} or F_1={k_t, n-k_1+2, …, n} (or F_2={1, n-k_2+2, …, n} or F_2={k_t, n-k_2+2, …, n}), then ∑_i=1^t|ℱ_i|= {λ_1, λ_2,λ_3}.We remark that the proof of Proposition 4.4 can be obtained from that of Proposition 2.17 by a slight modification.Suppose F_1={1, n-k_1+2, …, n} firstly.By a similar argument which used in the proof ofProposition <ref>, we have ∑_i=1^t|ℱ_i|= λ_1, as required. Next, we suppose F_1={k_t, n-k_1+2, …, n}. Since n≥ k_1+k_λ+1, for i∈ [λ+1, t], every element of ℱ_i contains {k_1, …, k_t}. This implies thatF_i={1, …, k_t,n-k_i+k_t+1, …, n} for i∈ [λ+1, t]. Thus,∑_i=1^t|ℱ_i|=n k_1-n-k_t k_1+∑_i=λ+1^tn-k_t k_i-k_t+∑_i=2^λ|ℱ_i|. Recall that k_2+k_3≤ n<k_1+k_λ and (ℱ_1, …, ℱ_t) is extremal, by Corollary <ref>,∑_i=2^λ|ℱ_i|= max{∑_i=2^λn-1 k_i-1, n k_2-n-k_tk_2+∑_i=3^λn-k_t k_i-k_t}.Hence, ∑_i=1^t|ℱ_i|=max{λ_2, λ_3}, as required.Now suppose F_2={1, n-k_2+2, …, n}. In this case,∑_i=2^λ|ℱ_i|≤∑_i=2^λn-1 k_i-1.Since (ℱ_1, ℱ_λ+1, …, ℱ_t) is a non-empty (n, k_1, k_λ+1, …, k_t)-cross intersecting system and n≥ k_1+k_λ+1, Theorem <ref> gives|ℱ_1|+∑_i=λ+1^t|ℱ_i|≤max{∑_i={1}∪ [λ+1, t]n-1 k_i-1, n k_1-n-k_t k_1+∑_i=λ+1^tn-k_t k_i-k_t}.Combining this,(<ref>) and (<ref>), we obtain ∑_i=1^t|ℱ_i|=max{λ_1, λ_3}, as required.At last, suppose F_2={k_t, n-k_2+2, …, n}. Since n≥ k_2+k_3, in this case, for each i∈ [3, t], F_i={1, …, k_t, n-k_i+k_t+1, …, n}. Then (ℱ_1,…, ℱ_t) is extremal implies F_1={k_t, n-k_1+2, …, n}, therefore, ∑_i=1^t|ℱ_i|=λ_3. 
This complete the proof of Proposition <ref>.By Proposition <ref>, we may assume that {1, n-k_1+2, …, n}⪵ F_1⪵{k_t, n-k_1+2, …, n}and {1, n-k_2+2, …, n}⪵ F_2⪵{k_t, n-k_2+2, …, n} thoughout the rest of this section.We have the following lemma, which is similar to Lemma <ref> in form, but needs amore complicated proof.There is an extremal L-initial non-empty (n, k_1, …, k_t)-cross intersecting system, say,(𝒢_1, …, 𝒢_t) such that the IDs G_1 and G_2 of 𝒢_1 and 𝒢_2 respectively satisfyingG_2∈ℱ_2, λ+1andG_1is the k_1-parity of G_2. We will meet the following two cases.Case 1: F_1≺ F_2. Since n≥ k_2+k_3, using the same arguments of Case 1 in the proof of Lemma <ref>, we obtain the following claims: (a) for each i∈ [3, t], F_i is the k_i-partner of F_2;(b) for all i∈ [4, t], we have F_i≺ F_3 or F_3 is the k_3-parity ofF_i;Combining (a) and Fact <ref>,for each j∈[λ+1, t], there is a k_2-set F_2, j such that (F_2, j, F_j) is maximal, hence F_j∈ℱ_j^2. By Proposition <ref>, ℱ_j^2⊆ℱ_j^1 for j∈[λ+1, t], so F_j∈ℱ_j^1. Then for each j∈[λ+1, t],there is a k_1-set F_1, jsuch that (F_1, j, F_j) is maximal. If F_2=F_2, λ+1, then F_1, λ+1 is the k_1-parity of F_2. Otherwise, By Fact <ref>, F_2⪵ F_2, λ+1, further more F_2≺ F_1, λ+1. Hence, F_1≺ F_2 implies F_1≺ F_1, λ+1. Combining (b), Fact <ref> and Fact <ref>, we get F_1, λ+1≺ F_1, λ+2≺…, ≺ F_1, t. So (ℒ(F_1, λ+1, k_1), ℱ_2, …, ℱ_t) is a non-empty (n, k_1, …, k_t)-cross intersecting system. Since (ℱ_1, …, ℱ_t) is extremal,F_1=F_1, λ+1.Thus, F_2=F_2, λ+1. So F_2 ∈ℱ_2, λ+1 andF_1 is the k_1-parity of F_2. F_1 and F_2 satisfy (<ref>), as required. Case 2: F_2⪵ F_1. In this case, since n≥max{ k_1+k_λ+1, k_2+k_3} and using the same arguments of Case 1 in the proof of Lemma <ref>, we obtain the following claims: (a) for each i∈ [λ+1, t], F_i is the k_i-partner of F_1;(b) for all i∈ [λ+2, t], we have F_i≺ F_λ+1 or F_λ+1 is the k_λ+1-parity ofF_i; (c) (F_1, F_λ+1) is maximal; (d) for each i∈ [3, λ], F_i is the k_i-partner of F_2; (e) for all i∈ [4, λ], we have F_i≺ F_3 or F_3 is the k_3-parity ofF_i.For each j∈ [λ+1, t], let F_2, j be thek_2-partner of F_j. Recall that n≥ k_1+k_λ+1≥ k_2+k_λ+1. By Fact <ref>, F_2, λ+1 is maixmal to F_λ+1. So F_2, λ+1∈ℱ_2, λ+1. Then since ℱ_2 and ℱ_λ+1 are cross intersecting,F_2≺ F_2, λ+1.If F_1 is not the k_1-parity of F_2, λ+1, then we can replace F_1 by thek_1-parity of F_2, λ+1 such that the extremal property holds.By Fact <ref>, there is a k_λ+1-set F'_λ+1 such that (F_2, λ+1, F'_λ+1) is maximal.If F_λ+1=F'_λ+1, then (F_2, F_λ+1) is maximal. Combining (c)and Proposition <ref>, we get that F_1 is the k_1-parity ofF_2, and we are done.Suppose F_λ+1 F'_λ+1.Let F'_1 be the k_1-parity of F_2, λ+1. Then F'_1⪵ F_1. By Proposition <ref>, we can see that (F'_1, F'_λ+1) is maximal as well. For i∈ [λ+2, t], let F'_i be the k_i-partner of F_2, λ+1. By Fact <ref>, F'_i is the k_i-partner of F'_1 as well. Combining with (d) and F_2≺ F_2, λ+1, we conclude (ℒ(F'_1, k_1), ℱ_2, …, ℱ_λ, ℒ(F'_λ+1, k_λ+1), …, ℒ(F'_t, k_t)) is a non-empty (n, k_1, …, k_t)-cross intersecting system. In particular, (ℒ(F'_1, k_1), ℱ'_λ+1, …, ℱ'_t) is a non-empty (n, k_1, k_λ+1,…, k_t)-cross intersecting system. Since (ℱ_1, …, ℱ_t) is extremal,|(ℒ(F'_1, k_1)|+∑_i=λ+1^t|ℱ'_i|≤ |(ℒ(F_1, k_1)|+∑_i=λ+1^t|ℱ_i|.If equality holds, then (ℒ(F'_1, k_1), ℱ_2, …, ℱ_λ, ℒ(F'_λ+1, k_λ+1), …, ℒ(F'_t, k_t))is extremal. 
We may replace F_1 by F'_1, and replace F_λ+1, …, F_t by F'_λ+1, …, F'_t respectively, the extremal property holds, as desired.Next, we suppose|(ℒ(F'_1, k_1)|+∑_i=λ+1^t|ℒ(F'_i, k_i)|< |(ℒ(F_1, k_1)|+∑_i=λ+1^t|ℱ_i|.By a similar argument which was used in the proof of Case 2 in Lemma <ref>, we will get a contradiction to the assumption that (ℱ_1, …, ℱ_t) is extremal. By the above claim, we may always assume that F_1 is the k_1-parity of F_2, λ+1. Note thatF_2≺ F_2, λ+1. If F_2=F_2, λ+1, then F_1 and F_1 satisfy (<ref>), we are done. Next we suppose F_2⪵ F_2, λ+1.For i∈ [3, λ], let F'_i be the k_i-partner of F_2, λ+1. Then ( ℒ(F_2, λ+1, k_2), ℒ(F'_3, k_3), …, ℒ(F'_λ, k_λ)) is a non-empty (n, k_2, …, k_λ)-cross intersecting system. Combining (b) and Fact <ref>, we get F_2, λ+1≺ F_2, λ+2≺…, ≺ F_2, t. So (ℱ_1, ℒ(F_2, λ+1, k_2),ℒ(F'_3, k_3), …, ℒ(F'_λ, k_λ), ℱ_λ+1…, ℱ_t) is a non-empty (n, k_1, …, k_t)-cross intersecting system. Since (ℱ_1, …, ℱ_t) is extremal,|ℒ(F_2, λ+1, k_2)|+∑_i=3^λ|ℒ(F'_i, k_i)|≤∑_i=2^λ|ℱ_i|.If |ℒ(F_2, λ+1, k_2)|+∑_i=3^λ|ℒ(F'_i, k_i)|=∑_i=2^λ|ℱ_i|, then (ℱ_1, ℒ(F_2, λ+1, k_2), ℒ(F'_3, k_3), …, ℒ(F'_λ, k_λ), ℱ_λ+1…, ℱ_t) is extremal. Replace ℱ_2, …, ℱ_λ by ℒ(F_2, λ+1, k_2), ℒ(F'_3, k_3), …, ℒ(F'_λ, k_λ) respectively. This meets Case 2.1, we are done.We now consider the case that|ℒ(F_2, λ+1, k_2)|+∑_i=3^λ|ℒ(F'_i, k_i)|<∑_i=2^λ|ℱ_i|.Let ℓ_0=0, ℓ_k=k_2 and x_1, …, x_ℓ_k be the k_2 elements of F_2, λ+1. Then there are ℓ_1<ℓ_2<…<ℓ_k such that F_2, λ+1=[x_1, x_ℓ_1]∪[x_ℓ_1+1, x_ℓ_2]∪…∪[x_ℓ_k-1+1,x_ℓ_k], and for each j∈ [k], i∈ [ℓ_j-1+1, ℓ_j],we have x_i+1=x_i+1 and x_ℓ_j<x_ℓ_j+1-1 . DenoteH_1: ={x_1, …, x_ℓ_k-1}∪[x_ℓ_k-1+1, x_ℓ_k-1+k_2-ℓ_k-1]={x_1, …, x_ℓ_k-2}∪[x_ℓ_k-2+1, x_ℓ_k-2+1+k_2-ℓ_k-2-1],H_2: ={x_1, …, x_ℓ_k-2}∪[x_ℓ_k-2+1, x_ℓ_k-2+k_2-ℓ_k-2]={x_1, …, x_ℓ_k-3}∪[x_ℓ_k-3+1, x_ℓ_k-3+1+k_2-ℓ_k-3-1],⋮H_k-1: =[x_1, x_1+k_2-1],H_k: =[2, k_2+1].It is easy to see that for each i∈ [k], H_i is a k_2-set andit's last elementis smaller than n.For each i∈ [k], let J_i be the k_2-set satisfying J_i<H_i.Let F'_2, λ+1 be the k_2-set satisfying F'_2, λ+1< F_2, λ+1.By Lemma <ref>, Lemma <ref> and (<ref>), it is easy to see the following claim. If max F_2, λ+1<n and x_ℓ_k-1+1<x_ℓ_k, then F_2∈{F'_2, λ+1, J_1, …, J_k-1}. Other wise, F_2∈{J_1, …, J_k-1}. Notice that J_k-1≺…≺ J_1≺ F'_2, λ+1. Then ifmax F_2, λ+1<n and x_ℓ_k-1+1<x_ℓ_k, we have F_2≺ F'_2, λ+1, otherwise, F_2≺ J_1. As F_2, λ+1∈ℱ_2, λ+1, by definitions we have J_i∈ℱ_2, λ+1 for all i∈ [k-1] and F'_2, λ+1∈ℱ_2, λ+1. We are going to confirm the following claim to end the proof of Case 2 as it makes a contradiction to the assumption that (ℱ_1, …, ℱ_t) is extremal. Suppose F_2≺ F'_2, λ+1 or F_2≺ J_1, and let F'_1 be the k_1-parity of F'_2, λ+1 or J_1, accordingly. Then there is F”_1∈ℛ_1 andsuch that the following holds.(i) F_1≺ F”_1;(ii) For i∈ [λ+1, t], let F”_i be the k_i partner of F”_1. Then (ℒ(F”_1, k_1), ℱ_2, …,ℱ_λ, ℒ(F”_λ+1, k_λ+1), …, ℒ(F”_t, k_t)) is a non-empty (n, k_1, …, k_t)-cross intersecting system.(iii) ∑_i∈{1}∪ [λ+1, t]|ℒ(F”_i, k_i)|>∑_i∈{1}∪ [λ+1, t]|ℱ_i|. Since the proofs of these two cases are quite similar, we only prove the case: F_2≺ F'_2, λ+1. In this case, max F_2, λ+1<n, x_ℓ_k-1+1<x_ℓ_k and F'_1 is the k_1-parity of F'_2, λ+1. Since F'_2, λ+1<F_2, λ+1, we getF'_2, λ+1=[x_1, x_ℓ_1]∪…∪[x_ℓ_k-2+1,x_ℓ_k-1]∪{ x_ℓ_k-1+1-1}∪ [n-k_2+ℓ_k-1+2, n].Thus,F'_1=[x_1, x_ℓ_1]∪…∪[x_ℓ_k-2+1,x_ℓ_k-1]∪{ x_ℓ_k-1+1-1}∪ [n-k_1+ℓ_k-1+2, n]. 
Since F_1 is the k_1-parity of F_2, λ+1, we obtainF_1=[x_1, x_ℓ_1]∪[x_ℓ_1+1, x_ℓ_2]∪…∪[x_ℓ_k-1+1,x_ℓ_k]∪ [n-k_1+k_2+1, n]. DenoteF”_1=[x_1, x_ℓ_1]∪…∪[x_ℓ_k-2+1,x_ℓ_k-1]∪ [n-k_1+ℓ_k-1+1, n].Since F_1⪵{k_t, n-k_1+2, n}, F”_1≺{k_t, n-k_1+2, n}.Let R be a k_1-set with F'_1≺ R ≺ F”_1, and for j∈ [λ+1, t],let T_j be the k_i partner of R. Using i=λ+1 to Corollary <ref>, thenf_1(R)=|ℒ(R, k_1)|+∑_j=λ+1^t|ℒ(T_i, k_i)|. Denote m'=max{f_1(R): F'_1≺ R ≺ F”_1}. Then as a consequence of Lemma <ref>, we obtainmax{f_1(R): F'_1≺ R ≺ F”_1}=max{f_1(F'_1), f_1(F”_1)}. Note that F'_1≺ F_1 ≺ F”_1, so combining with (<ref>), we have the following claim.f_1(F'_1)<f_1(F_1)≤max{f_1(F'_1), f_1(F”_1)}. More over, f_1(F'_1)<f_1(F_1) implies f_1(F_1)< f_1(F”_1).It is not hard to see the above claim using Claim <ref>, we will explainthis in the Appendix. We complete the proof of Claim <ref>. Using a similar argument to the proof of Theorem <ref>, we obtain ∑_i=1^t|ℱ_i|=max{λ_1, λ_2} under Assumption <ref>. Combining with Proposition <ref>, we get ∑_i=1^t|ℱ_i|=max{λ_1, λ_2, λ_3}. This complete the proof of Theorem <ref>.§ THE UNIQUENESS IN THEOREM <REF> § APPENDIXIf |ℱ_i|<n-1 k_i-1 for each i∈ [t], then ∑_i=1^t|ℱ_i|< λ_1, it is a contradiction to (<ref>). So there is i∈ [t] such that |ℱ_i|≥n-1 k_i-1. First, we suppose i∈ [λ+1, t]. Since n≥ k_1+k_λ+1, n≥ k_i+k_j for all [t]∖{i}, then by (20) in <ref>, we have∑_j=1^t|ℱ_j|≤{∑_j=1^tn-1 k_j-1, n k_i-n-k_t k_i+∑_j=1,j i^tn-k_t k_j-k_t}≤{∑_j^tn-1 k_j-1, n k_1-n-k_t k_1+∑_j=2^tn-k_t k_j-k_t}≤{λ_1, λ_2},≤{λ_1, λ_2,λ_3},as desired. The second inequality holds by Claim 2.19 in <cit.> and the third inequality holds by n k_2-n-k_t k_2≥n-k_t k_2-k_t. Next we assume i∈ [λ].So |ℱ_j|≤n-1 k_1 for all j∈ [λ+1, t] Since k_1+k_λ+1≤ n<k_1+k_λ, so may assume |F_1|≥n-1 k_1-1. Suppose |ℱ_j|<n-1 k_j-1 for each j∈ [2, λ]. Let X={1}∪ [λ+1, t]. We have∑_i=1^t|ℱ_i| =M(n, k_1, …, k_t)=∑_j=2^λ|ℱ_j|+m(n, F_2, …, F_λ)=∑_j=2^λ|ℱ_j|+M(n, k_1, k_λ+1, …, k_t)<∑_j=2^λn-1 k_j-1+max{∑_i∈ Xn-1 k_i-1,n k_1-n-k_t k_1+∑_i=λ+1^tn-k_t k_i-k_t}=max{λ_1, λ_3}.A contradiction. So there is i∈ [2, λ] such that |ℱ_i|≥n-1 k_i-1. Since n≥ k_2+k_3, there is only one such i,as desired. This proof is a use of Claim <ref>. We denoteG_0 ={x_1, …, x_ℓ_k-1}∪[x_ℓ_k-1+1, x_ℓ_k-1+1+k_1-ℓ_k-1-2],G_1 ={x_1, …, x_ℓ_k-1}∪[x_ℓ_k-1+1, x_ℓ_k-1+1+k_1-ℓ_k-1-3]∪{n},G_2 ={x_1, …, x_ℓ_k-1}∪[x_ℓ_k-1+1, x_ℓ_k-1+1+k_1-ℓ_k-1-4]∪{n-1, n}, ⋮G_ℓ_k-ℓ_k-1+1 ={x_1, …, x_ℓ_k-1}∪[n-k_1+ℓ_k-1+1, n]=F”_1.Then F'_1<G_0≺ G_1…≺ G_ℓ_k-ℓ_k-1+1. By Claim <ref>, we obtianmax_G_0≺ R ≺ G_ℓ_k-ℓ_k-1+1 f_1(R)=max_i∈[0, ℓ_k-ℓ_k-1+1] f_1(G_i)=max{G_0, F”_1}.On the other hand, if f_1(F'_1)≤ f_1(G_0), then f_1(F”_1)>f_1(G_0), thusmax_F'_1≺ R ≺ F”_1 f_1(R)=max{F'_1, F”_1}.Notice that F_1=G_i for some i∈ [ℓ_k-ℓ_k-1]. Then f_1(F'_1)<f_1(F_1) implies f_1(F_1)<f_1(F”_1). This complete the proof of Claim <ref>. § ACKNOWLEDGEMENTSThis work was supported by NSFC (Grant No. 11931002).99 EKR1961 P. Erdős, C. Ko, R. Rado, Intersection theorems for systems of finite sets, Quart. J. Math. Oxf. 2(12) (1961) 313–320. FK2018 P. Frankl, A. Kupavskii, Erdős-Ko-Rado theorem for {0, ±1}-vectors, J. Comb. Theory Ser. A 155 (2018), 157–179. KK3 P. Frankl, A. Kupavskii, Sharp results concerning disjoint cross intersecting families, Europ J. Combin 86 (2020) 103089. FT P. Frankl, N. Tokushige, Some best possible inequalities concerning crossing-intersecting families, J. Combin. Theory Ser. A 61 (1992) 87-97. 
HP Yang Huang, Yuejian Peng, The maximum sum of sizes of non-empty pairwise cross intersecting families. arXiv: 2306.03473.HP+ Yang Huang, Yuejian Peng, Coefficients added non-empty pairwise cross intersecting families. Manuscript.mix2 Yang Huang, Yuejian Peng, Mixed pairwise cross intersecting families (II). In preparation.HM1967 A.J.W. Hilton, E.C. Milner, Some intersection theorems for systems of finite sets, Quart. J. Math. Oxf. 2 (18) (1967) 369-384.KK4 A.J.W. Hilton, The Erdős-Ko-Rado theorem with valency conditions, Unpublished Manuscript, 1976.H A.J.W. Hilton, An intersection theorem for a collection of families of subsets of a finite set, J. London Math. Soc. 2 (1977) 369-376.KK1 G.O.H. Katona, A theorem of finite sets, in: Theory of Graphs, Proc. Colloq. Tihany, Akadémai Kiadó, (1968) 187-207.KK2 J.B. Kruskal, The number of simplices in a complex, in: Math. Opt. Techniques, Univ. of Calif. Press, (1963) 251-278. SFQ2020 C. Shi, P. Frankl, J. Qian, On non-empty cross-intersecting families, Combinatorica 42 (2022) 1513–1525
http://arxiv.org/abs/2310.17859v1
{ "authors": [ "Yang Huang", "Yuejian Peng" ], "categories": [ "math.CO", "05D05, 05C65, 05D15" ], "primary_category": "math.CO", "published": "20231027022405", "title": "Mixed pairwise cross intersecting families (I)" }
[email protected] Fondazione Istituto Italiano di TecnologiaCenter for Life Nano-Neuroscience at la SapienzaViale Regina Elena 291, 00161 Roma, Italy Fondazione Istituto Italiano di TecnologiaCenter for Life Nano-Neuroscience at la SapienzaViale Regina Elena 291, 00161 Roma, Italy We present a quantum computing algorithm for fluid flows based on the Carleman-linearization of the Lattice Boltzmann (LB) method. First, we demonstrate the convergence of the classical Carleman procedureat moderate Reynolds numbers, namely for Kolmogorov-like flows.Then we proceed to formulate the corresponding quantum algorithm, includingthe quantum circuit layout and analyze its computational viability. We show that, at least for moderate Reynolds numbersbetween 10 and 100, the Carleman-LB procedure can be successfully truncatedat second order, which is a very encouraging result.We also show that the quantum circuit implementing the single time-step collision operatorhas a fixed depth, regardless of the number of lattice sites. However,such depth is of the order of ten thousands quantum gates,meaning that quantum advantage over classical computingis not attainable today,but could be achieved in the near-mid term future. The same goal for the multi-step version remains however an open topic for future research. Lattice Boltzmann-Carleman quantum algorithm and circuit for fluid flows at moderate Reynolds number Sauro Succi Received: January 14, 2024; accepted: October 20, 2023 ====================================================================================================§ INTRODUCTION Quantum computing <cit.> holds promise to provide dramaticspeed up to the solution of a number of major scientificproblems, including advanced industrial and societal applications  <cit.>.The so-called quantum advantage stems from the deepest (and most counterintuitive) features of quantum mechanics, in particular, superposition and entanglement of quantum stateswhich offer, at least in principle, the chance to exploit the full Hilbert space, scaling exponentiallywith the number of qubits, the smallest bit of quantum information.This feature provides a natural way out to the infamous"curse of dimensionality",plaguing the simulation of most quantum many-body problems, both classical and quantum <cit.>.Yet, realizing such a mind-boggling potential faces with a number of steep challenges, both conceptual and technological, primarily fast decoherence and, even more so, the quantum noise affecting theoperation of real-life quantum computers.Understandably, quantum computing to date has been directed mostly to quantum physics problems,featuring a one to one mapping between the physical system to be simulated and the quantum hardware <cit.>. Yet, there is a mounting interest in learning whether the potential of quantum computingcan be put at use also for solving the most compelling problems in classical physics, as typicallydescribed by strongly nonlinear partial differential equations <cit.>.In this respect, fluid turbulence stands out as a prominent candidate, both in terms of fundamental physics and also in view of its pervasive applicationsin both natural and industrial phenomena.This work inscribes precisely within the aforementioned scenario; we shall present aquantum algorithm, and the associated circuit, solving the basic (Navier-Stokes) equationswhich govern the physics of dissipative fluids. 
For reasons to be apparent in the sequel, our strategy isbased on the Lattice Boltzmann formalism for fluid flows.The paper is organized as follows.In Section <ref> we introduce the classical equation of motions. In Section <ref> we review the lattice Boltzmann method. In Section <ref> we focus onthe Carleman linearization. We introduce two ways to deal with the infinite number of variables, andwe show how to conveniently cut down the size of the system of equations.We then show in Section <ref> an explicit analysis of the CL method and itsperformance on a classical computer.In Section <ref> we define the embedding of the Carleman variables into thespace of qubits and we explicitly construct the quantum circuit.This consists of two separate steps, to be applied in series, the collision and the multi-streaming operator.Finally, in Section <ref> we draw preliminary conclusions and draw a prospective outlook for future works in this area.§ THE EQUATION OF MOTION OF CLASSICAL FLUIDS The Navier-Stokes equations (NSE), read as follows: ∂_t ρ + ∂_a(ρ u_a) = 0 ∂_t (ρ u_a) + ∂_b (ρ u_a u_b) = -∂_a p + ∂_b σ_ab+F_a with u_a being the macroscopic velocity of the fluid, ρ the fluid density, F_a the external force, p the pressure of the fluid and σ_ab the dissipative tensor.The latin indices a,b run over the cartesian coordinates x,y and z.The first line (<ref>) is the continuity equation, whereas the second line (<ref>) is a vectorial representation of the evolution of the macroscopic velocities. We use here the Einstein convention, by which repeated indices are summed upon. Eqs. (<ref>) are non-linear partial differential equations, and the strengthof the non-linearity bears heavily on our ability to solve the NSE, either analytically, or even using the most advanced computational fluid dynamics (CFD) methods <cit.>. The Reynolds number Re is a measure of the non-linearity of the system andit is defined as the ratio between the inertial and viscous forces, Re= u·∇ u/ν∇^2 u, as given by the ratio: Re = |u|L/ν, where |u| is the magnitude of the macroscopic velocity and L is the global system size. To be noted that the Reynolds number takes on very large numbers also under very mundane conditions,an ordinary car moving at a standard speed already features Re ∼ 10^7, ten millions. Given that the computational complexity of fluid turbulence scales like Re^3, this means that 10^21 active degrees of freedom need to be tracked in order to simulate the dynamics of a full car. With order thousands floating point operation per degree of freedom, this yields 10^24 floating point operations, implying 10^6 CPU/GPU seconds, about two weeks, to complete the simulationon an ideal Exascale computer. So much for an ordinary car. Consider now the problem of numerical weather forecast, which implies Reynolds numbers easily in the order of 10^10, leading to an intractable problem for any foreseeable classical computer. These simple figures speak clearly for the motivation to investigate the possibility of exploiting quantum computers for simulating classical turbulence <cit.>.CFD is a traditional forefront of computational science, with a ceaseless quest forbetter and more efficient computational methods. In the last three decades, the Lattice Boltzmann method (LBM) , has gained a prominent role in the CFD arena <cit.>. In a nustshell, LBM is a stylized version of the Boltzmann equation which retains the essential physics of fluids within a very efficient computational kinetic-theory harness. 
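As a side remark, the back-of-the-envelope estimates quoted above are easily reproduced; the few Python lines below simply restate them, where the figure of 10^3 floating-point operations per degree of freedom and the ideal exascale rate of 10^18 flop/s are the same rough assumptions used in the text.

Re = 1.0e7                       # Reynolds number of an ordinary car
dof = Re**3                      # active degrees of freedom, ~ Re^3
flops = 1.0e3 * dof              # ~10^3 floating-point operations per degree of freedom
exascale = 1.0e18                # flop/s of an ideal exascale machine
seconds = flops / exascale
print(f"dof ~ {dof:.0e}, flops ~ {flops:.0e}, "
      f"wall time ~ {seconds:.0e} s (~{seconds/86400:.0f} days)")

Estimates of this kind are what make a quantum alternative attractive, and they also explain the central role played in the present strategy by the LBM.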
It consists of two basic processes: streaming, by which particles move freely from one lattice site to thenext, and collisions, whereby particles exchange mass, momentum and energy, so as to sustain the collectivedynamics telling fluids apart from a "wild bunch" of independent particles.The streaming is minimally non local, as it connects single-cell neighbors, but linear, in factexact, as no information is lost in moving information from one lattice site to another.Conversely, collisions hold the (quadratic) nonlinearity of the physics of fluids, but in a local form, because, consistently with Boltzmann's kinetic theory, only particles in the same lattice site interact with each other. This clearcut separation between nonlinearity and nonlocality is possibly the most profound hallmark of the LBM, and the basic reason for its computational success especially on parallel computers. It is therefore reasonable to investigate whether the advantages of this clearcut separationcarries on to the quantum computing scenario.At a superficial glance, the prospects of using a quantum computer for CFD look promising. The number of qubits, q, required to store Re^3 dynamic degrees of freedom issimply given by:q = 3 log_2 Re ∼ 10 log_10 ReThis shows that q ∼ 70 qubits are in principle sufficient to quantum-simulate a car. And even numerical weather forecast, say Re ∼ 10^10 could be quantum-simulated with about q ∼ 100 qubits, well within the nominalcapability of current quantum hardware <cit.>. As mentioned above, many hurdles stand in the way of this blue-sky scenario. Leaving aside the notorious issues of quantum noise and decoherence, in the following we focus on two issues which are specific to classical fluids: nonlinearity and dissipation.Indeed, the dynamics described by Eqs. (<ref>),(<ref>) is nonlinear and subject to dissipation, whereas quantum algorithms consist into the application of a sequence of unitary, hence conservative, operators. This fundamental difference represents a serious obstacle to the formulation of aquantum algorithm capturing the NSE dynamics. A possible solution is provided by Carleman Linearization (CL) <cit.>.This is a general strategy to transform a non-linear equation into an infinite set of linear equations, promoting all the different monomials appearing in the nonlinear equation to independent variables. In order to numerically deal with the infinite newly-defined variables, a truncation is applied at a given problem-dependent level. CL removes the first obstacle to the resolution of the NSE through quantum computers, non-linearity.Better said, it trades nonlinearity for extra-dimensions and nonlocality, as it will become apparent shortly. To deal with the non-conservative part, we can use an extended circuit,first proposed in <cit.>, thatmakes use of an ancilla qubit to mimic the dissipative dynamics. Various Carleman-based LBM schemes have been proposed in the recent literature <cit.>, but, to the best of our knowledge, none of them provided an explicit description of the corresponding quantum circuit.§ THE LATTICE BOLTZMANN METHOD The LBM has been proved particularly efficient to simulate the NSE, both at low and high Reynolds number <cit.>.LBM models the system in a d dimensional regular lattice and definesQ vectorial velocities c_i pointing toward the neighboring sites.The probability distribution functions f_i(x,t), with i=0,…,Q-1 represent theprobability of arepresentative fluid particle at lattice site x at time t ofhaving velocity c_i. 
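For concreteness, a minimal Python sketch of how the discrete velocities, weights and populations can be laid out for the D2Q9 lattice used below is given here. The ordering of the velocities is one common convention and may differ from the labelling of Table 1, while the weights, the value c_s^2=1/3 and the rest-equilibrium initialization are standard; the lattice size is an arbitrary assumption of the sketch.

import numpy as np

# One common D2Q9 ordering (an assumption of this sketch; Table 1 may label differently):
# index 0 = rest particle, 1-4 = axis directions, 5-8 = diagonals.
c = np.array([[ 0, 0],
              [ 1, 0], [ 0, 1], [-1, 0], [ 0,-1],
              [ 1, 1], [-1, 1], [-1,-1], [ 1,-1]])   # discrete velocities c_i
w = np.array([4/9] + [1/9]*4 + [1/36]*4)             # lattice weights w_i
cs2 = 1.0/3.0                                        # lattice sound speed squared, c_s^2
Q = 9

Nx = Ny = 32                                         # illustrative lattice size (an assumption)
f = np.ones((Q, Nx, Ny)) * w[:, None, None]          # populations f_i(x,t), initialised at rest

With the populations stored in this way, the hydrodynamic fields are recovered as low-order velocity moments of the f_i.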
The vector notation is relaxed for simplicity.The distribution functions are related to the macroscopic quantities fluid densityρ and fluid velocity u by the linear relations: ρ(x,t) = ∑_i=0^Q-1f_i(x,t), ρ(x,t)u(x,t) = ∑_i=0^Q-1 c_if_i(x,t), where the discretized velocities c_i are vectors with components either -1,0 or 1, see Table <ref> and Fig. <ref> for an explicit example in D=2 and Q=9.The lattice Boltzmann equation (LBE) reads as follows<cit.>: f(x+c_i,t+1)-f_i(x,t) = -Ω_ifor i=0,…, Q-1.where the left hand side is the free streaming along the i-th direction and the left hand side is the discrete-velocity collision operator. The time step has made unity for simplicity.It proves expedient to use the Bhatnagar–Gross–Krook (BGK) relaxation expression of the collision termΩ_i=-1/τ(f_i-f_i^eq) where τ is the relaxation time-scale and the local equilibrium f_i^eq is definedthrough a Taylor expansion of the Boltzmann equilibrium distribution, as follows: f_i^eq(x,t) = w_iρ(x,t)(1+u· c_i/c_s^2+ (u· c_i)^2/2c_s^4-u· u/2c_s^2), where c_s is the lattice speed of sound, typically 1/√(3) for most lattices. The weights w_i can be obtained from the expansion of the equilibrium function (<ref>) in terms of Hermite polynomials and depend on the number of dimensions D and discrete velocities Q.They are reported in Table <ref> for the D2Q9 model.The dynamics ruled by Eq. (<ref>) consists of two computational steps.First, the collision step is given by a local and non-linear operation that transforms the pre-collisional state f_i(x,t) into the post-collisional one f^*_i(x,t), as follows: f^*_i(x,t)=(1-ω)f_i(x,t)+ω f_i^eq, where we have used the BGK form of the collision operator, and defined ω=Δ t/τ, with Δ t being the time step of the evolution. From Eq. (<ref>) we see that if ω=1 thesystem collapses to the local equilibrium at each time step.In general, LBM allows values of ω<2 for matter of (linear) stability. Second, the streaming step shifts the density functions to the nearest-neighbor site, as f_i(x+c_i,t+Δ t)=f^*_i(x,t). The streaming process represents the free-motion of the of the fluid parcelacross the lattice. Notice that the particles always land on a lattice site. §.§ Weakly compressible limit For weakly-compressible flows, the density can taken as nearlyconstant, with a value ρ≈ 1.This allows to write 1/ρ∼(2-ρ), which permits to write the equilibrium function (<ref>) as acubic function of the density distributions, namely: f_i^eq= L_ijf_j+Q_ijkf_jf_k+T_ijklf_jf_kf_l, with the linear, quadratic and cubic operators defined as L_ij= w_i(1+c_i· c_j/c_s^2),Q_ijk=w_i/c_s^4(c_i· c_jc_i· c_k-c_s^2c_j· c_k),T_ijkl= -1/2Q_ijk,∀ l. The collision step can then be rewritten in mode-coupling form as follows: f_i^*= A_ijf_j+B_ijkf_jf_k+C_ijklf_jf_kf_l, where the matrices A, B and C are obtained from the previous operators via the relations A_ij= (1-ω)δ_ij+ω L_ij,B_ijk=ω Q_ijk,C_ijkl= -ω/2Q_ijk,∀ l. To be noted that the nonlinearity is formally carried out by the Machnumber 𝐌𝐚=u/c_s which is O(1), in stark contrast with the Reynolds number, easily in the multimillions for macroscopic objects, e.g. a standard automobile. This is a potentially major advantage over the continuum formulation of fluid dynamics. § THE CARLEMAN LINEARIZATION As noted earlier on, the Carleman linearization <cit.>transformsa finite-dimensional non-linear problem into an infinite set of linear equations. 
This technique makes the problem more suited to quantumcomputers, as the quantum mechanics of a closed system relieson linear algebra (leaving aside the measurement problem, of course). It consists of assigning the status of independent dynamic variable to any of the monomials in Eq. (<ref>), thus defining the non-local variables g_ij(x_1,x_2) ≡ f_i(x_1)f_j(x_2), h_ijk(x_1,x_2,x_3) ≡ f_i(x_1)f_j(x_2)f_k(x_3), and soon for the higher degrees polynomials.We sketch an example of these functions in Figure <ref>.We then define the Carleman vector V=(f_0(x_1),…,g_00(x_1,x_1),…,h_Q-1,Q-1,Q-1(x_N,x_N,x_N),…) as the vector including all the possible products of functions f localized at each of the N lattice sites.The Carleman system is an infinite-dimensional set of linear equations, which can symbolically be written as: V^*=𝒞V, where V^* is the Carleman vector after collision, and 𝒞 is the Carleman matrix, whosecomponents can be obtained by Eq. (<ref>).In the following, we consider two different procedures for dealing with the infinitenumber of Carlemanvariables, truncation and closure. §.§ Carleman truncation We ignore the higher degree terms by applying a truncation to the Carleman system, by simply neglecting the terms with degree above the order of the truncation. §.§.§ Second order Carleman truncation At 2nd order, all the terms with degree 3 or higher are set to zero, (h=0).Thus, we write:f_i^*(x_1) = A_ijf_j(x_1)+B_ijkg_jk(x_1,x_1),g_ij^*(x_1,x_2) = A_ikA_jlg_kl(x_1,x_2). Therefore, the Carleman system can be written in terms of the vector V_tr^(2), which collects all the components of f and g at each lattice site, and the corresponding vector after collision V_tr^(2)*.The linear relation between the two is given by the matrix 𝒞_tr^(2) that collectsthe elements from Eq. (<ref>).§.§.§ Third order Carleman truncation With the truncation at 3rd order, the Carleman equations read as follows: f_i^*(x_1) = A_ijf_j(x_1)+B_ijkg_jk(x_1,x_1)+C_ijklh_jkl(x_1,x_1,x_1),g_ij^*(x_1,x_2) = A_ikA_jlg_kl(x_1,x_2)+ A_ikB_jlmh_klm(x_1,x_2,x_2)+B_iklA_jmh_klm(x_1,x_1,x_2),h_ijk^*(x_1,x_2,x_3) = A_ilA_jmA_knh_lmn(x_1,x_2,x_3).§.§ Carleman closure We propose an alternative and slightly more accurate method to cut down the number of Carleman variables.We approximate the product of d functions f to the product of d-1 functions multiplied by a constant.We choose this constant to be the LBM weight of the corresponding index that has been removed, thereby approximating the function with its steady equilibrium value.§.§.§ Second order Carleman closure We give here an explicit example of the closure procedure at second order approximatingthe following third order polynomial as f_i(x_1)f_j(x_2)f_k(x_3)= 1/3[f_i(x_1)g_jk(x_2,x_3)+f_j(x_2)g_ik(x_1,x_3)+f_k(x_3)g_ij(x_1,x_2)]≈ 1/3[w_ig_jk(x_2,x_3)+w_jg_ik(x_1,x_3)+w_kg_ij(x_1,x_2)]. At this order, the closure affects directly thedefinition of theequilibrium function, as the term C_ijklf_jf_kf_l changes and the closure leads to f_i^*= A_ijf_j+B_ijkf_jf_k+C_ijklf_jf_kf_l = A_ijf_j+B_ijkf_jf_k+1/3C_ijkl(w_jf_kf_l+w_kf_jf_l+w_lf_jf_k) = A_ijf_j+5/6B_ijkf_jf_k, where the last line has been obtained via the relations C_ijklw_j = C_ijklw_k = 0, C_ijklw_l =-B_ijk/2. The Carleman form of the LBE after closure at second order is given by: f_i^*(x_1) = A_ijf_j(x_1)+5/6B_ijkg_jk(x_1,x_1)g_ij^*(x_1,x_2) = A_ikA_jlg_kl(x_1,x_2)+5/18[w_iB_jklg_kl(x_2,x_2)+w_jB_iklg_kl(x_1,x_1)]. 
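To make the truncated collision concrete, the following single-site Python sketch assembles the matrices A and B of Eq. (<ref>) and applies the second-order truncated update. It reuses the D2Q9 arrays c, w and cs2 defined in the earlier sketch, and the relaxation frequency ω = 1.5 is an arbitrary illustrative choice:

import numpy as np

cc = (c @ c.T).astype(float)                     # c_i . c_j
L = w[:, None] * (1.0 + cc / cs2)                # linear operator L_ij
Qop = (w[:, None, None] / cs2 ** 2) * (cc[:, :, None] * cc[:, None, :] - cs2 * cc[None, :, :])
omega = 1.5                                      # illustrative relaxation frequency
A = (1.0 - omega) * np.eye(9) + omega * L        # A_ij
B = omega * Qop                                  # B_ijk

def carleman2_collide(f):
    """Second-order truncated collision at a single site: all cubic terms h are set to zero."""
    g = np.outer(f, f)                           # g_jk = f_j f_k at the same site
    f_star = A @ f + np.einsum('ijk,jk->i', B, g)
    g_star = np.einsum('ik,jl,kl->ij', A, A, g)  # g*_ij = A_ik A_jl g_kl
    return f_star, g_star

The closure variant of the same order differs only by the 5/6 and 5/18 prefactors and by the additional w-weighted terms coupling g functions at different sites.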
We see that the closure makes the collision step non-diagonal, as it involves g functions located at different sites.§.§.§ Third order Carleman closure Closure at third or higher order does not change the equilibrium function.The combination of four distribution functions is approximated to f_i(x_1)f_j(x_2)f_k(x_3)f_l(x_4) ≈ 1/4[w_ih_jkl(x_2,x_3,x_4)+w_jh_ikl(x_1,x_3,x_4)+w_kh_ijl(x_1,x_2,x_4) +w_lh_ijk(x_1,x_2,x_3)]. The collision step becomes: f_i^*(x_1) = A_ijf_j(x_1)+B_ijkg_jk(x_1,x_1)+C_ijklh_jkl(x_1,x_1,x_1),g_ij^*(x_1,x_2) = A_ikA_jlg_kl(x_1,x_2)+A_ikB_jlm(7/8h_klm(x_1,x_2,x_2)+1/4w_i∑_nh_lmn(x_2,x_2,x_2)) +B_iklA_jm(7/8h_klm(x_1,x_1,x_2)+1/4w_j∑_nh_lmn(x_1,x_1,x_1))h_ijk^*(x_1,x_2,x_3) = A_ilA_jmA_knh_lmn(x_1,x_2,x_3)+ 1/4[w_iA_jlB_kmnh_lmn(x_2,x_3,x_3)+w_jA_ilB_kmnh_lmn(x_1,x_3,x_3)] +1/4[w_iB_jlmA_knh_lmn(x_2,x_2,x_3)+w_kA_ilB_jmnh_lmn(x_1,x_2,x_2)] +1/4[w_jB_ilmA_knh_lmn(x_1,x_1,x_3)+w_kB_ilmA_jnh_lmn(x_1,x_1,x_2)] +1/32∑_n(w_iw_jB_klmh_lmn(x_3,x_3,x_3) +w_iw_kB_jlmh_lmn(x_2,x_2,x_2)+w_jw_kB_ilmh_lmn(x_1,x_1,x_1)). § COMPARISON BETWEEN THE EXACT LBM AND THE CARLEMAN LINEARIZED MODEL In this section, we present the simulations of a two dimensional system with constant pressureand no external forces. We consider a Kolmogorov-like flow on a grid of N=N_xN_y points.The distribution functions are initialized as follows: f_i(x,y) = w_i[1+A_xcos(2π/N_yk_x y) c_i· c_1+A_ycos(2π/N_xk_y x) c_i· c_2], where the wave numbers k_x,y are integers and A_x,y is a positive amplitude between 0 and 1. By setting A_y=0, the velocity in the y direction is null, u_y=0 and the dynamics is purelylinear and dissipative, as the convective term vanishes, u·∇ u=0.The dynamics is then ruled only by the linear term ν∇^2 u. In this regime, the velocity u_x evolves following an exponential decaying function, with u_x(t) = u_x(0)exp{-ν k^2 t}, with theviscosity ν being a function of the LBM parameter ω, ν = 1/6(2/ω-1), in lattice units with Δ t = Δ x =1. For this linear regime, we show in Fig. <ref>(a) the velocity at t=0 for a grid with N=32× 32, choosing A_k=0.3 (u_x=0.1) and k_x=1. Simulations using the “exact" LBMrecover the exponential decay, as shown in Fig. <ref>(b) for differentvalues of ω.In order to compare the LBM with the CL approximation, we define theRoot Mean Squared Error (RMSE) ϵ(a,b) between two distributions a and b as ϵ(a,b) = √(∑_i=1^N1/N(a_i-b_i/a_i)^2). In the above, N is the number of grid points.Accordingly, we end up with one RMSEfor each of the Q velocity directions of the LBM.In our analysis, we consider the mean value of the RMSE among the Q distributions, namely: ⟨RMSE⟩ = ∑_q=0^Q-11/Qϵ(f_q^LBM,f_q^CL), where f_q^LBM,f_q^CL are the distribution functions calculated with the LBM and the CL respectively. The quantity ⟨RMSE⟩ accounts for the mean deviation of the CL with respect to the results obtained by the LBM. From Fig. <ref>(c), we see that this deviation is of the order of 10^-4. This is fully consistent with the weak compressibility error of the standard LBM. 
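The diagnostics used in this comparison are simple to reproduce. The helper functions below are an illustrative transcription, assuming the populations are stored with shape (Q, Nx, Ny); the parameter values in the example line are arbitrary:

import numpy as np

def viscosity(omega):
    """Lattice viscosity nu = (1/6)(2/omega - 1), in units Delta t = Delta x = 1."""
    return (2.0 / omega - 1.0) / 6.0

def expected_decay(u0, omega, k, t):
    """Pure-dissipation prediction u_x(t) = u_x(0) exp(-nu k^2 t)."""
    return u0 * np.exp(-viscosity(omega) * k ** 2 * t)

def rmse(a, b):
    """Relative root-mean-square deviation between two population fields."""
    return np.sqrt(np.mean(((a - b) / a) ** 2))

def mean_rmse(f_lbm, f_cl):
    """<RMSE>: average of the per-direction RMSE over the Q populations (first axis)."""
    return np.mean([rmse(fa, fb) for fa, fb in zip(f_lbm, f_cl)])

# e.g. the k_x = 1 mode on a 32 x 32 grid with u_x(0) = 0.1 and omega = 1.9
print(expected_decay(u0=0.1, omega=1.9, k=2 * np.pi / 32, t=100))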
Upon increasing the value of A_y, we start observing the effects of the non-linearity, as the convective terms raises in magnitude and the Reynolds number becomes larger.Thus, the temporal evolution of u_x and u_y deviates from the exponential decay, as it is shown in Figure <ref>(a), with parameters A_x=0.3, k_x=1, A_y=0.2, k_y=4.For low ω (high dissipation, low Reynolds)the curves rapidly go back to the original decaying dynamics, whereas for ω=1.9, the deviation is evident even after longer time periods.The oscillations of the curve are due to the presence of small vortices caused by the mildly turbulent dynamics.In Fig. <ref>(b) we compare the ⟨RMSE⟩ between the truncation and the closure at second order. The Figure refers to ω=1.5, but similar deviations are obtained for different ω, as it is evident from Fig. <ref>(c). We see that the approximation brought by the closure leads to a lower ⟨RMSE⟩, thus mitigating the error. However, we notice that the order of magnitude between the two cut-off methods remains the same.The ⟨RMSE⟩ depends on the value of ω, as shown in Fig. <ref>(c), and therefore on the corresponding value of the Reynolds number Re.At low Reynolds the ⟨RMSE⟩ remains below 10^-3, which is anoteworthy result given the small number of grid-points employed in these simulations.We conclude that the Carleman truncation(closure) at second order does not introduce any error beyond the one inherent to the lattice Boltzmann procedure, at least up to Re ∼ O(100). While still far from turbulence, this is nonetheless encouraging, also in view of the fact that there are interesting and demanding problems in the physics of low-Reynolds fluids, particularly in the context of biology and soft matter <cit.>, and in quark-gluon plasma hydrodynamics <cit.>.§ QUANTUM CIRCUIT In the previous section we have shown the performance of the CL as applied to the LBM.We have seen that, even truncating the number of Carleman variablesat just second order, the error is about 10^-3. However, the number of Carleman variables increases as 𝒪(N^kQ^k),k being thetruncation order, since we need to multiply together all possible combinations of distributions functionswith different velocity at different spatial lattice locations. To convey a concrete idea of the numbers in play, we observe that a simulation of the D2Q9 modelon a 32× 32 square lattice implies a number of natural variables n_v = N× Q=9216, a number of Carleman variables at second order n_CL^(2)≈9×10^7 and at third order n_CL^(3)≈8×10^11. As a touchstone, the current near-exascale supercomputerscan handle of the order of trillion (10^12) grid points.Clearly, CL is not a viable option on classical computers: the dimensionality and nonlocalityprice is much higher than the linearity gain. One may argue that for each time step, the only non-local terms that affect the dynamicsare the ones involving neighbor sites, which would substantially reduce the number of Carleman variables. However, non-local correlations, involving neighbors up to order sare required,if the simulation is run as a single update from time t to time t+s.For instance, when a non-local variable g is calculated through the collision process, it streams into the neighboring sites serving as initial data for the subsequent collision process. 
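The scalings quoted above can be reproduced in a few lines. The sketch below counts all monomials up to the truncation order, an accounting convention consistent with the figures given in the text, and also reports the corresponding qubit numbers q = ⌈log_2 n_CL⌉ used in the next section:

import math

def carleman_counts(N, Q, kmax):
    """Global number of Carleman variables up to order kmax: sum over k of (N*Q)^k."""
    nv = N * Q
    return [sum(nv ** j for j in range(1, k + 1)) for k in range(1, kmax + 1)]

N, Q = 32 * 32, 9                          # D2Q9 on a 32 x 32 grid
for order, n in enumerate(carleman_counts(N, Q, 3), start=1):
    print(f"order {order}: n_CL = {n:.1e}, qubits = {math.ceil(math.log2(n))}")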
Figure <ref> illustrates the flow of the two points non-local function g(x_1,y_1) until it converges at the point x_3, thereby defining the local function g(x_3,x_3).The numbers provided above correspond to the global case s≥L+1/2, whereL is the linear size of the spatial domain.The situation changes if one aims to compute a number of time steps s<L+1/2. The number of variables reduces to N(Q+Q^2(1+2(s-1))^2). For a single time step this simplifies to n_CL=N(Q+Q^2), a significant reduction of variables.Nevertheless, the purpose of CL is to explore whether a quantum computer could do away with the above problems, and for the application on quantum computers, it may be more convenient to deal with a high number of variables than to reset the calculation and reinitialize the quantum state <cit.>. At the same time, dealing with a single time step leads to significant simplifications,to be detailed shortly.Future quantum computers might be able to handle this explosive increase of variables in case oflarge time-steps simulation, asthey would need a number of qubits q = log_2n_CL.This translates into only about q = 27 for the truncation at second order, and q = 40 forthe truncation at third order, both numbers being well within thenominal reach of current quantum computers. §.§ The quantum embedding We can embed the whole Carleman vector with amplitude encoding in a quantum state in the following way.To embed the linear components f we can use two quantum registers, such that |f⟩ = ∑_i=0^Q-1∑_x=0^N-1f_i(x)|i⟩_v_1|x⟩_p_1,where the subscripts v_1,p_1 stand for velocity and position registers respectively. The register v is composed by ⌈log_2Q⌉ qubits, while the register p by ⌈log_2N⌉ qubits. To embed the quadratic components g we use four quantum registers, such that |g⟩ = ∑_i,j=0^Q-1∑_x,y=0^N-1g_ij(x,y)|i⟩_v_1|j⟩_v_2|x⟩_p|y⟩_p,and the same embedding strategy can be applied for functions of higher degree.In order to assemble the Carleman vector collecting together f and g, we use an extraquantum register that contains the information about the truncation order τ.This is made by ⌈log_2τ⌉ qubits (just 1 qubit is necessary to embedthe truncation at second order). Thus, the two vectors can be merged togetherf →|f⟩|0⟩_v_2|0⟩_p_2|0⟩_τ,g →|g⟩|1⟩_τ. Although this embedding doesn't fill all the components of the quantum state, it providesa helpful way to define the streaming and collision operators. In the remaining of this section we propose a concrete way to implement the streaming andcollision steps of LBM with CL in terms of quantum operators. §.§ The Multi-streaming operator In this section we analyze the streaming step of the LBM and its effect on the Carleman variables. The linear components of the Carleman vector V,V^* of equation (<ref>),beforeand after collision respectively, f,f^*, are N× Q, where N=N_xN_y is the number of lattice sites. For these variables, the streaming is simply given by Eq.(<ref>).We can define S_i as the linear operator embedding the transformation to be appliedto the linear components f^*_i with velocity c_i.The streaming operation can thus be written as f(t+Δ t) = ⊕_i=0^Q-1S_if_i^*(t). where ⊕ is the symbol of the direct sum, i.e. each streaming operator S_i acts onlyon the subspace of the functions f_i and performs the shift f_i(x,t+Δ t) = f_i^*(x-c_i,t). The streaming operation S_i is therefore a controlled operation, conditioned by the value of the velocity register v_1. 
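Classically, each S_i is nothing but a permutation (cyclic shift) matrix on the position index, and the bi-streaming operator is their Kronecker product. The small numpy sketch below makes both statements explicit on a toy 4 x 4 grid; the assignments c_1 = (1,0) and c_2 = (0,1) are illustrative choices consistent with the composition rules used below:

import numpy as np

def shift_operator(Nx, Ny, cx, cy):
    """Permutation matrix for the periodic shift f_i(x, t+1) = f_i*(x - c_i, t)."""
    S = np.zeros((Nx * Ny, Nx * Ny))
    for x in range(Nx):
        for y in range(Ny):
            S[x * Ny + y, ((x - cx) % Nx) * Ny + ((y - cy) % Ny)] = 1.0
    return S

S1 = shift_operator(4, 4, 1, 0)            # streaming along c_1 = (1, 0)
S2 = shift_operator(4, 4, 0, 1)            # streaming along c_2 = (0, 1)
S12 = np.kron(S1, S2)                      # bi-streaming on g, first position index as the slow index
assert np.allclose(S1 @ S1.T, np.eye(16))  # each S_i is a permutation, hence unitary
assert np.allclose(shift_operator(4, 4, 1, 1), S1 @ S2)  # diagonal streaming as a composition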
The explicit form of the streaming operators S_i can be obtained from an adaptation of the circuit proposed in <cit.>, that uses just a polynomial number of two-qubit gates per streaming operator.We can consistently define the vector of the second order Carleman variables before and after collision g,g^*, where the quadratic components are given by all the possible pairs of Q and N, as stated before. The streaming on the quadratic component g_ij at position (x_1,x_2) applies the transformation g_ij(x_1,x_2,t+Δ t) = g^*_ij(x_1-c_i,x_2-c_j,t), and the corresponding operator is given by the tensor product of the linear streaming operators as g(t+Δ t) = ⊕_i,j=0^Q-1S_i⊗ S_j g^*_ij(t). From the circuital point of view, the streaming operator S_i⊗ S_j is a double-controlled operation conditioned by the velocity registers v_1 and v_2. In fact, the tensor product in Eq. (<ref>) means that the we can just apply linear streaming operators on the different position registers, as depicted in Fig. <ref>.As the two velocity registers run over log_2Q qubits, we have introduced the symbol for a multi-qubit controlled operation, conditioned by the numerical value of the Q-bit string. In Fig. <ref> we explicitly define the symbol for a simple case of two qubits acting as control. Furthermore, we see that the streaming in the diagonal direction, i.e. the one carried by the velocities c_5,c_6,c_7 and c_8, cf. Table <ref>, can be written as the compositionof two streaming operators in horizontal and vertical direction, so thatS_5 = S_1S_2, S_6 = S_2S_3, S_7 = S_3S_4, S_8 = S_4S_1. The explicit form of the bi-streaming operator allows us to extend its representation to higher Carleman orders.For instance, the streaming of the cubic components of the Carleman vector is given by thetensor product of three streaming operators, as h(t+Δ t) = ⊕_i,j,k=0^Q-1S_i⊗ S_j⊗ S_k h_ijk^*(t), and the extension applies naturally to Carleman variables of higher order.We notice that when performing the collision step, we need to calculate also the product of functions atdifferent lattice sites, as it is explicitly apparent from Eq. (<ref>).The application of the bi-streaming operator requires a non-local collision operator, i.e. we need tocalculate also the combinations g^*(x_1,x_2) for each pair of grid points.This is of course a major downside of CL as a quantum computing method, onethat complicates the circuital expression of the collision operator, as we are going to show next.§.§ The collision operator The collision step of the LBM is a non-linear operation that implements the relaxation of theprobability distributions towards the equilibrium distributions (<ref>).By definition, the dissipative process induced by the relaxation cannot be represented by acircuit performing unitary operations.However, we can circumvent this issue by using an ancilla qubit and makinguse of open quantum system theory.We can therefore implement a circuit that allows only unitary operations, but where only a subset of the available qubits is accounted for (the system). By tracing out the ancilla qubits (environment), we obtain a non-unitary operation on the system qubits. The circuit that implements the collision operator is detailed in Ref. 
<cit.>.This circuit is capable of performing non-unitary operations on the qubits by means of an ancilla qubit.It achieves this by decomposing a non-unitary, positive-definite matrix C into a linear combinationof two unitaries U_a and U_b, such thatC=U_a+γ U_b.The coefficient γ is chosen such that the maximum eigenvalue of C, c_M, fulfills the relation c_M≤ 1+γ. Moreover, a requirement needed to apply thisdecomposition is that c_M-c_m<2, where c_m is the minimum eigenvalue of C.Whenever this is not the case, we can define a renormalized matrix C̃=C/c_M and apply the decomposition to the new matrix. The coefficient has to be set to γ = 1-c_m/c_M <cit.>.The circuit is represented in Fig. <ref>. The Carleman vector is embedded in the state |ψ⟩, and an ancilla qubit is initialized in |0⟩_a.We perform an R_y rotation of angle Γ=arccos(√(γ/γ+1)), on the ancilla qubit, yielding the state |ψ⟩(cosΓ|0⟩_a + sinΓ|1⟩_a). The action of the anti-controlled and controlled unitaries leads to the state U_b|ψ⟩cosΓ|0⟩_a + U_a|ψ⟩sinΓ|1⟩_a and the inverse rotation R_y^†(Γ) on the ancilla qubit results into1/γ+1|0⟩(U_a+γ U_b)|ψ⟩+√(γ)/γ+1|1⟩(U_a-U_b)|ψ⟩. We finally measure the ancilla qubit.If the outcome is |0⟩, the state collapses onto (U_a+γ U_b)|ψ⟩, which is exactly the application of C on the state ψ. In this case, the algorithm succeeds and we can proceed with the streaming step.On the other hand, if the outcome of the measurement is 1, the update fails and the circuit needs to be repeated.The probability of success is constrained by p_0≤4γ/(γ+1)^2. Inspection of the collision matrix shows that it can be explicitly written as follows:𝒞 =|0⟩⟨0|_τ⊗1_p_1⊗∑_ij A_ij|i⟩⟨ j|_v_1⊗|0⟩⟨0|_p_2⊗|0⟩⟨0|_v_2+ |0⟩⟨1|_τ⊗1_p_1⊗∑_ijk B_ijk|i⟩⟨ j|_v_1⊗∑_x|0⟩⟨ x|_p_2⊗|0⟩⟨ k|_v_2+ |1⟩⟨1|_τ⊗1_p_1⊗∑_ik A_ik|i⟩⟨ k|_v_1⊗1_p_2⊗∑_jl A_jl|j⟩⟨ l|_v_2, or, in more compact form:𝒞 = [ 𝒜⊗ |0⟩⟨0|_p_2⊗ |0⟩⟨0|_v_2ℬ⊗∑_x|0⟩⟨ x|_p_2; 0 𝒜⊗𝒜⊗1_p_2 ]⊗1_p_1, where 𝒜 and ℬ represent the A and B matrices of Eq. (<ref>) embedded in the space of qubits. The different quadrants of the matrix (<ref>) refer to the values of the τ register. We note that all the components depend in a non-trivial way on the p_2 register.This does not permit to define the collision operator as a local process to be applied only on the velocityregisters v_1 and v_2,a feature which is rooted in the inherentnon-locality of theCarleman linearization.Because of this, the matrix cannot be written in sparse, block-diagonalform, a well known requirement for polynomial approximations in the number of quantum gates <cit.>.Thus, the two controlled operations of the unitaries U_a,U_b require a number of two-qubit gates of the order of 4^n, according to the theoretical lower bound <cit.>.Since the number of qubits in our system is n=⌈log_2(NQ+N^2Q^2)⌉+1, for aCarleman system truncated at second order, the corresponding number of two-qubit gates scales as 𝒪(N^4Q^4). For relevant cases, this number exceeds by several orders of magnitude thecurrent capacity of any quantum computer <cit.>.Just a simple 32 × 32 grid with 9 discrete populations, features (NQ)^4 ∼ 9000^4 ∼ 10^16 two-qubit gates.We numerically tested this result for the circuit ofthe collision operator with the IBM Qiskit package.To streamline the numerical analysis, we define an hermitian augmented Carlemanmatrix 𝒞^H as; 𝒞^H = [ 0 𝒞; 𝒞^T 0 ].where superscript T stands for “transpose".This passage simplifies the numerical representation of 𝒞^Has the weighted sum of unitary matrices (Eq. 
(<ref>)) with the minimaladdition of just one qubit.With this matrix at hand, we can translate the circuit into a sequence of two-qubit gatesusing Qiskit's decomposition tool.This straightforward step gives a number of two-qubit gates close to the theoretical lower-boundmentioned earlier, indicating that simplifying the circuit is by no means a trivial task.We stress here that this issue is common to any algorithm which aims at implementing a non-sparse matrix.§.§.§ Single time-step collision operator A possible getaway from this issue is to introduce a single-step collision operator.As highlighted in section <ref>, the non-local Carleman variables are needed onlyif one aims to apply the dynamics over multiple time steps.However, if only one single time step is implemented, the number of Carleman variablesreduces to N(Q+Q^2). Since all the Carleman variables are now local, the quantumregister p_2 can be dropped, and the matrix (<ref>) takes the simpler form: 𝒞_s=1 = [ 𝒜⊗ |0⟩⟨0|_v_2 ℬ; 0 𝒜⊗𝒜 ]⊗1_p_1, which is also local in qubit's space.This means that we can apply the collision operator only on the registersembedding the information about truncation order τ and the velocities v_1,v_2.In this framework, we can exploit the symmetry of the second order functions,such that g_ij=g_ji, to reduce even further the dimension of the numberof Carleman variables to 3/2NQ+NQ^2/2. Consequently, for the D2Q9 model, the matrix 𝒞_s=1 of Eq. (<ref>) can be written with 54 variables, and be embedded in the space of q=6 qubits, regardless of the number of lattice sites. Figures <ref>(a) and (b) show the matrices ofEqs. (<ref>) and (<ref>) in the case of a single lattice site respectively.We remind that for any lattice with some spatial extension, the matrix 𝒞 isnot further reducible, implying that the number of two-qubit gates grows exponentially with n, whereasfor the single time-step case, the matrix 𝒞_s=1 is encoded within a fixed number of gates,for any number of lattice sites, thus making it exponentially more efficient than any classical algorithm. We tested with Qiskit the number of two-qubit gates needed to construct thesingle step circuit. A number of two-qubit gates of the order of 4^7 =16384 is needed toproduce both the controlled U_a and U_b of the circuit, yielding a total valueabout ∼ 30,000 two-qubit gates.Although this number is too large for present-day quantum hardware,it might become viable in the near or mid-term. As is well known, the downside of any single time-step implementation, not just the Carleman LB algorithm discussed here, is the overhead due to the embedding and readout processes, that need to berepeated atevery single timestep,thereby spoiling the quantum advantage.These are important issues that need to be addressed in future work.Looking at Fig. 
<ref>(a), we see the specific symmetries of thenon-local operator C could lead to a lower numberof gates, and therefore future work should explore the best compiling method, usingseveral potential techniques <cit.>, for exampletensor network analysis <cit.> to minimize non-local correlations <cit.>.Another potential route is to apply the CL procedure to the fluid equations in their nativeNavier-Stokes form: the Carleman matrix is seemingly more complex due to the cross-correlationsbetween fluid density, flow and pressure,but likely to entail a lesser number of Carleman variables, with a consequent benefit on the depth of the algorithm.Finally,also the long-known option of special-purpose quantum hardware might be worth being revisited <cit.>.§ CONCLUSIONS AND OUTLOOK In this work we have developed a Carleman linearization of the Lattice Boltzmann dynamics for a weakly-compressible fluid for both classical and quantum computers. The most promising result is that the relative error between CL–LBM is well within the “physiological" level of the standard lattice Boltzmann, method at least for moderate Reynolds numbers up to 𝒪(10-100). Although this value does not describe turbulence, it definitely displays sizeable non-linear effects, leaving hope that turbulent regimes can be attended in the future,an hypothesis that can only be tested on quantum computers.The CL procedure becomes rapidly unfeasible on classical computers, showing thatin a classical framework, trading nonlinearity for extra-dimensions is a very inconvenient bargain. In fact, the exponential increase of Carleman variables with the truncationorder makes this method substantially useless for relevant applications, as the numberof variables quickly reaches the limit of exascale supercomputers even on very small grids.Nonetheless, we stress that the ultimate goal of CL is to implement the embedding of fluid dynamics onto quantum computers, where the number of qubits scales like the logarithmof the number of variables, thus taming the exponential increase of the number of Carleman variablesvia a suitable embedding onto qubits.To this purpose, we have proposed and described the explicit form of the quantumcircuit implementing the CL procedure as applied to the Lattice Boltzmann formulation of fluids.To the best of our knowledge, this is the first workdelivering an explicit formulation and implementation of the quantum algorithm into an actual quantum circuit.Specifically,we have derived the circuit for both the streamingand the collision operators and combined them in terms of global unitary gates.The latter circuit faces with a formidable depth problem, scaling like (NQ)^4,seemingly unviableon any quantum computer, unless dramatic improvements on error correction/mitigationprocedures are achieved in the coming years. However, this steep barrier can be tamed by turning to single-step formulations,featuringa fixed circuit depth, regardless of the number of lattice sites.Developing an efficient multi-step formulation stands out as a majorchallenge for the Carleman approach to quantum computing of fluids.Among prospective directions to be explored, we mention the Carleman procedure applied to the native Navier-Stokes formulations,concrete applications of the Solovay-Kitaev theorem,tensor-network analysis or the use of parametric quantum circuits. Finally,special-purpose quantum computers for fluid might also be worth revisiting. 
We acknowledge financial support from the National Centre for HPC, Big Data and Quantum Computing (Spoke 10, CN00000013). We also acknowledge the CERN and IBM Quantum Hub with which the Italian Institute of Technology (IIT) is affiliated. The authors gratefully acknowledge discussions with M. Maronese, A. Solfanelli, R. Steijl, K. Sreenivasan and W. Itani. The authors have no conflicts to disclose. The data that support the findings of this study are available from the corresponding author upon reasonable request.
http://arxiv.org/abs/2310.17973v4
{ "authors": [ "Claudio Sanavio", "Sauro Succi" ], "categories": [ "quant-ph", "physics.flu-dyn" ], "primary_category": "quant-ph", "published": "20231027083810", "title": "Lattice Boltzmann-Carleman quantum algorithm and circuit for fluid flows at moderate Reynolds number" }
Reinvestigation of ^91Sr and ^95Y atomic masses using the JYFLTRAP Penning trap [ January 14, 2024 ===============================================================================§ INTRODUCTIONThe phenomenology of B physics has become a rich and diverse study area at both dedicated B factories such as Belle and BABAR and also more general collider experiments such as those at the LHC <cit.>. Over many years, there has been enormous efforts put in by the experimental community to increase the precision of measurements of the decays and properties of B mesons (see e.g. Ref. <cit.>), and thus to fully leverage this success the precision of theoretical predictions for these decays and properties should similarly increase. In particular, the behaviour of neutral meson mixing provides a key insight into CP violation in the Standard Model (SM) and can help constrain elements of the Cabibbo-Kobayashi-Maskawa (CKM) quark mixing matrix which further tests the SM and aids in searches for new physics.Furthermore, the lifetime of a particle is one of its fundamental properties and thus is of great importance in testing the underlying theory for consistency. On the theory side, the lifetimes of B mesons are determined in the framework of the heavy quark expansion (HQE) where the result is described in terms of a series expansion in 1/m_b of perturbative Quantum Chromodynamics (QCD) contributions and non-perturbative Δ B=0 matrix elements; for a review, see e.g. Ref. <cit.>. In fact, multiple calculations of the Δ B=2 matrix elements have already been carried out on the lattice <cit.> (and also using QCD sum rules <cit.>). For recent overviews, see <cit.>; in addition RBC/UKQCD and JLQCD presented preliminary results for on-going work using an RI-SMOM renormalisation scheme <cit.>.When considering the ratio of B_s^0 over B^0 mixing, the SU(3) breaking parameter ξ can be defined <cit.>. Since renormalisation factors and other uncertainties cancel, the lattice determination of ξ is typically more precise <cit.> and ξ is frequently used in global CKM unitarity triangle fits <cit.>.The history of the Δ B=0 four-quark matrix elements is less thorough. After early quenched studies <cit.> and preliminary unquenched results <cit.> in the early 2000s, the topic received little attention from the lattice community until recently with some interest in lifetimes ratios and baryonic decays <cit.>. In the meantime, there have been predictions using QCD sum rules <cit.>. These matrix elements present additional challenges for a lattice calculation due to contributions from disconnected diagrams where the signal-to-noise ratio worsens.Moreover, mixing with operators of lower mass dimension occurs in standard renormalisation procedures.In the following we outline a non-perturbative renormalisation scheme utilising the gradient flow <cit.> and the  <cit.> in which operator mixing is absent.A perturbative matrix is required to match each quantity to thescheme, likely circumventing some of the difficulty in calculating Δ B=0 four-quark matrix elements.The method is first tested for Δ F=2 matrix elements where results can be verified against the literature.Our approach is similar to work by Suzuki et al. applying the  to neutral Kaon mixing or the determination of the energy-momentum tensor <cit.>.§ GRADIENT FLOW AND SHORT-FLOW-TIME EXPANSIONThe gradient flow <cit.> has become a well-known tool in lattice simulations with common usage for e.g. 
scale setting.One introduces an auxiliary dimension, the flow time τ [^-2] which acts as a UV regulator and provides a well-defined smearing of gauge and fermion fields through the first-order differential equations∂_τ B_μ(τ,x)=D_ν(τ)G_νμ(τ,x),B_μ(0,x) = A_μ(x),∂_τχ(τ,x)=D^2(τ)χ(τ,x),        χ(0,x) = q(x),where G_νμ(τ)=∂_ν B_μ(τ)-∂_μ B_ν(τ)+[B_ν(τ),B_μ(τ)] is the flowed gluon field strength tensor, D_ν(τ)=∂_ν+[B_ν(τ),·] is the flowed covariant derivative, A_μ,q are the regular gauge and fermion fields respectively and B_μ(τ),χ(τ) are those extended in the flow time. Operators evolved along positive gradient flow time are removed of UV divergences and are renormalised within a gradient flow (GF) scheme.An effective Hamiltonian expressed normally as a sum of operators O_m and their Wilson coefficients C_m can be rewritten in terms of these `flowed' operators Õ_n(τ) and similarly `flowed' Wilson coefficients C̃_n(τ):H_ eff = ∑_m C_m O_m = ∑_nC̃_n(τ)Õ_n(τ),where the flow-time dependence of the operators cancels with that of the coefficients <cit.>. In the , one can relate the `flowed' operators to the regular ones asÕ_n(τ) = ∑_mζ_nm(τ) O_m + O(τ) ∑_nζ_nm^-1(μ,τ)⟨Õ^ GF_n⟩(τ) = ⟨ O^_m⟩(μ),where higher-dimensional operators are accompanied by higher powers of τ and are expected to be negligible for small τ <cit.>. The perturbatively-calculated matrix ζ_nm^-1(μ,τ) matches the GF renormalised operators to thescheme.For a general heavy quark field F, we focus for now only on the bag parameter of the Δ F=2 four-quark operator O_1 = (F̅γ_μ(1-γ_5)q)(F̅γ_μ(1-γ_5)q).This is well-studied in the literature and is the only contributor to Δ M in the SM. In the future, we will extend our study to consider the full SUSY basis of Δ F=2 four-quark dimension-six operators as well as Δ F=0.The bag parameters are defined as ratios of the three-point matrix element of a four-quark operator O_i to its vacuum insertion approximation. For a pseudoscalar meson state P with mass m and decay constant f, the bag parameter of O_1 is defined, at leading order, asB_1 = ⟨ P| O_1|P⟩/8/3 m^2f^2.The perturbative calculations used in this work are described in <cit.>. At next-to-next-to-leading order (NNLO), the perturbative matching from GF toschemes for the bag parameter B_1 with number of flavours n_f is given byζ_B_1^-1(μ,τ) = 1+ a_s/4(-11/3 - 2L_μτ) + a_s^2/43200[-2376 - 79650L_μτ - 24300L_μτ^2 + 8250n_f + 6000 n_f L_μτ + 1800 n_f L_μτ^2 - 2775π^2 + 300 n_f π^2 - 241800log2  + 202500log3 - 110700Li_2(1/4)],where L_μτ=log(2μ^2τ)+γ_E and a_s=α_s/π. The final result for B_1 in thescheme is given byB_1^(μ) = lim_τ→0ζ_B_1^-1(μ,τ)B_1^ GF(τ). § LATTICE CALCULATIONWe will consider six RBC/UKQCD 2+1-flavour domain-wall fermion (DWF) and Iwasaki gauge field ensembles with three lattice spacings a∼ 0.11, 0.08, 0.07 (determined by RBC/ UKQCD <cit.>) and pion masses ∈[267,433).Light and strange quarks are simulated with the Shamir DWF action <cit.> with M_5=1.8.These ensembles are listed in Table <ref>.Heavy quarks are simulated using stout-smeared gauge fields <cit.> and the Möbius DWF action <cit.>, where the mass has been tuned to the physical charm on each ensemble through the D_s pseudoscalar meson <cit.>. Using a similar setup as Ref. 
<cit.>, all propagators are generated with Z2-noise wall sources where the number of sources and smearing parameters are listed in Table <ref>; Gaussian smearing is also applied for the strange quarks.In the following, we use exploratory results obtained on the C1, C2, and M1 ensembles.While testing the validity of our method, we remove the additional complications of extrapolations in the valence sector, studying only strange and charm quarks at their physical values. As such, we currently consider the short-distance contributions to `neutral D_s' meson mixing.On the lattice, this is obtained in the large t and Δ T limit by the ratio of correlation functions,R_1(t,Δ T,τ) = C_ O_1^ 3pt(t,Δ T,τ)/8/3 C_AP^ 2pt(t,τ)C_PA^ 2pt(Δ T-t,τ)→ B_1^ GF(τ),where t is the Euclidean time and Δ T is the separation of the two sources used in the three-point function as shown in Figure <ref>, and C_AP^ 2pt(t,τ),C_PA^ 2pt(Δ T-t,τ) are the two-point functions with the pseudoscalar current at the sources and the flowed axial current at the sink. In this pilot study we only consider Δ T=28 for all data analysed so far.Dependence on the flow time τ is written here explicitly as the above ratio is evaluated for propagators taken at discrete steps in the flow. The Runge-Kutta evolution of the gradient flow is performed with ϵ=0.01. For small flow, measurements of two- and three-point functions are taken at steps of ε=0.1 in lattice units, with this `coarsening' to ε=0.4 for τ/a^2>5.§ FIRST RESULTSIn Figure <ref>, we present our first results for the Δ F=2 bag parameter B_1^ GF(τ) and its GF-to- matching coefficient at both NLO and NNLO, expressed with the renormalisation scale μ=3GeV both as functions of the gradient flow time τ in physical units.The plot on the left shows the dependence of the lattice data on the GF time τ converted to physical units. The clear overlap of the data from different ensembles indicates a mild continuum limit for large enough flow times (τ≳ 0.2GeV^-2) where the results become GF renormalised; for smaller flow times, the continuum limit may however carry a substantial systematic uncertainty. The plot on the right shows the perturbative matching with a clear difference between the next-to-leading order (NLO) and NNLO results. When combining with the data from the lattice simulations, we expect to obtain the -renormalised result by taking the τ→ 0 limit assuming a linear dependence on the GF time. The outcome is shown in Figure <ref>. The purple circles utilize the NLO matching to -scheme, whereas the orange squares use NNLO matching coefficients. In both cases an extended linear region is present. Using the NNLO coefficients this linear region extends to lower flow times compared to the NLO results. The next step is to seek a window in flow time where the gradient flow has had sufficient effect on the lattice results such that they are renormalised but the flow time is still small enough for higher-dimensional operators to be suppressed. If we use as a first guess a flow time window 0.25GeV^-2≤τ≤0.67GeV^-2 for NNLO and 0.39GeV^-2≤τ≤0.67GeV^-2 for NLO, we can perform the τ→ 0 extrapolation shown by the gray bands to obtain the renormalised bag parameter in the -scheme at zero flow time. While the difference in the prediction may indicate systematics due to the order of the perturbative matching, we also point out that other systematic effects, e.g. from the continuum extrapolation, need to be accounted for. 
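As an illustration of how the numbers entering these figures can be assembled, the short Python sketch below evaluates the matching factor of Eq. (<ref>) and performs the naive linear τ → 0 extrapolation inside a chosen flow-time window. The value of α_s(μ), the number of active flavours and the window edges are user-supplied inputs here, not results of this work, and only the central value is propagated:

import numpy as np

def li2(x, terms=200):
    """Dilogarithm Li_2(x) from its power series (adequate for x = 1/4)."""
    k = np.arange(1, terms + 1)
    return np.sum(x ** k / k ** 2)

def zeta_inv_B1(mu, tau, alpha_s, nf=3, order="NNLO"):
    """GF-to-MSbar matching factor for B_1 at NLO or NNLO, Eq. (<ref>)."""
    a = alpha_s / np.pi
    L = np.log(2 * mu ** 2 * tau) + np.euler_gamma
    nlo = (a / 4) * (-11 / 3 - 2 * L)
    if order == "NLO":
        return 1 + nlo
    nnlo = (a ** 2 / 43200) * (-2376 - 79650 * L - 24300 * L ** 2 + 8250 * nf
            + 6000 * nf * L + 1800 * nf * L ** 2 - 2775 * np.pi ** 2 + 300 * nf * np.pi ** 2
            - 241800 * np.log(2) + 202500 * np.log(3) - 110700 * li2(0.25))
    return 1 + nlo + nnlo

def extrapolate_tau0(tau, B1_GF, mu, alpha_s, window, **kw):
    """Match each flow time to MSbar and extrapolate linearly to tau -> 0 inside the window."""
    B1_MS = np.array([zeta_inv_B1(mu, t, alpha_s, **kw) * b for t, b in zip(tau, B1_GF)])
    mask = (tau >= window[0]) & (tau <= window[1])
    slope, intercept = np.polyfit(tau[mask], B1_MS[mask], 1)
    return intercept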
In addition we only consider a naive error estimate for the τ→ 0 limit which warrants improvement. Similar discussions regarding the τ→0 extrapolation for the energy-momentum tensor in the  framework are given in e.g. Refs. <cit.>.Further study is still required to fully understand the validity range of both extrapolations. Phenomenologically, `neutral D_s' mixing as is calculated here does not exist, however the results should be similar in magnitude to that of short-distance D^0 mixing since any spectator effects are expected to be small.In the literature, the short-distance matrix elements for D^0 mixing have been calculated on the lattice by FNAL/MILC at N_f=2+1 and ETMC at N_f=2+1+1, with μ=3GeV. ETMC finds a value of B_1^=0.757(27) <cit.> (they also have a calculation at N_f=2 <cit.>). FNAL/MILC quotes a value for ⟨ O_1⟩^; using PDG 2023 <cit.> and Eq. (<ref>), this leads to B_1^=0.795(56) <cit.>.In Ref. <cit.>, there also exists a QCD sum rules calculation which, using PDG 2023 <cit.>, results in B_1^=0.636^+0.091_-0.079. One can see in Figure <ref> that our preliminary results extracted here lie between the two literature values from lattice QCD and slightly above that from QCD sum rules. While further scrutiny is still required, this is a promising sign for our method as a novel renormalisation and matching-to- procedure. It motivates further study of Δ F=2 matrix elements in the  as a test case towards a calculation of the long-sought-after Δ B=0 four-quark matrix elements.§ SUMMARYThe Δ B=0 four-quark dimension-six matrix elements are important quantities in accurately and precisely predicting the lifetime of a B meson from the heavy quark expansion.Lattice QCD calculations of these matrix elements are strongly sought-after, but no full calculation has been performed to date, with part of the difficulty coming from mixing with lower-dimensional operators under standard renormalisation procedures. Here we have outlined the idea of using the gradient flow and  as an alternative renormalisation scheme and matching-to- method to bypass the issue of operator mixing. First simulations were carried out with the focus on Δ F=2 operators where results can be validated against lattice calculations in the literature. Removing additional extrapolations, the initial analysis has been performed at the physical D_s scale. Preliminary results show promise and consistency with literature values of short-distance contributions to D^0 mixing. However further scrutiny on estimating systematic uncertainties is warranted and getting deeper insight in how to choose the flow time window is desired.In future work, we aim to extend simulations to all lattice ensembles listed in Table <ref>, and also to multiple heavy quark masses and replacing the strange quarks with light quarks.This will allow extrapolation to physical B and B_s systems where further validation against Δ B=2 calculations may be done and physical results for the ultimate goal of the Δ B=0 matrix elements can be reached.Measurements were performed using  <cit.> and  <cit.>. Computations used resources provided by the OMNI cluster at the University of Siegen and the HAWK cluster at the High-Performance Computing Center Stuttgart.This work was partially supported by DeiC National HPC (g.a.DEIC-SDU-L5-13). 
We used gauge field configurations generated on the DiRAC Blue Gene Q system at the University of Edinburgh, part of the DiRAC Facility, funded by BIS National E-infrastructure grant ST/K000411/1 and STFC grants ST/H008845/1, ST/K005804/1 and ST/K005790/1. M.B., R.H., F.L., O.W. received support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through grant 396021762 - TRR 257 “Particle Physics Phenomenology after the Higgs Discovery”. The work of F.L. was supported by the Swiss National Science Foundation (SNSF) under contract https://data.snf.ch/grants/grant/211209TMSGI2_211209. We thank ECT* for support at the Workshop “The Gradient Flow in QCD and other strongly coupled field theories” during which this work has developed. Special thanks is given to Felix Erben, Ryan Hill, and J. Tobias Tsang for assistance in setting up the simulation code.JHEP-jmf-arxiv
http://arxiv.org/abs/2310.18059v1
{ "authors": [ "Matthew Black", "Robert Harlander", "Fabian Lange", "Antonio Rago", "Andrea Shindler", "Oliver Witzel" ], "categories": [ "hep-lat" ], "primary_category": "hep-lat", "published": "20231027112218", "title": "Using Gradient Flow to Renormalise Matrix Elements for Meson Mixing and Lifetimes" }
1] MirHamed Jafarzadeh Asl 2]Mohammadhadi Shateri 1]Fabrice Labeau[1]Department of Electrical and Computer Engineering, McGill University, QC, Canada, Email: [email protected], [email protected] [2]Department of Systems Engineering, École de Technologie Supérieure, QC, Canada, Email: [email protected]α-Mutual Information: A Tunable Privacy Measure for Privacy Protection in Data Sharing[ January 14, 2024 =======================================================================================plain plainThis paper adopts Arimoto's α-Mutual Information as a tunable privacy measure, in a privacy-preserving data release setting that aims to prevent disclosing private data to adversaries. By fine-tuning the privacy metric, we demonstrate that our approach yields superior models that effectively thwart attackers across various performance dimensions. We formulate a general distortion-based mechanism that manipulates the original data to offer privacy protection. The distortion metrics are determined according to the data structure of a specific experiment. We confront the problem expressed in the formulation by employing a general adversarial deep learning framework that consists of a releaser and an adversary, trained with opposite goals. This study conducts empirical experiments on images and time-series data to verify the functionality of α-Mutual Information. We evaluate the privacy-utility trade-off of customized models and compare them to mutual information as the baseline measure. Finally, we analyze the consequence of an attacker's access to side information about private data and witness that adapting the privacy measure results in a more refined model than the state-of-the-art in terms of resiliency against side information.Tunable privacy measure, Arimoto's α-mutual information, adversarial learning, data sharing, privacy-utility trade-off. § INTRODUCTION Despite technological advancements and increased data generation, the need for data sharing has risen dramatically. However, data sharing always carries the risk of security breaches, with unauthorized entities trying to extract private information from shared data. Notably, the privacy problem in data sharing differs from the data security issue. In data release privacy, any authorized receiver of the data is considered an anticipated invader. Therefore, data security methods are unprofitable in data sharing <cit.>. As data sharing has progressed with advancements in speed, feasibility, etc., addressing various privacy issues has become more challenging than ever before. For instance,many social media applications require individuals to share private data online <cit.>. Hence, various privacy-protecting techniques for data sharing have been studied for years. Differential Privacy (DP) has received significant attention in this area, especially due to its low computational overhead <cit.>. Although DP prioritizes data privacy, it may not be ideal for applications where preserving the utility of shared data is crucial, as it does not specifically address other data properties <cit.>. §.§ Related work Considering the mentioned shortcoming of DP, information-theoretical approaches are widely applied in privacy protection, offering improved privacy-utility trade-offs (PUTs) <cit.>. Mutual Information (MI) has been popular in information-theoretical privacy measures. In <cit.>, an MI-based method is designed to prevent leakage of private features in representation learning methods on graphs. 
Besides, efforts are made to extract the most from the patterns in data to determine convenient metrics. One such example is demonstrated in <cit.>, where Directed Information (DI) is selected as the privacy measure. Nonetheless, in order to achieve flexible PUTs, the necessity of discovering a tunable privacy measure has been perceived. An adjustable metric allows for tailoring the privacy definition to specific use cases, enhancing performance, and demonstrating the capacity of information-theoretical strategies.Configurable measures of information leakage based on Rényi entropy <cit.> and Arimoto α-mutual information (α-MI) <cit.> are designed in the literature. Suggesting tunable metrics in <cit.>, authors introduce α-leakage as a measure of information disclosure that quantifies how much an adversary can infer a specific private attribute of the data. The definitions have been extended in <cit.>. To the best of our knowledge, the closest study to our work is presented in <cit.>, which employs α-loss (equivalent to using Arimoto α-MI as privacy measure) within an adversarial learning framework for data sharing.However, they formulated the problem as a minimax game with constraints, which has been demonstrated to be unstable with regard to loss in deep learning <cit.>. Moreover, the influence of the α parameter in such a tunable measure and its impact on improving PUT has not been investigated.Furthermore, one may assess privacy-preserving data-sharing systems regarding their effectiveness in a scenario where a malicious attacker has access to sort of side information (SI) correlated with private data. The authors in <cit.> analyze this problem. However, we show that customizing the privacy measure can lead to more reliable models than in <cit.> in terms of PUT. Notably, the robustness of Maximal α-leakage to arbitrary SI is studied in <cit.>; however, their conclusion is drawn based on the availability of ground truth private attributes. Although this notion is reasonable in the training phase of a framework, it is unrealistic to imagine that private features are known in the testing stage. Moreover, the assumption of having all attributes of the original data as private features might not be practical in many applications. §.§ ContributionsIn this paper, a tunable privacy measure has been adopted on distortion-based privacy-preserving data release models. The main contributions of this work are as follows:* To the best of our knowledge, this is the first time that the impacts of the α parameter are practically investigated in α-MI as a measure of privacy in the privacy-preserving data release.* The impact of the tunable privacy measure is illustrated in the presence of SI that is correlated with the sensitive information of shareable data. * We suggest a framework that uses a stable strategy to address the optimization problem of privacy-preserving data release as opposed to a minimax formulation <cit.>.* Our framework is customized for several datasets with different structures to examine the advantages of using an adaptable privacy measure. §.§ Notation and conventions A sequence of random variables (X_1, X_2,…, X_T) is shown as X^T. A sample batch from X^T is written as {x^(b)T}^B_b=1. The probability distribution of X_t is p_X_t, and the conditional distribution of X_t given Y_t is shown as p_X_t|Y_t. The conditional distribution X^T given Y^T would be p_X^T|Y^T. A Markov chain composed of X,Y, and Z is written as XYZ. 
The expectation of a function f with respect to p_X is denoted as E[f(X)]. The Kullback-Leibler (KL) divergence between two distributions p_1 and p_2 is represented as KL(p_1||p_2).§ PROBLEM FORMULATION AND TRAINING OBJECTIVE Let variables Y^T denote the users' useful data. This data may be metered power consumption of houses over T time slots, or any non-sequence data (T=1) such as patients' health conditions. Private variables X^T represent the sensitive information that a particular user is unwilling to share in public, e.g., people's identities in the data collected by social media. We also define observed variables W^T as the variables that would normally be released or shared. We assume that W^T is not independent of X^T. The private information X^T may be present, together with the Y^T, in W^T, or X^T is correlated with Y^T and W^T is formed of Y^T. Therefore, for a particular task, sensitive information should be eliminated from valuable data before sharing the data publicly. In this scenario, a privacy-preserving system is of interest. This system contains a releaser that creates a new representation of Y^T, denoted as Z^T, generated by distorting Y^T to follow two objectives simultaneously: the releaser aims to hide private data from any possible attacker interested in inferring them from released data; at the same time, it tries to preserve useful data, as much as possible, based on specific criteria. Therefore, measures are needed to quantify the released data's privacy performance and utility achievement (i.e., preserving useful attributes). Moreover, harmful attackers could have access to some supplementary (side) information, S, that can assist them in attaining higher inference performance. To quantify the distortion between Z^T and Y^T, we define a distortion measure as 𝒟(Z^T,Y^T) ≜𝔼[d(Z^T,Y^T)], where d:ℝ^T×ℝ^T →ℝ can be any distortion metric on ℝ^T. Here, Arimoto'sα-Mutual Information is proposed for the privacy measure in the releaser as I^A_α(X; Z) = H_α(X) - H^A_α(X|Z) <cit.>, where H_α(X) is the Rényi entropy of order α∈ (0,1) ∪ (1,∞) <cit.> written as:H_α(X) = α/1-αlog(∑_xp^α_X(x))^1/α = α/1-αlog‖ p_X‖_α, and H^A_α(X|Z) is Arimoto's conditional α-entropy defined as: H^A_α(X|Z) = α/1-αlog∑_zp_Z(z)(∑_xp^α_X|Z(x|z))^1/α = α/1-αlog𝔼_Z[‖p_X|Z‖_α].Consequently, H^A_α(X|Z) is generalized to H^A_α(X^T|Z^T) as: H^A_α(X^T|Z^T)= α/1-αlog∑_z^Tp_Z^T(z^T)(∑_x^Tp^α_X^T|Z^T(x^T|z^T))^1/α =α/1-αlog𝔼_Z^T[‖p_X^T|Z^T‖_α], where p_X^T|Z^T = ∏_t=1^Tp_X_t|X^t-1,Z^T. Finally, the problem of finding the optimal releaser is formulated as follows:inf_p_Z^T|W^T I^A_α(X^T;Z^T|S) subject to𝒟(Z^T,Y^T) ≤ϵ,where ϵ≥ 0 is a parameter to force the releaser to control the trade-off between privacy and utility. In addition, the SI term is considered in I^A_α(X^T;Z^T|S) = H^A_α(X^T|S) - H^A_α(X^T|Z^T,S) by substituting all p_.|Z^T by p_.|Z^T,S. Given the fact that H^A_α(X^T|S) cannot be changed by the releaser, i.e., it does not depend on p_Z^T|W^T, we re-formulate (<ref>) as follows: inf_p_Z^T|W^T -1/TH^A_α(X^T|Z^T,S), s.t.𝒟(Z^T,Y^T) ≤ϵ,where the term 1/T is included for normalization purposes. Finding the solution for the optimization problem in (<ref>) is not generally tractable. In addition, tackling this problem requires the availability of p_X^T|Z^T,S. Hence, the privacy-preserving framework approximates p_X^T|Z^T,S by using an estimator network, called adversary. 
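For reference, the quantities in (<ref>)-(<ref>) can be transcribed directly for discrete distributions. The following Python sketch is purely illustrative and assumes α ≠ 1 and that every value of Z has non-zero probability:

import numpy as np

def renyi_entropy(p, alpha):
    """H_alpha(X) = alpha/(1 - alpha) * log ||p_X||_alpha."""
    return alpha / (1 - alpha) * np.log(np.sum(p ** alpha) ** (1 / alpha))

def arimoto_cond_entropy(p_xz, alpha):
    """H^A_alpha(X|Z) computed from a joint pmf p_xz[x, z]."""
    p_z = p_xz.sum(axis=0)
    p_x_given_z = p_xz / p_z                                  # column z holds p_{X|Z=z}
    norms = np.sum(p_x_given_z ** alpha, axis=0) ** (1 / alpha)
    return alpha / (1 - alpha) * np.log(np.sum(p_z * norms))

def arimoto_mi(p_xz, alpha):
    """I^A_alpha(X;Z) = H_alpha(X) - H^A_alpha(X|Z)."""
    return renyi_entropy(p_xz.sum(axis=1), alpha) - arimoto_cond_entropy(p_xz, alpha)

p_xz = np.array([[0.30, 0.10],
                 [0.05, 0.55]])                               # toy joint pmf of (X, Z)
print(arimoto_mi(p_xz, alpha=2.0))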
The problem of estimating p_X^T|Z^T,S by p_X̂^T|Z^T,S can be optimally tackled by minimizing the KL divergence between the distributions written as <cit.>: inf_p_X̂^T|Z^T,SKL(p_X^T|Z^T,S||p_X̂^T|Z^T,S)=inf_p_X̂^T|Z^T,S𝔼[logp_X^T|Z^T,S/p_X̂^T|Z^T,S],where the expectation is with respect to p_X^T,Z^T,S. Note that solving (<ref>) is equivalent to minimizing the negative log-likelihood 𝔼[- log p_X̂^T|Z^T,S(X^T|Z^T,S) ]. Furthermore, we try to simplify (<ref>) by decomposing the probability distribution p_X̂^T|Z^T,S, leveraged from the natural characteristics of the defined privacy-preserving problem. We denote the releaser and the adversary as ℛ_θ and 𝒜_ϕ, which are controlled by their parameters θ and ϕ, respectively. For t∈{1, 2, …, T}, the releaser ℛ_θ takes observed variables, W^t, as its input and generates released variables represented as Z_t. Using Z^t, the adversary 𝒜_ϕ aims to estimate sensitive information x_t by approximating p_X_t|Z^t,S at each time t as p_X̂_t|Z^t,S and then solving x̂^*_t=x̂_t ∈𝒳argmaxp_X̂_t|Z^t,S(x̂_t|z^t,s). This means, while the goal of 𝒜_ϕ is to estimate X^T as precisely as possible based on Z^T, ℛ_θ aims to trade-off two different objectives. On the one hand, ℛ_θ intends to minimize the amount of information leaked about X^T from Z^T, which will mislead the adversary. On the other hand, ℛ_θ tries to keep Z^T as close as possible to Y^T by limiting the distortion between Z^T and Y^T below a designated value. Based on these assumptions about releaser and adversary, we can conclude that the Markov chains (X^t,Y^t)W^tZ^t X̂^t and X̂^t-1 Z^t,S X̂^t hold for t ∈{1, 2, …, T}. Therefore, p_X̂^T|Z^T,S is re-formulated as: p_X̂^T|Z^T,S(x̂^T|z^T,s) = ∏_t=1^Tp_X̂_t|X̂^t-1,Z^T,S(x̂_t|x̂^t-1,z^T,s) = ∏_t=1^Tp_X̂_t|Z^T,S(x̂_t|z^T,s)(i)=∏_t=1^Tp_X̂_t|Z^t,S(x̂_t|z^t,s).where (i) corresponds to the causality constraints that the problem may have. Hence, The adversary's objective in (<ref>) can be achieved by addressing the optimization problem written as: inf_p_X̂_t|Z^t,S1/T∑^T_t=1𝔼[- log p_X̂_t|Z^t,S(X_t|Z^t,S) ],and the optimization problem of the releaser, defined in (<ref>), is converted to a practical formulation as: inf_p_Z^T|W^T -1/TH^A_α(X̂^T|Z^T,S), s.t.𝒟(Z^T,Y^T) ≤ϵ,where the distribution on (<ref>) is used to compute H^A_α(X̂^T|Z^T,S). This optimization problem can be tackled with the availability of p_X̂^T|Z^T,S, the adversary's output.Based on (<ref>), 𝒜_ϕ tries to maximize the quantified information between X^T and Z^T by minimizing KL distance between p_X̂_t|Z^t and p_X_t|Z^t. On the other hand, ℛ_θ aims to minimize α-MI in (<ref>). Arimoto's α-MI is known to be a generalization for MI to measure the information shared between random variables <cit.>. This suggests that the adversary's goals and the releaser's are in opposite directions. Thus, addressing (<ref>) and (<ref>) can be done by a stable adversarial training procedure that uses the general modeling framework illustrated in Fig. <ref>. Two loss functions ℒ_ℛ(.) and ℒ_𝒜(.) are determined for ℛ_θ and 𝒜_ϕ, respectively. Using (<ref>), ℒ_𝒜(ϕ) is written as: ℒ_𝒜(ϕ) := 1/T∑^T_t=1𝔼[- log p_X̂_t|Z^t, S(X_t|Z^t,S) ]. As previously mentioned, (<ref>) represents cross-entropy loss which establishes a classifier that generates p_X̂^T|Z^T,S. The releaser's loss function is derived from (<ref>) as: ℒ_ℛ(θ,ϕ,ω,α,λ):= 𝒟(Z^T,Y^T) -λ/TH^A_α(X̂^T|Z^T,S).The presence of S in (<ref>) and (<ref>) depends on the availability of SI. 
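A direct PyTorch-style rendering of the releaser objective may help fix ideas. The sketch below is illustrative only: the distortion is a plain squared-error placeholder, the adversary's soft outputs are assumed to have shape (batch, T, classes), and the product over t follows the factorisation in (<ref>). For long sequences the product over t is best accumulated in the log domain; the plain form is kept here for readability:

import torch

def arimoto_entropy_term(probs, alpha):
    """Batch estimate of H^A_alpha(Xhat^T | Z^T, S) from adversary soft outputs of shape (B, T, classes)."""
    norms = probs.pow(alpha).sum(dim=-1).pow(1.0 / alpha)      # ||p_{Xhat_t|z^t,s}||_alpha, shape (B, T)
    per_seq = norms.prod(dim=-1)                               # factorised conditional: product over t
    return alpha / (1.0 - alpha) * torch.log(per_seq.mean())   # expectation over Z^T as a batch mean

def releaser_loss(z, y, probs, alpha, lam):
    """L_R = E[d(Z, Y)] - (lambda / T) * H^A_alpha(Xhat^T | Z^T, S)."""
    T = y.shape[1]
    distortion = torch.mean((z - y) ** 2)                      # placeholder distortion; swap per application
    return distortion - (lam / T) * arimoto_entropy_term(probs, alpha)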
Adjusting λ≥0 in (<ref>) is equivalent to changing ϵ in (<ref>).Considering the extreme cases, λ=0 leads the releaser to the full utility regime, meaning that ℛ_θ acts independently from 𝒜_ϕ, hence provides no privacy guarantees. For large λ values, the term -λ/TH^A_α(X̂^T|Z^T,S) will be dominant in ℒ_ℛ(.). Thus, the releaser tends to achieve full privacy, i.e., random guessing performance, by confusing the adversary totally. Moreover, ω in (<ref>) shows the parameters that the utility network could have. Depending on the application, this network should have a complex structure or should only evaluate the specified distortion measure. Moreover, this network may generate Ĉ, to which, in some applications, the distortion metric compares specific features of the useful data.§ FRAMEWORK AND IMPLEMENTATION §.§ Privacy-preserving framework for image dataConvolutional neural networks (CNNs) excel in various machine learning tasks, particularly with image datasets. Hence, in this application, we decided to build the networks shown in Fig. <ref> by using CNN modules and well-known structures related to each network's task. As illustrated in Fig. <ref>, Y^T is considered as the releaser's input (i.e., W^T = Y^T). An encoder-decoder approach has been employed to design ℛ_θ, while the adversary and utility network are image classifiers. In this work, we choose a dataset of hand-written digits where the digits' thickness is considered as private information. Thus, 𝒜_ϕ tries to determine whether an image shows a thick or a thin digit. On the other hand, ℛ_θ aims to generate an image with the same dimensions as Y^T while minimizing the distortion between the generated and original image.The distortion measure typically quantifies the difference between the network's input and output, either on an element-wise basis or through a higher-level approach. For example, while the thickness of digits is the sensitive information that we try to hide, the ability to classify the digits is of interest. Here, an element-wise measure cannot guarantee digit classification. We consider that the distortion measure consists of two parts: (i) a p-norm metric that quantifies the distortion happened to the input variables, written as d_p(Z^T,Y^T) ≜1/T‖ Z^T-Y^T ‖_p for p ≥ 1; (ii) the loss function of the utility network, which is a categorical cross-entropy loss for an image classifier that recognizes digits. The first part of the distortion measure ensures that the released image will have element-wise similarity with the input, while the second part promotes the similarity in terms of the results of image classification.In this application, We consider p=1 as the first part of the distortion measure, and the second part comes from the utility network, 𝒞_ω, written as: d_𝒞(Z^T,Y^T) ≜ℒ_𝒞(ω)= 𝔼[- log p_Ĉ|Z^T(C|Z^T,S) ],where C represents particular utility features (e.g., the labels of hand-written digits images), and Ĉ is the utility network's output. Finally, the distortion measure is derived as: d_IMG(Z^T,Y^T) = d_𝒞(Z^T,Y^T) + 1/T‖ Z^T-Y^T ‖_1 For the model shown in Fig. <ref>, 𝒜_ϕ has a cross-entropy loss, defined in (<ref>), and the loss function of ℛ_θ is formulated as: ℒ_ℛ(θ,ϕ,ω,α,λ):=𝔼{d_IMG(Z^T,Y^T)}-λ/TH^A_α(X̂^T|Z^T,S). The training process for the data releaser model of this work has multiple stages. At every training iteration, 𝒜_ϕ is trained k times, while ℛ_θ is only trained once per iteration.The choice of k is crucial as it affects the adversary's strength <cit.>. 
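A schematic of one training iteration, re-using the loss helpers sketched above, is given below. The releaser, adversary, data loader, and the values of k, α, and λ are assumed to be defined elsewhere; the learning rates are illustrative; side information, the utility network, and its output Ĉ are omitted for brevity. The exact procedure is the one spelled out in the paper's Algorithm.

```python
import torch
import torch.nn.functional as F
# adversary_loss and releaser_loss are the helpers from the previous sketch.

opt_r = torch.optim.Adam(releaser.parameters(), lr=1e-4)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-4)

for w, y, x in loader:                          # observed, useful, and private data
    # k adversary updates with the releaser frozen.
    for _ in range(k):
        z = releaser(w).detach()
        loss_a = adversary_loss(adversary(z), x)
        opt_a.zero_grad(); loss_a.backward(); opt_a.step()

    # One releaser update against the current adversary.
    z = releaser(w)
    probs = F.softmax(adversary(z), dim=-1)
    distortion = (z - y).abs().mean()           # placeholder p-norm distortion
    loss_r = releaser_loss(distortion, probs, alpha, lam)
    opt_r.zero_grad(); loss_r.backward(); opt_r.step()
```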
Algorithm <ref> provides a detailed training procedure. After the training phase, a distinct network, called attacker, is considered for the test phase. This network is trained with the released data and will test the privacy achieved by the model. This network plays the role of a real-world attacker, which has an approximately similar structure to 𝒜_ϕ and tries to infer sensitive information from released data.§.§ Privacy-preserving framework for time-series data Our second example deals with time-series data. The most important feature of time series is the correlation of data points over time. In order to extract this feature, we use Long Short-Term Memory (LSTM) modules to build releaser and adversary networks of the general model shown in Fig. <ref>. We form W^T by concatenating Y^T and X^T. This study focuses on time-series applications where utility is defined as the similarity between released data and actual observations, such as smart grid applications <cit.>. Hence, a p-norm distortion is sufficient to compare the input and output of the releaser. Therefore, we choose d_TS(Z^T,Y^T) =1/T‖ Z^T-Y^T ‖_p=2 as the distortion measure in this application, and there is no need to have a complex utility network. Finally, the loss function for 𝒜_ϕ is the same as (<ref>), and, for releaser ℛ_θ, it becomes:ℒ_ℛ(θ,ϕ,α,λ):= 𝔼{d_TS(Z^T,Y^T)} - λ/TH^A_α(X̂^T|Z^T,S). The training procedure of this framework is available by adjusting Algorithm <ref> based on time-series properties. Similar to section <ref>, a distinct attacker evaluates the privacy attained by the model. § RESULTS AND DISCUSSION§.§ Datasets description §.§.§ Annotated MNIST (AMNIST) datasetWe use the well-known MNIST dataset <cit.> and modify it by adding a label of thickness level to images using the method provided in <cit.>. In <cit.>, authors have defined mathematical formulas with different parameters for each digit. Therefore, the digit thickness in a particular sample image can be classified into thick, normal, or thin. We customized the provided code in <cit.> to label all training and testing images, and we excluded those images with a normal thickness for computational simplicity. We ended up with 28,568 trainingand 4,681 testing samples. §.§.§ ECO dataset The Electricity Consumption and Occupancy (ECO) dataset <cit.>contains power consumption data of 6 households and their ground truth occupancy information. Since, in this work, the consumption data and occupancy labels are re-sampled at every hour, ECO would be considered a time-series dataset with T=24. Here, the power consumption represents the utility feature Y_t, while the household occupancy is the private information X_t. We partitioned data into 8980 training and 2245 testing time-series sequences. Moreover, week's day and month are possible SI available in ECO that can be concatenated to training and testing samples. §.§ MetricsWe choose Normalized Error (NE), i.e., Normalized Mean Squared Error (NMSE), to evaluate the distortion between Y^T and Z^T. We employ balanced accuracy to compare models' performance. This metric is used instead of accuracy to mitigate the unbalanced data effects. Henceforward, we use the word accuracy to refer to balanced accuracy, for brevity. §.§ Tunable privacy measure for AMNIST datasetThe effects of the proposed tunable privacy measure are evaluated by performing an experiment using the modified AMNIST dataset and the proposed framework for image data. 
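Throughout the experiments that follow, utility is reported through NE and privacy through the attacker's balanced accuracy. The two metrics can be written compactly as below; this is a small sketch rather than the evaluation code used in the paper, and the normalization chosen for NE is one standard convention.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score

def normalized_error(y, z):
    # NMSE between the useful data Y^T and the released data Z^T.
    return np.sum((y - z) ** 2) / np.sum(y ** 2)

def privacy_leakage(x_true, x_pred):
    # Balanced accuracy of the attacker; 0.5 corresponds to random guessing
    # (full privacy) for a binary private attribute such as thickness or occupancy.
    return balanced_accuracy_score(x_true, x_pred)
```

With these metrics fixed, the effect of the order α can be examined.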
We selected α=1 (equivalent to MI) and explored the intervals (0,1) and (1,∞) to examine the model's performance by varying α. The outcomes revealed that models with α<1 exhibit analogous behavior, with only insignificant differences. The same phenomenon holds for α>1.Thus, the following values are considered for the experiments in this work: α=0.9, 1, 3.The details of the layers used in the framework are demonstrated in Fig. <ref>. The hyperparameters in Algorithm <ref> are set to B=256 and k=3. Here, full privacy is achieved when the attacker cannot guess better than 50% since we consider that the thickness has two possible values. The attacker's structure is similar to the adversary model described in Fig. <ref>. In Fig. <ref>, the PUT for digits' thickness inference is shown.Note that by using the original images, Y^T, a model can classify the digits and predict their thicknesses with 97.25% and 91.50% accuracy, respectively. As illustrated in Fig. <ref>, for all models, the classification accuracy is almost preserved where the attacker's accuracy is around 60%. Moreover, the classification accuracy is significantly high around the first point in the full privacy region (FPR). This result ensures achieving the essential utility goal, which is the ability to classify the released digits with high accuracy.The behavior around edge cases is almost the same for all models, except that the model with α=0.9 reaches the FPR with lower classification accuracy than others. The result shows the power of α=3 while transitioning from full utility region (FTR) to the middle of the curve by reducing attacker's accuracy the most, with a slight change in digit classification. However, in the (FTR), α=1 suggest better classification accuracy. Notably, the model with α=0.9 is very sensitive to small changes of λ in (<ref>), which is necessary for generating points of the PUT curve. Due to this sensitivity, finding a point in the middle of the curve requires more effort than other α values.Fig. <ref> shows examples of the released images for selected models. For each sub-figure, we select a point in the middle of the PUT and the first point in the FPR. The results corresponding to middle of the PUT illustrate that by losing a small quantity of digit classification accuracy, the attacker's accuracy is dropped by about 30%. Interestingly, each model's distortion has occurred differently in the full privacy examples.These results indicate no best value of α for all desired operating points on the PUT. Therefore, α gives a degree of freedom to find a model that works best in a desired region. We design another attacker which has gained access to the algorithm of <cit.>. We refer to it as the "Thickness-Computing Attacker (TCA)." Using the algorithm, TCA can label digits as thick, normal, or thin. Since we excluded digits with normal thickness from the experiment's data, the attacker has three options for labeling digits for which the algorithm predicts normal thickness:to assign 1) random, 2) thin, or 3) thick labels. We considered all cases for each model and reported their maximum accuracy. Some results of TCA are compared with the deep attacker (DA) in Table <ref>. DA is stronger than TCA around the middle of the PUT; however, TCA achieves better accuracy near the FPR. Interestingly, this large gap happens for models with α=3 and 1 when the attackers decide to convert normal labels to thin. However, for α=0.9, converting to thick labels is the selected approach. 
Since the gap is negligible in this case, we conclude that the model with α=0.9 is more robust against different attackers than others. §.§ Tunable privacy measure for ECO dataset Moving forward with ECO dataset, α=3,1,0.9 are selected based on the discussed reason in section <ref>. The general framework illustrated in Fig. <ref> is customized based on section <ref>. In addition, an independent uniformly distributed (over [0,1]) noise U^T is integrated into W^T beside Y^T and X^T to randomize Z^T. It is seen to be helpful in practical applications where an adversarial framework's input consists of noise <cit.>. The releaser network consists of 4 LSTM layers, each with 64 cells, and the adversary network is formed of 3 LSTM layers, each with 32 cells. The distinct attacker has the same structure as the adversary. The hyperparameters indicated in Algorithm <ref> are set to B=128, k=4. As discussed in section <ref>, household occupancy is private information. Thus, the corresponding attack accuracy of the FPR is 50%. Notably, an attacker can predict household occupancy from the actual power consumption with more than 90% accuracy.The PUT for house occupancy inference is available in Fig. <ref>a. In <cit.>, a similar experiment is investigated where MI is the privacy measure. Here, around FTR and FPR, all models accomplish almost the same trade-off. However, the model with α=1 performs best in the middle of the PUT. Fig. <ref> shows 7-day-long samples from modified power consumption signals. In this figure, two operating points are selected for each α. The models corresponding to the left side of Fig. <ref> preserve most of the original data (NE is less than 0.26 in the worst case), while the attacker's accuracy is dropped by more than 26%. In addition, different distortion patterns can be realized on the right side of the figure for different α values. Another experiment is designed with ECO for a situation where the SI discussed in section <ref> is available to an attacker. Fig. <ref>b shows the PUT for selected α values. Similar work is conducted in <cit.>, where MI is the privacy measure. In <cit.>, an attacker trained and tested with only SI achieves an accuracy of 57.8%, concluding that the attacker is not completely confused even by signals with large distortion. In Fig. <ref>b, the model with α=1 attains the attacker's accuracy of 57.8% on large NE, while surprisingly, the model with α=0.9 maintains the accuracy of 55.2%. In addition, The baseline for the model with α=3 is 56.8%. These results suggest better performance than <cit.> in preserving sensitive information of a highly distorted signal when SI is available to the attacker. § CONCLUSIONThis research proposes a general privacy-preserving data-sharing model that allows for tunable privacy measures, particularly leveraging α-Mutual Information. A key finding of the research is the influential role of the α parameter, which can be adjusted to balance privacy and utility in various scenarios. Experimental tests, using an image dataset of handwritten digits and a time-series sequence of power consumption measurements, revealed that tuning α allows for tailored data-sharing frameworks, with signals released per specific features of interest. The research also considered scenarios where attackers have access to correlated SI. The results indicated that fine-tuning of the privacy measure should consider not just the PUT, but also the model's resilience against SI. 
Lastly, in addition to the generic attacker of the framework, an arithmetic attacker was considered for the AMNIST dataset used in this work. The results highlighted that certain models (with different privacy measures) may be more or less successful at concealing sensitive information, depending on whether the attacker knows the pattern of the private information in the actual data.
[Source: arXiv:2310.18241v1, "α-Mutual Information: A Tunable Privacy Measure for Privacy Protection in Data Sharing", MirHamed Jafarzadeh Asl, Mohammadhadi Shateri, and Fabrice Labeau; primary category cs.LG (also cs.CR, cs.IT, eess.SP, math.IT); published 2023-10-27.]
[email protected]: Department of Physics, K.N. Toosi University of Technology, P.O. Box 15875-4416, Tehran, Iran; PDAT Laboratory, Department of Physics, K. N. Toosi University of Technology, P.O. Box 15875-4416, Tehran, Iran
[email protected]: Department of Physics, K.N. Toosi University of Technology, P.O. Box 15875-4416, Tehran, Iran; PDAT Laboratory, Department of Physics, K. N. Toosi University of Technology, P.O. Box 15875-4416, Tehran, Iran
[email protected]: Department of Physics, Sharif University of Technology, P. O. Box 11155-9161, Tehran, Iran; PDAT Laboratory, Department of Physics, K. N. Toosi University of Technology, P.O. Box 15875-4416, Tehran, Iran
[email protected]: Department of Physics, K.N. Toosi University of Technology, P.O. Box 15875-4416, Tehran, Iran; PDAT Laboratory, Department of Physics, K. N. Toosi University of Technology, P.O. Box 15875-4416, Tehran, Iran; School of Physics, Institute for Research in Fundamental Sciences (IPM), P. O. Box 19395-5531, Tehran, Iran

Simulations of many kinds, including hydrodynamic, N-body, numerical, and semi-numerical simulations, have been used to reconstruct the history of the intergalactic medium. One of these is 21SSD, which focuses specifically on the epoch of reionization. This simulation deepens our understanding of the physics of the intergalactic medium by varying free parameters related to Wouthuysen–Field coupling fluctuations and to X-ray and Lyman line transfer in the intergalactic medium, and by providing plots of the power spectrum, brightness temperature, and other quantities at different redshifts. However, because many physical phenomena play significant roles in this epoch, simulations of the intergalactic medium are usually extremely complex, time-consuming, and demanding of powerful hardware. In this work, using the Support Vector Regression algorithm and the 21SSD simulation datasets, we train a machine to learn how the brightness temperature varies with redshift for different values of the astrophysical free parameters. We first trained the machine on the results of the 21SSD simulation; the machine was then able to predict the brightness temperature as a function of redshift with very high accuracy for other parameter values within the explored intervals. Although we have used this algorithm to estimate the brightness temperature, it can readily be applied to other problems in cosmology and astrophysics. With its help, it is possible to save time and obtain results with accuracy comparable to complex simulations, even with ordinary hardware.

§ INTRODUCTION
Almost a decade after the prediction of the 21-cm line as a tool for probing neutral hydrogen atoms in space, the first detection of this line using a radio telescope marked a turning point in astrophysics by opening new avenues for exploring the cosmos <cit.>. The 21-cm line corresponds to the wavelength of the hyperfine transition between the singlet and triplet states of neutral hydrogen atoms in their ground state. Although this transition is difficult to observe in laboratory conditions, it is detectable in astronomical phenomena because neutral hydrogen exists in sufficient quantities <cit.>. The 21-cm data can be used to map large-scale structures (LSS) and test cosmological theories.
While the cosmic microwave background (CMB) data have provided valuable information about the early and late universe, many questions have remained unanswered in z ∼ 1100 to z ∼ 3. The Thompson scattering effects on the CMB photons and observations of high-redshift quasars <cit.>, galaxies <cit.>, and gamma-ray burst <cit.> shed light on the evolution of the universe during the Epoch of Reionization (EoR). In order to comprehend the formation of the first stars and galaxies, we need to acquire 21-cm data from the end of the recombination to the EoR <cit.>.It was incredibly challenging to distinguish the neutral hydrogen signal from the galactic synchrotron radiation and extragalactic foreground signals, which can be up to 5 times brighter <cit.>, and it was detected for the first time by Experiment to Detect the Global EoR Signature (EDGES;). For this reason, facilities currently in operation, including LOw Frequency ARray (LOFAR;), Murchison Widefield Array (MWA;), Giant Metrewave Radio Telescope (GMRT;), and Precision Array for Probing the Epoch of Reionization (PAPER;), focus on gaining statistical data, such as power spectra, with the advantage of much better signal-to-noise ratios. Since each of the mentioned interferometers has an upper limit in respective redshifts, future research requires new evolving methods for obtaining astrophysical data.Considering that the 21-cm signal depends on the cosmological and astrophysical processes that are even unknown in some cases, one of the challenges in this field is the construction of mock samples. Many endeavors have been made to simulate the expected signals that image the EoR over a redshift range between 27 and 6. These signals depend on various astrophysical parameters, such as gas density, ionization fraction, kinetic temperature, and local Lyα flux. Cosmological simulations are essential for understanding the EoR and help to guide observational efforts to detect the 21-cm signal from the early universe. They have also provided valuable insights into the formation and evolution of cosmic structures and the physical processes that govern their development. For instance, 21cmFAST and SimFast21 simulations have been generated for computing the 21-cm signal without using 3D radiative simulations <cit.>. <cit.> employed a combination of the Markov Chain Monte Carlo (MCMC) method and the 21cmFAST semi-numerical code to conduct a large-scale simulation of Cosmic Dawn and the EoR.<cit.> demonstrated productive efforts by using the MCMC framework to include an emulator that enables Bayesian parameter constraints across a range of model parameters. <cit.> used neural networks to address the issue of the cost of using Bayesian parameters and simulate the power spectrum. <cit.> introduced Cosmological Reionization and Deep Learning (CRADLE), an autoencoder convolutional neural network (CNNs) that used two-dimensional maps of the star and gas density and described 3D maps of the intergalactic medium (IGM) in the EoR. Moreover, the 21SSD simulation is a database of possible signals publicly available at 21ssd.obspm.fr21ssd.obspm.fr that uses semi-numerical simulations in 3D space to avoid irrelevant calculations. The 21SSD simulation covers the EoR between redshifts 15 and 6. 
The simulation incorporates radiative hydrodynamics and accounts for fluctuations in heating and Wouthuysen-Field coupling through the utilization of X-ray and Lyman line transfer<cit.>.In this paper, we present the general equations governing the physics of hydrogens in the IGM in <ref>, discuss the significance of the EoR in <ref>, and explain the 21SSD simulation and its free parameters in <ref>. The machine learning (ML) methods used in this paper, specifically multilayer perceptron (MLP) and support vector regression (SVR) algorithms, are described in <ref>. <ref> focuses on evaluating the accuracy and correctness of the results obtained from the MLP and SVR methods. The analysis also involves comparing the outputs generated by these methods with the 21SSD outcomes. In <ref>, the advantages of employing these algorithms for estimating crucial astrophysical and cosmological parameters of the IGM are discussed.§ GENERAL EQUATIONS In this section, we review fundamental definitions and general equations that are essential for understanding the physics of the 21-cm signal. The 21-cm signal is the spectral lines of neutral hydrogen resulting from the interaction of the proton magnetic moment and the ground state electron. T_s is the spin temperature of the gas that is defined as the ratio between the number density of the singlet and triplet states of the hydrogen in the ground state which follows the Boltzmann equationn_1/n_0=g_1/g_0exp (-T_*/T_s),where T_*≡ hc/k_Bλ_21cm = 0.068 K, h is the dimensionless Hubble constant, c represents the speed of light, k_B≃ 1.38×10^-23 J.K^-1 is the Boltzmann constant, and (g_1/g_0) = 3 is the ratio of the statistical degeneracy factors of the two states <cit.>. The optical depth of the IGM for the 21-cm signal incorporates various physical parameters, including the neutral hydrogen fraction, the hydrogen comoving number density, the spin temperature, and the peculiar velocity along the line of sight, which is given byτ=3c^3ħ A_10x_HIn_H/16k_BT_sν_0^21/H(z)+(1+z)∂_rv_r,where ħ is the reduced Planck constant, A_10 = 2.85× 10^-15 s^-1 is the Einstein spontaneous emission rate coefficient <cit.>, x_HI denotes the neutral fraction of hydrogen, n_H = 8.6×10^-6Ω_bh^2(1 + z)^3 cm^-3 is the hydrogen comoving number density with Ω_b=0.049 which is the fractional energy content of baryonic matter <cit.>, ν_0 = 1420.4 MHz is the rest-frame frequency of the 21-cm signal, H(z) is the Hubble parameter, and ∂_rv_r is the comoving gradient of the peculiar velocity along the line of sight <cit.>.The brightness temperature, T_b, is commonly used to measure the apparent temperature of an object based on its observed specific intensity. In the case of the CMB, it closely resembles the spectrum of a blackbody and exhibits an approximate temperature of 2.73(1 + z) K. Since it is substantially greater than T_*, the Rayleigh-Jeans approximation can also be used to estimate it. As a result, we may write the brightness temperature at z=ν_0/ν_obs-1 as follows <cit.>T_b=T_s(1-e^-τ)+T_CMBe^-τ,where T_CMB is the CMB temperature. In Eq.<ref>, the first term corresponds to the component originating from the 21-cm signal, capturing its inherent characteristics. This term quantifies the contribution of the neutral hydrogen distribution and its corresponding temperature fluctuations. On the other hand, the second term accounts for the influence of the CMB radiation on the observed brightness temperature. 
It represents the impact of the CMB, which pervades the universe and contributes to the overall measured intensity alongside the 21-cm signal. The difference in brightness temperature between the 21-cm line and the CMB is an important quantity for studying the EoR, as it provides a way to map the distribution of neutral hydrogen in the IGM at different redshifts. A common way to express the difference in brightness temperature between the 21-cm line and the CMB is <cit.>δ T_b=T_s-T_CMB/1+z(1+e^-τ)=27x_HI(1-T_CMB/T_s)(0.15/Ω_m1+z/10)^1/2(Ω_bh/0.023)mK,where Ω_m=0.315 denotes the fractional energy content of baryonic matter. Three factors affect the spin temperature of the 21-cm line: i) CMB photons absorption, ii) Collisions with other hydrogen atoms, free electrons, and protons, and iii) UV photons scattering. The spin temperature is affected by an equilibrium between excitation and de-excitation as n_1(C_10+P_10+A_10+B_10I_CMB)=n_0(C_01+P_01+A_01+B_01I_CMB),where B_10 and B_01=3B_10 are the Einstein coefficients, C_01 and P_01 (C_10 and P_10) are excitation (de-excitation) rates from collisions and UV scattering. Additionally, I_CMB=2ν_21^2k_BT_CMB/c^2 is the energy flux of CMB photons, and A_01=2hν_21^3c^-2B_10 indicates spontaneous emission rate. According to Eq.<ref>, the spin temperature can be rewritten as Eq.<ref> based on the mentioned three effective processes T_s^-1=T_γ^-1+x_αT_α^-1+x_cT_k^-1/1+x_α+x_c. Here, T_γ refers to the temperature of the background photons, which is commonly determined by the CMB, so T_γ=T_CMB. T_k and T_α are the kinetic temperature of the gas and Lyα temperature. Ultimately, x_α≡ P_10T_*/A_10T_γ and x_c≡ C_10T_*/A_10T_γ are coupling coefficients for the Lyα and collisions scattering. When collisions and Lyα are efficient, the spin temperature is independent of the CMB and follows one of the T_k and T_α. More specifically, particle collisions take one of the following forms: hydrogen-hydrogen, hydrogen-electron, and hydrogen-proton. Therefore, x_c can be calculated as follows x_c=x_c^HH+x_c^eH+x_c^pH=T_*/A_10T_γ[k_10^HH(T_k)n_H+k_10^eH(T_k)n_e+k_10^pH(T_k)n_p], where k_10^HH, k_10^eH, and k_10^pH are rate coefficients of the three collision forms. n_p and n_e are the number densities of proton and electron in the IGM.§ THE EPOCH OF REIONIZATION The emergence of the first galaxies several hundred million years after the Big Bang marked a critical juncture in the evolution of the universe. This transition involved a shift from a simple and homogeneous state to a more complex and structured one <cit.>. During the formation of the first stars, composed primarily of pure hydrogen and helium without heavy elements (zero metallicity), and the subsequent formation of second-generation stars containing small amounts of heavy elements, ultraviolet photons ionized the neutral hydrogen gas in the IGM <cit.>. While it is widely accepted that star-forming galaxies and accreting black holes were the primary sources of ionizing radiation for hydrogen reionization, there is still an ongoing debate regarding the relative contributions of these sources over time. However, the James Webb Space Telescope (JWST) along with other experiments featuring unique sensitivity, large wavelength coverage, and advanced spectroscopic and imaging capabilities may soon provide answers to these uncertainties <cit.>. Ultraviolet radiation has the capability to ionize the hydrogen gas in the local vicinity of a radiation source, resulting in the creation of ionized bubbles within the gas. 
As these ionized bubbles merge, they encompass more radiation sources, leading to the accelerated expansion of ionization fronts that delineate their boundaries <cit.>. This process triggers a chain reaction of bubble expansion until the mean free path of Lyman continuum (LyC) photons is controlled by high-density regions referred to as Lyman Limit Systems (LLSs), which can retain a substantial fraction of their hydrogen in a neutral state. The mean free path of LyC can be determined from the absorption spectra of high-redshift quasars, and existing data point to a rapid decrease in the mean free path at redshift z∼6. However, observations suggest that this scenario is not fully realized in our universe. Whilst many simulations employ straightforward scaling relationships linking the total mass of Population III stars to the mass of the dark matter halo hosting them <cit.>, other potential sources of LyC radiation during reionization, such as dark matter annihilation <cit.>, primordial globular clusters (dense clusters of stars formed in the early universe) <cit.>, and mini- or micro-quasars <cit.> are also under consideration <cit.>. Studies involving quasar absorption spectra and Thomson scattering optical depth from CMB observations reveal that this transition occurs between z∼ 6 and 10 <cit.>. More precisely, the Planck Collaboration conducted recent observations on the temperature and polarization angular power spectra of the CMB. Their findings estimated τ to be 0.054±0.007, indicating that the midpoint of reionization occurred at z≈7.7± 0.6 <cit.>. Nevertheless, some studies of quasars, such as <cit.> imply that the end of the EoR may be at z<6. Notwithstanding, significant progress made in understanding cosmic reionization over the past decade <cit.>, myriad questions remain unresolved regarding the underlying processes, geometry, and history of the EoR <cit.>, and the Square Kilometre Array (SKA) will play a crucial role in unraveling these mysteries <cit.>. Furthermore, researchers have utilized anisotropies in the CMB temperature map through the Sunyaev-Zel'dovich (SZ) effect to impose constraints on the EoR <cit.>. Recent advancements in ground-based CMB observatories, such as the South Pole Telescope (SPT;) and the Atacama Cosmology Telescope (ACT;) have contributed to these efforts. § THE 21SSD SIMULATION The 21SSD simulation dataset comprises the brightness temperature of 45 models that cover a selected 3D parameter space. These models were simulated at high and low resolutions, with a spatial resolution of 1024^3 elements, using the LICORICE code, which couples with hydrodynamics <cit.>. The 21SSD simulation considers heating and Wouthuysen-Field coupling by incorporating X-ray and Lyman line transfer processes into the model. In this simulation, the initial conditions for all models are the same, and varying parameters are the Lyman band emissivity, f_α, the X-ray emissivity, f_X, and the hard-to-soft X-ray, r_H/S. f_α: In <ref>, the correlation between the brightness temperature and the spin temperature was discussed. f_α is employed to quantify the efficiency of Lyman band emission. In accordance with Eq.<ref>, the Lyα scattering coefficient, x_α, is identified as one of the parameters that remarkably influence the spin temperature. To calculate the local Lyα flux, radiative transfer calculations are conducted, disregarding the impacts of metal enrichment in the Lyman lines. 
A constant luminosity for the Lyman band is assumed, and the simulation calculates the energy emitted within a specific frequency range by a stellar population with different masses of stars is expressed as <cit.> E(ν_1,ν_2)=∫_ν_1^ν_2∫_M_min^M_maxζ(M)L(M,ν)T_life(M)dMdν, where ζ(M) represents the Initial Mass Function (IMF), T_life(M) is the lifetime of a star with mass M, and L(M,ν) is the energy emission per time and frequency. Afterward, f_α is defined as the ratio of the energy effectively emitted in the simulation, E_eff, and the theoretical energy emission in the corresponding frequency range f_α=E_eff(ν_α,ν_limit)/E(ν_α,ν_limit), where ν_α and ν_limit areLyα frequency and Lyman limit frequency. Increasing the value of f_α leads to enhanced prominence of peaks and troughs in the differential brightness temperature, but this change does not considerably impact the computational time required for the calculations. <cit.>. The determination of the brightness temperature in this research involves assigning the values of 0.5, 1, and 2 to f_α. f_X: After recombination, T_k decreases steadily over time. However, X-ray heating eventually initiates a reversal in this trend, leading to a shift from the absorption to the emission regimes for the 21-cm signal. Due to the uncertainties surrounding the characteristics of high-redshift objects, it is not possible to accurately describe the high-redshift X-ray background with confidence. Therefore, it is reasonable to assume a correlation between the X-ray luminosity and the locally observed star formation rate (SFR). The effectiveness of X-ray generation during the EoR is typically characterized as <cit.> L_X=3.4×10^40f_X(SFR/1M_⊙.yr^-1) erg.s^-1. An equivalent SFR is calculated for each newly generated source within the simulation by utilizing the source mass and lifetime. The X-ray luminosity of the source is then determined using a consistent f_X value throughout the entire simulation, employing the aforementioned formula. To account for the uncertainty of the environment, f_X is considered as 0.1, 0.3, 1, 3, and 10. r_H/S: X-ray heating sources mainly consist of binaries and active galactic nucleus (AGNs). X-ray heating from binaries results in more uniform heating that reduces 21-cm fluctuations during the heating transition. Hard X-rays have a high-energy and short-wavelength spectrum and receive special treatment in simulations due to their larger mean free path. On the other hand, soft X-rays from AGNs typically produce localized energy emissions and spatial variations. These soft X-rays have a spectral index of 1.6 and a range of energies between 0.1 and 2 keV <cit.>. f_X^XRB refers to the X-ray energy emitted by X-ray binaries, while f_X^AGN denotes the X-ray energy emitted by AGNs. Then, the fraction of hard X-ray emissions, r_H/S, is determined by the ratio of X-ray emissivities originating from X-ray binaries and AGNs r_H/S=f_X^XRB/f_X^XRB+f_X^AGN. In this study, r_H/S is varied across the values of 0, 0.5, and 1. Moreover, we ensure uniformity in the initial conditions and the constants employed in the calculations across all simulations. The simulated environment for each model encompasses a spatial volume 200 h^-1Mpc cube and contains 1024^3 particles, half of which are baryons, and the rest are dark matter.The Lyα flux is estimated in a post-processing step by using a fixed grid with 512^3 cells which emit 4×10^8 photons once per 10^7 years. 
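To summarize the parameter space just described before turning to the numerical setup, the small sketch below evaluates the X-ray luminosity scaling and the hard-to-soft fraction defined above and lists the grid of values explored in 21SSD; the function and variable names are ours.

```python
def xray_luminosity(f_X, sfr_msun_per_yr):
    # L_X = 3.4e40 * f_X * (SFR / 1 M_sun yr^-1)   [erg/s]
    return 3.4e40 * f_X * sfr_msun_per_yr

def hard_to_soft_fraction(f_X_xrb, f_X_agn):
    # r_H/S: share of the X-ray emissivity contributed by X-ray binaries.
    return f_X_xrb / (f_X_xrb + f_X_agn)

# Parameter grid of the 45 models (3 x 5 x 3 combinations):
f_alpha_values = [0.5, 1.0, 2.0]
f_X_values     = [0.1, 0.3, 1.0, 3.0, 10.0]
r_HS_values    = [0.0, 0.5, 1.0]

print(xray_luminosity(f_X=1.0, sfr_msun_per_yr=1.0))   # 3.4e+40 erg/s
```

Every combination of these values defines one of the 45 simulated models.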
The simulation begins at z = 100 and uses second-order Lagrangian perturbation theory (2LPT) by the MUlti-Scale Initial Conditions package (Music code). The numerical integration of the system employs a dynamical timestep of 1 Myr, except for the case where the scale factor a is less than 0.03, a smaller timestep of 0.33 Myr is used. Additionally, the gravitational softening length applied in the simulation is 5 ckpc <cit.>.To preserve the compatibility of simulation results with observations, some astrophysical parameters have been fixed in the 21SSD simulation. For instance, within the LICORICE framework, the number of particles remains constant, and all baryonic particles possess uniform masses. Consequently, the process of star formation occurs exclusively within these baryonic particles. Once a particle exceeds a specific particular density, 100 times the mean baryonic density for the 21SSD simulation, it initiates star formation in accordance with the Kennicutt–Schmidt law where the exponent is set to one: dρ_s/dt = c_effρ_g. Here, ρ_g is the gas density, ρ_s is the star density, and c_eff denotes an efficiency parameter <cit.>. In this case, c_eff can be interpreted as the reciprocal of the gas conversion time scale, which, in the simulations, is predetermined as 2 Gyr. Each star fraction emits min(10^5,510^7/nb of sources) packets of UV ionizing photons and an equal quantity of X-ray photons in each dynamical time step. IMF in the mass ranges 1.6 M_⊙ and 120 M_⊙ is used in MC sampling to determine the frequency of UV photons <cit.>.In this way, after the end of the simulation, ∼ 15×10^9 photon packets propagate. The fraction of ionizing UV photons escape for the unresolved structures is set to 0.2. The constants utilized in the simulation are H_0=67.8kms^-1, Ω_m=0.308, Ω_Λ=0.692, Ω_b=0.0484, σ_8=0.8149, n_s=0.968, and τ=0.0692 <cit.>. For each model, the resulting dataset consists of 135 brightness temperature lightcones, generated in the x, y, or z direction, at high and SKA resolutions ranging from approximatelyΔθ∼ 0.3' to 3'-8' respectively. This dataset offers enhanced resolution and improved physical accuracy in comparison to similar databases, like the one presented by <cit.>. § METHODS ML methods have enabled cosmologists to analyze data more effectively in recent years. These methods involve creating and applying algorithms that discover patterns in data <cit.>. ML algorithms can identify patterns in high-dimensional spaces. Thus, simulations can act as a laboratory for finding patterns that are not obvious and enhancing our comprehension of the physics governing the phenomena being studied <cit.>. §.§ Multilayer Perceptron Neural networks are artificial systems that mimic the biological brain and the nervous system <cit.>. They learn abstract features by using activation functions that apply non-linear transformations <cit.>. A neuron is a basic unit of neural networks that can process information. As a matter of fact, it receives weighted information from other neurons through synaptic connections and generates an output by applying an activation function to the weighted sum of those input signals <cit.>. The tanh function is an activation function that has been used in neural networks and has the zero-centric property <cit.>. A feedforward neural network, or an MLP algorithm, usually consists of several fully connected layers that have non-linear activation functions <cit.>. 
MLPs can approximate continuous non-linear functions universally and can learn from input-output patterns. They also have complex network architectures with multiple inputs and outputs. MLPs employ feedforward and recurrent networks <cit.>. The hidden layer is a group of neurons that uses an activation function and acts as an intermediate layer between the input layer and the output layer <cit.>. The neuron outputs from the hidden layer are derived through computation as follows <cit.> y_j(p)=f(∑_i=1^nx_i(p).ω_ij-θ_j), where y_j represents the output of neuron j, f is the sigmoid activation function, n is the number of inputs for a particular neuron j in the hidden layer, x_i is the input value of the i-th neuron in the previous layer, ω_ij is the weight associated with each input (for each neuron), and θ_j is the bias term for neuron j. The final output of the network is <cit.> y_k(p)=f(∑_i=1^mx_jk(p).ω_jk-θ_k),where m is the number of inputs for the neuron k from the output layer. We use Adam <cit.>, a solver for weight optimization, which is a gradient-based optimization algorithm for stochastic objective functions, based on adaptive estimates of lower-order moments. The method is easy to implement, fast, memory-efficient, invariant to diagonal rescaling of the gradients, and suitable for large problems in terms of data and parameters. The method also works well for non-stationary objectives and problems with very noisy and sparse gradients.The hyperparameters (HP) are easy to understand and usually need little tuning. We update the initial parameter vector where our parameter vector at the given step does not converge <cit.> ω_t←ω_t-1-α.m̂_̂t̂/(√(v̂_̂t̂)+ϵ), where α is stepsize which defines the learning rate, m̂_̂t̂ and v̂_̂t̂ are the first and second order moments of gradient, and ϵ is a small constant (usually 10^-7) used to avoid division by zero. §.§ Support Vector Regression Recently, Support Vector Machines (SVM) have emerged from the framework of statistical learning theory <cit.>. Unlike neural networks, SVM does not suffer from local optima, and the training is fairly easy. It also performs well on high dimensional data, and it has explicit control over the trade-off between classifier complexity and error <cit.>. The principle of maximal margin, dual theory, and kernel trick are the three core elements that make SVMs successful <cit.>. In the maximum margin method, the solution only depends on the support vectors, and the supporting planes are pushed apart until they bump into the support vectors <cit.>. The standard SVM is a maximum margin classifier that has a hyperplane as its decision function, which maximally separates samples from different classes. The formulation of the SVM is based on this principle <cit.>.SVR is a technique that uses kernels to estimate a function from an infinite-dimensional function space, based on a finite number of observations at specific points. This is a generalization of the classification problem <cit.>. Given that x_i∈ℝ^n is a feature vector and z_i∈ℝ^1 is the corresponding output, (x_i,z_i),...,(x_l,z_l) is the set of training points. The standard SVR is ω,b,ξ,ξ^*min1/2ω^Tω+C∑_i=1^lξ_i+C∑_i=1^lξ^*_i, with the following restrictions {[ ω^Tϕ(x_i)+b-y_i≤ϵ+ξ_i; y_i-ω^Tϕ(x_i)-b≤ϵ+ξ^*_i;ξ_i,ξ^*_i≥0i=1,...,l ]., where b is the scalar bias term, ξ_i and ξ^*_i are slack variables to determine the deviation of training, and C is the regularization hyperparameter that determines the trade-off between minimizing the error and maximizing the margin. 
ϵ>0 is a hyperparameter that determines the width of the ϵ-insensitive tube and ϕ(x_i) maps x_i into a higher-dimensional space <cit.>. Indeed, C has a crucial role in balancing the smoothness against the allowance for deviations beyond a certain threshold ϵ. This is achieved by utilizing a loss function known as |ξ|_ϵ-insensitive, which is defined by <cit.> |ξ|_ϵ := {[ 0if |ξ|≤ϵ; |ξ|-ϵ otherwise ].. Due to the possible high dimensionality of the vector variable ω, usually we solve the following dual problem <cit.>.Solving the duality of the problem above is advantageous for several reasons. First, even when the primal is not convex, the dual problem will still have a unique optimal solution. Second, the optimal function value of the primal form has a lower bound, which is the objective function value. Finally, an optimization problem in the dual form can be solved more quickly and effectively because it may have much fewer variables than the primal form <cit.>. The dual problem is α,α^*min1/2(α-α^*)^TQ(α-α^*)+ϵ∑_i=1^l(α_i+α^*_i)+∑_i=1^ly_i(α_i-α^*_i), satisfying the following requirements {[e^T(α-α^*)=0; 0≤α_i,α_i^*≤ Ci=1,...,l ]., where α_i and α^*_i are vectors of Lagrange multipliers associated with the training samples and support vectors. e^T is the transpose of a vector e, which is a vector of ones and Q_ij=K(x_i,x_j)≡ϕ(x_i)^Tϕ(x_j) is the kernel function. Consequently, the prediction isy_i=i∈ SV∑(α-α^*)K(x_i,x)+b,where (α_i-α^*_i) are dual coefficients, n is the number of training examples, x is the input to be predicted, and x_i is the i-th training example. A kernel function can be any function that complies with Mercer’s theorem <cit.>. In fact, the kernel is used to build a model in a feature space with higher dimensions, without needing to specify the mapping function from the input space to the feature space. This way, the input space can be linearly separable in the feature space for non-linear separable cases. Furthermore, the hyperplane can efficiently serve as a decision boundary in this setting <cit.>.For the radial basis function (RBF) case, the Gaussian kernel is a common choice, while the spread parameter, σ, in the Gaussian kernel is essential to the generalization performance of SVMs. The good features of the Gaussian kernel make it the most prevalent one <cit.>. K(x,y)=exp(-∥ x-y ∥^2/(2σ^2))§.§ Grid search HPs can be highly configured for ML algorithms. Grid search (GS) is the technique of splitting the range of each HP into discrete values and trying every combination of values. Numeric and integer HP values are usually uniformly spaced in their box constraints. The number of distinct values per HP is the resolution of the grid. For categorical HPs, either a subset or all possible values are chosen <cit.>. § RESULTGiven that the 21SSD simulation requires powerful hardware resources due to its computational intensity, our main goal is to utilize ML techniques to accelerate the results from the IGM simulations without sacrificing precision. We aim to obtain brightness temperature data within significantly reduced timeframes by adjusting the free parameters, f_α, f_X, and r_H/S. To achieve this, we employ the MLP and SVR algorithms. MLP performs reasonably well in characterizing the IGM for z<7 and z>10 where f_X=0.1. Despite successfully identifying the decreasing trend in brightness temperature between redshifts 7 and 10, the MLP algorithm, as depicted in Fig.<ref>, can not precisely estimate the minimum point on the graph for f_α = 1. 
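Before examining the individual parameter regimes in detail, the following sketch illustrates one plausible way to set up the two regressors and the grid search described above with scikit-learn. The hidden-layer sizes, hyperparameter grids, and variable names are illustrative choices rather than the configuration used in the paper; only the tanh activation and the Adam solver are stated explicitly in the text.

```python
from sklearn.neural_network import MLPRegressor
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

# X_train holds (f_alpha, f_X, r_HS, z) tuples and y_train the corresponding
# brightness temperature in mK, extracted from the 21SSD outputs.
mlp = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                   solver="adam", max_iter=5000)

svr = GridSearchCV(
    SVR(kernel="rbf"),                       # Gaussian kernel as defined above
    param_grid={"C": [1, 10, 100],
                "epsilon": [0.1, 0.5, 1.0],
                "gamma": ["scale", 0.1, 1.0]},
    cv=5,
)

mlp.fit(X_train, y_train)
svr.fit(X_train, y_train)
t_b_pred = svr.predict(X_test)               # predicted brightness temperature curves
```

Even with hyperparameters tuned in this way, a small fully connected regressor of this kind tends to smooth over sharp features such as the absorption trough, consistent with the MLP behaviour just described.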
In addition, an inconsistency exists in the MLP results for f_X=0.1, a peak at z ∼ 7, which contradicts the outcomes of the 21SSD simulation. This incompatibility is more detectable in Fig.<ref>, where f_α is set to 0.5 for r_H/S= 0 and 0.5. In spite of properly identifying the overall trend at f_X = 0.1, these differences undermine the credibility of MLP for accurate purposes. The MLP algorithm encounters challenges in Fig.<ref> as it seeks to learn the minimum and maximum values (z<10) observed at f_X=0.3 across different values of f_α. Nevertheless, MLP continues to capture the trend and effectively describes the brightness temperature for z>10.MLP outputs show a reduction in the correctness as f_X=1, even at z>10. The majority of the plots in Fig.<ref> display irregular fluctuations that deviate from the anticipated trend. Although these fluctuations are relatively small (around 10 mK), their physical interpretation cannot be inferred due to any known physical processes. Furthermore, a distinct peak is detected at z∼ 7.5 for f_α= 0.5 and 2, which is inconsistent with the determinations of the 21SSD simulation. The problem of the downward trend has improved at f_X=1 (excluding f_α = 0.5 and r_H/S= 0.5 and 1), and the MLP has learned to predict the appropriate decline. Therefore, as f_X increases the MLP outputs are less closely matched to the outputs of the 21SSD simulation. Upon reaching f_X=3, the fluctuations at redshifts greater than 10 exhibit an intensification that cannot be overlooked. The 21SSD simulation reveals a peak at z∼ 7.5, which is not effectively captured by the MLP algorithm. The outcomes depicted in Fig.<ref> and Fig.<ref> highlight the limitations of the MLP algorithm as a descriptor for the brightness temperature in scenarios where a substantial abundance of X-rays is produced within the IGM. For f_X=10 and f_α= 0.5, pronounced variations are observed in the outputs of the MLP, resulting in suboptimal performance across all redshifts. As f_α increases, the issue of tracking the minimum at z∼ 10 is largely mitigated, but significant fluctuations persist at redshifts higher than 10 and lower than 8. Consequently, the MLP algorithm is better for environments with low X-rays. In the comprehensive evaluation, the MLP method attained an accuracy rate of 95.7%.Following the observed shortcomings of MLP, our investigation proceeds to evaluate the results obtained from SVR. The outputs generated by this algorithm effectively characterize the IGM and display a high level of dependability. However, to reduce the risk of overfitting,the careful selection of training data becomes critically important. It is crucial to realize that the brightness temperature inherently includes fluctuations that should not be eliminated during the ML process.The machine is trained to capture the trends associated with distinct IGM conditions and varying redshifts by employing the SVR algorithm. The outcomes of the SVR exhibit a noteworthy agreement between the obtained results and the outputs derived from the 21SSD simulations, providing strong evidence of a high level of accuracy at 97.2%. The findings from the two methods are thoroughly analyzed in the following. Even for high f_X values, the SVR algorithm maintains the reliability of the predictions by tracking the valleys and peaks. The accuracy of SVR is obvious in Fig.<ref>, where it successfully captures the descending slope of T_b, followed by an increase with high precision. 
The SVR presents elegance in capturing the characteristics of z∼ 9 and adeptly tracks minor fluctuations at this specific redshift. Besides, the slight rise in brightness temperature following the descent in Fig.<ref> denotes a difference of approximately 0.5 to 2 mK, which can be negligible.As the X-ray intensity increases, the SVR algorithm displays remarkable credibility in contrast to MLP. It perfectly matches the data points, demonstrating that it is trustworthy. It seems that the SVR properly follows the 5mK increase in z∼7.5, as seen in Fig.<ref>, which is another great attainment. In Fig.<ref>, SVR avoids the issues observed in MLP at f_X=10. The outputs are not affected by the difficulties in capturing the depression at redshifts between 8 to 10 for f_α=0.5 or estimating exact values for all f_α.It becomes evident that SVR is a suitable algorithm for efficiently describing the IGM. The reliability of the data used for both SVR and the comparative algorithms is ensured by sourcing it from simulation 21SSD, as discussed in <ref>. Moreover, the outputs generated from SVR denote a high level of authenticity, enabling its application to larger simulations.Currently, the integration of ML techniques across various fields has shown tremendous efficiency gains in terms of time and cost savings in conducting simulations. This study highlights the capability of SVR as an alternative method to overcome the computational burden associated with time-consuming and resource-optimized simulations. Future research endeavors can explore the implementation of ML approaches, such as SVR, to predict other key parameters of the IGM, including the neutral hydrogen fraction and spin temperature, at different redshifts.To account for potential small fluctuations or sharp variations in the data, SVR smoothed-fit is applied to all plots. It is important to note that the 21SSD simulation may have inherent limitations, which could result in these irregularities. Nevertheless, it is helpful to explore plausible physical justifications for these deviations.§ CONCLUSION The EoR holds meaningful importance not only for studying the behavior of the neutral hydrogen fraction, but also for understanding the role of dark matter in the evolution of the universe. Baryons, including neutral hydrogens, act as tracers of the underlying dark matter distribution. Therefore, studying the neutral hydrogen fraction during the EoR provides valuable insights into the interplay between dark matter and baryonic matter. By investigating the ionization state of the IGM and the impact of ionizing photons on neutral hydrogens, researchers can further unravel the intricate relationship between dark matter, baryons, and the EoR.To comprehend the EoR, sophisticated simulations have been employed, and the 21SSD simulation emerging as one of the foremost options. The 21SSD simulation focuses specifically on the EoR and offers advantages, such as the consideration of essential parameters (f_X, f_α, and r_H/S) and the generation of brightness temperature in informative plots at different redshifts. Moreover, another aspect of the 21SSD simulation is the findings are readily available to the public.To overcome the challenges posed by the complexity, duration, and resource requirements of previous simulations of the IGM, this study explores ML techniques. By training ML models on existing simulation data, these models can learn the underlying trends and relationships within the IGM. 
Specifically, the MLP and SVR algorithms are employed as alternative methods to alleviate the computational burden. By leveraging the datasets produced by the 21SSD simulation, the SVR algorithm showcases exceptional precision in predicting brightness temperature changes concerning redshift for different free parameter values.The comparison of MLP and SVR algorithms, in characterizing the IGM based on the 21SSD simulation reveals notable findings. MLP excels for z<7 and z>10, but struggles to examine the minimum values at f_X=0.1. Inconsistencies arise, including a peak around redshift 7, which contradicts the expected results from the 21SSD simulation. On the other hand, as illustrated in Fig.<ref>, the SVR algorithm indicates impressive precision in capturing the low points, subsequent increases, and minor fluctuations.Fig.<ref> underlines the issue which MLP confronts in assessing the extremums at f_X=0.3 for redshifts below 10, but MLP remains credible in approximating the general trend and describing the brightness temperature for higher redshifts. SVR performs exceptionally well in accurately tracking the data points.The MLP performance diminishes at f_X=1, particularly for redshifts above 10. In Fig.<ref>, the plots exhibit irregular fluctuations, deviating from the expected trend, with relatively uninterpretable variations. A significant peak is observed at z∼ 7.5, contradicting the 21SSD simulation for most cases with this X-ray fraction; however, the MLP algorithm shows improvement in capturing the concave shape. The SVR algorithm maintains overall consistency with the 21SSD simulation, apart from a considerable discrepancy occurring in the case of f_α=2 and r_H/S=0.5 and 1 at a redshift of approximately 8, which is promptly rectified.While f_X = 3, MLP fails to capture a peak seen in the simulation at z∼ 7.5, indicating limitations in estimating brightness temperature in high X-ray abundance scenarios, whereas SVR performs reliably and identifies peaks and tracks changes with exactitude, as shown in Fig.<ref>.Lastly, in Fig.<ref>, the MLP algorithm confronts challenges when dealing with f_X=10. It fails to capture the decline in values at a redshift of approximately 10 for cases with f_α=0.5. Similarly, for larger values of f_α, the MLP faces difficulties in representing precise values at z<8 and z>10. The SVR consistently provides authentic results, showcasing its potential as a valuable tool in enhancing our understanding of the IGM through its ability to learn brightness temperature trends.Incorporating ML techniques, such as SVR, into the IGM simulations, including the 21SSD simulation, offers a promising path toward increasing our comprehension of the evolution of various cosmological and astrophysical parameters in the EoR. These approaches enable researchers to explore the interplay of various parameters and obtain accurate predictions with reduced computational requirements. By harnessing the power of ML, time and cost savings can be realized, making it easier to conduct simulations and obtain results with extraordinary accuracy, even with ordinary hardware.The efficacy of SVR as a valuable tool in overcoming the computational challenges of complex simulations is evident. In this study, SVR has been employed to examine the brightness temperature. 
However, in a broader context, leveraging training data from extensive simulations, such as N-body simulations, that capture the evolution of parameters can enable the algorithm to be trained on a limited number of samples and to estimate the remaining states. Furthermore, it enables the prediction of other parameters of the IGM and even of different physical phenomena. Future work can delve deeper into the application of ML algorithms in cosmology and astrophysics, leveraging their potential to save time and resources, and holds great promise for further advancing our knowledge of the universe.
http://arxiv.org/abs/2310.17789v1
{ "authors": [ "S. Mobina Hosseini", "Mahsa Berahman", "Seyed Sajad Tabasi", "Javad T. Firouzjaee" ], "categories": [ "astro-ph.CO" ], "primary_category": "astro-ph.CO", "published": "20231026213714", "title": "SVR Algorithm as a Tool for More Optimal Intergalactic Medium Simulation in the Epoch of Reionization" }
Probabilistic Multi-product Trading in Sequential Intraday and Frequency-Regulation Markets Saeed Nordin0000-0003-1823-9653, Student Member, IEEE, Abolfazl Khodadadi0000-0003-4791-8380, Student Member, IEEE, Priyanka Shinde0000-0002-4854-976X, Student Member, IEEE, Evelin Blom0000-0002-8905-3277, Student Member, IEEE, Mohammad Reza Hesamzadeh0000-0002-9998-9773, Senior Member, IEEE, and Lennart Söder0000-0002-8189-2420, Senior Member, IEEE January 14, 2024 ========================================================== Large language models (LLMs) show amazing proficiency and fluency in the use of language. Does this mean that they have also acquired insightful linguistic knowledge about the language, to an extent that they can serve as an “expert linguistic annotator”? In this paper, we examine the successes and limitations of the GPT-3, ChatGPT, and GPT-4 models in analysis of sentence meaning structure, focusing on the Abstract Meaning Representation (AMR; ) parsing formalism, which provides rich graphical representations of sentence meaning structure while abstracting away from surface forms. We compare models' analysis of this semantic structure across two settings: 1) direct production of AMR parses based on zero- and few-shot prompts, and 2) indirect partial reconstruction of AMR via metalinguistic natural language queries (e.g., “Identify the primary event of this sentence, and the predicate corresponding to that event.”). Across these settings, we find that models can reliably reproduce the basic format of AMR, and can often capture core event, argument, and modifier structure—however, model outputs are prone to frequent and major errors, and holistic analysis of parse acceptability shows that even with few-shot demonstrations, models have virtually 0% success in producing fully accurate parses. Eliciting natural language responses produces similar patterns of errors. Overall, our findings indicate that these models out-of-the-box can capture aspects of semantic structure, but there remain key limitations in their ability to support fully accurate semantic analyses or parses. *Equal contribution § INTRODUCTION LLMs in recent years have revolutionized artificial intelligence, showing advanced proficiency and fluency in the use of language, and appearing to possess high levels of expertise and analytical capability across a wide variety of specialized domains. Observation of these capabilities has raised important questions about the extent, robustness, and limitations of the knowledge and analysis abilities of these models in specialized domains. In this paper we zero in on the domain of linguistic analysis: these models have shown great proficiency with language, but here we ask not just how well the models use language, but how much they know about language. Specifically, we explore to what extent models are able to analyze the meaning of a sentence and reproduce the structure of that meaning. Most directly this allows us to conduct a status check on the level of expertise that LLMs have acquired in linguistic analysis, and to assess to what extent linguistic structural annotation can be done reliably by LLMs out of the box.
At a higher level, this investigation has potential implications for the robustness of models' abstract representation of meaning in language inputs. We intend for this to serve as a brief status report with respect to model capabilities in this domain. For examining models' capability in analysing linguistic meaning structure, we focus on a case study of the Abstract Meaning Representation formalism (AMR) <cit.>. AMR is designed to capture the abstract structure of sentence meaning, disentangling this structure from surface forms of language. It formalizes semantic structure of a sentence into directed graphs that capture “who did what to whom” as well as detailed abstract information on how aspects of the sentence meaning modify and relate to one another. In our explorations, we examine models' ability to produce the structural meaning information contained in AMR parses across three settings: zero-shot generation of AMR parses, few-shot generation of AMR parses, and zero-shot generation of natural language descriptions. We test GPT-3, ChatGPT, and GPT-4. Our results show that all models are able to reproduce the basic AMR format and structure, and they can in principle produce correct outputs at any level of AMR—with greatest reliability on core event-argument triplets corresponding to subject-verb-object structures. However, models are prone to frequent and major errors in capturing the semantic structure (see Fig. <ref>, <ref>), and when we assess the parses for overall acceptability, we see virtually 0% success rate across models. Comparisons between patterns in parse and natural language output settings suggest that these limitations are not simply artifacts of the output type, and may reflect more fundamental limitations in models' capacity for semantic analysis. Overall, our findings indicate that although models can execute impressively formatted and partially correct semantic parse outputs, the prevalence of errors outside of basic components is such that these models cannot be used reliably out-of-the-box for generating this type of structured abstract meaning information, and more involved techniques are needed to adapt these models effectively for such purposes. § RELATED WORK A large body of work has examined various aspects of syntactic and semantic capabilities in language models <cit.>, indicating that LLMs show strong knowledge of syntactic structure, while semantic capabilities are more mixed. Nonetheless, LLMs have also been used for few-shot semantic parsing with some success. In particular, <cit.> and <cit.> find that few-shot learning in GPT-3 and Codex produces semantic parses that outperform baselines with comparable training sizes. These semantic parsing datasets, which focus on producing database queries in particular domains, are less complex and domain-general than AMR, but the results suggest that LLMs should contain aspects of the knowledge needed to analyze semantic structure. As for AMR, pre-trained transformer models have helped to advance the state of the art in AMR parsing, with recent AMR parsers building on the foundation of models like BART <cit.>. This indicates that pre-trained models may also pick up on representational capabilities relevant for supporting AMR.
Though these prior works are suggestive that LLMs and pre-trained transformers capture certain aspects of linguistic structure, it is not clear from existing results how detailed or reliable LLMs' ability to analyze meaning structure may be—formalisms used for prior few-shot semantic parsing are simpler and more domain-specific than AMR, and the supervised fine-tuning of BART for AMR parsing obscures the details of what original knowledge may have been contained in the pre-trained model. To achieve a clearer picture of LLMs' ability to analyze rich semantic structure, we directly examine pre-trained models' ability to produce AMR information, and we do so across a number of potentially productive zero- and few-shot settings for maximum insight about model capabilities. We also prioritize fine-grained, manual analysis of models' accuracies at multiple levels of AMR information, in order to provide more detailed insights into model capabilities. § EVALUATION §.§ Evaluation Framework Standard metrics for evaluating AMR include Smatch <cit.> and SemBLEU <cit.>, which provide holistic analysis of node matches between generated and gold AMR parses.[Additional work on AMR similarity metrics includes <cit.>; <cit.>; <cit.>.] While these metrics are well-suited for large-scale quantitative evaluation, they are not adequate for detailed understanding of models' strengths and limitations in capturing AMR information. For more detailed insight, we lay out a novel fine-grained evaluation framework. We define two levels: Level 1 criteria to capture basic format, highest-level nodes, and overall semantic accuracy; and Level 2 criteria for assessing accuracy with arguments and modifiers. Table <ref> outlines the analysis criteria (further details in <ref>). §.§ Data To ensure maximum flexibility and expert-level accuracy in assessment of the above criteria, we carry out our evaluation manually. In choosing to use fine-grained manual evaluation, we necessarily accept a tradeoff with respect to scale and generalization guarantees, as expert manual evaluation is time-consuming. We are not the first to accept this tradeoff: due to cost and increasing complexity of LLM outputs, there is increasing precedent for analyzing model capabilities even on samples of single outputs <cit.>. Here we seek a balance between this kind of single-instance analysis and larger-scale coarse-grained evaluation, via fine-grained manual analysis on a small exploratory test set sampled across several domains. To this end, we compile a sample of 30 AMR gold-parsed sentences, randomly selecting 10 sentences of varying character lengths from the gold AMR annotated AMR 3.0 (AMR3; ) and Little Prince[<https://amr.isi.edu/download.html>] (LPP) datasets, and also sampling and annotating 10 sentences from websites published in 2023 (2023), to test the possibility of memorization of public AMR annotations available in pre-training. This is a large enough sample to gain some insight into trends at our different levels of analysis, and future studies at larger scale can provide further insight into patterns that emerge in larger samples. See <ref> for more details. § ZERO-SHOT AMR PARSING Given findings of superior zero-shot performance in a wide variety of domains <cit.>, we begin by testing models' zero-shot capability for generating AMR graphs directly.
Instructions and examples for AMR annotation are widely available online, so it is reasonable to imagine that models may learn to do this task zero-shot as well. We input to the model the target sentence and the simple instruction “Provide an AMR (Abstract Meaning Representation) parse for this sentence.” For ChatGPT and GPT-4, we include the system message “You are an expert linguistic annotator.” Our goal here is to use clear and fair prompts that allow us to assess model capabilities and limitations. We do not do elaborate prompt engineering, but take the stance that if a prompt is sufficiently clear, then a failure to perform the task is simply a failure—reliance on particular prompt phrasing or structure is an indication of model brittleness. Since the zero-shot parses fail often even at the basic levels, we limit to Level 1 analysis for this setting. Results are in the top segment of Table <ref>. The outputs indicate clearly that these LLMs have been exposed to AMR parse annotations in their pre-training, and have managed to learn surface characteristics of AMR structure: we see that for all models a majority of outputs (>70%) show basic AMR format despite the absence of any demonstration in the prompt. Comparison between publicly annotated sentences and newly annotated sentences from 2023 (<ref>) also shows no noteworthy difference, indicating that output quality is not reliant on presence of test AMR annotations in pre-training. However, beyond the basic form, all models show frequent and substantial errors in the parsing. The parses identify the correct top node only 20-40% of the time, reflecting routine failures to incorporate clausal and discourse relations that AMR often captures in the top node—and even with the more relaxed criterion of identifying the main event relation, LLMs succeed in only about half of the parses (40%). When we consider the viability of the full structure as an appropriate meaning representation of the sentence, none of the models produce any fully acceptable AMR parse. These results suggest that out-of-the-box, zero-shot LLM capabilities are limited primarily to mimicking surface format of AMR representations, with understanding of the linguistic functions and phenomena being beyond their zero-shot capabilities. § PARSING WITH FEW-SHOT DEMONSTRATIONS Given that zero-shot parsing shows non-trivial limitations across all models, we next test how parses improve with few-shot demonstrations of AMR parsing. We use the same instructions (and, in ChatGPT and GPT-4, system message), but we now include the specification "I will first show some examples." followed by five example sentences with corresponding AMR parses, selected based on similarity to the test sentence.[Sentence similarity is computed via Universal Sentence Encoder embeddings <cit.>.] For few-shot parses, we apply both Level 1 (Table <ref>) and Level 2 (Table <ref>) evaluations. We see that all parses now conform to AMR format, and the main event is now correct a majority of the time. Identification of the top node has also improved, with correct outputs in approximately half of cases. However, the percentage of overall parse acceptability has made virtually no improvement, despite the explicit few-shot demonstrations. For Level 2 analysis, we see that models have limited reliability in identifying a given event's arguments and modifiers (40-50%) or argument modifiers (10-40%). Additionally, just under half of parses include at least one spuriously-identified argument or modifier (“Extra Mods”).
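To make the few-shot setup described above concrete, the sketch below assembles a prompt from the five most similar annotated examples. It is a minimal illustration, not the authors' scripts: the generic embed function stands in for the Universal Sentence Encoder mentioned in the footnote, and the demonstration formatting ("Sentence: ... / AMR parse: ...") and chat-message layout are assumptions.

import numpy as np

def top_k_examples(test_sentence, pool, embed, k=5):
    # pool is a list of (sentence, gold_amr) pairs; embed maps a list of
    # sentences to a 2-D array of embeddings (one row per sentence).
    q = embed([test_sentence])[0]
    M = embed([s for s, _ in pool])
    sims = M @ q / (np.linalg.norm(M, axis=1) * np.linalg.norm(q))
    return [pool[i] for i in np.argsort(-sims)[:k]]

def build_fewshot_prompt(test_sentence, pool, embed):
    examples = top_k_examples(test_sentence, pool, embed)
    demo = "\n\n".join(f"Sentence: {s}\nAMR parse:\n{amr}" for s, amr in examples)
    instruction = ("Provide an AMR (Abstract Meaning Representation) parse for this sentence. "
                   "I will first show some examples.")
    return [
        {"role": "system", "content": "You are an expert linguistic annotator."},
        {"role": "user",
         "content": f"{instruction}\n\n{demo}\n\nSentence: {test_sentence}\nAMR parse:"},
    ]

The returned message list can be passed to any chat-completion endpoint; only the instruction strings and the use of five similarity-selected demonstrations are taken from the paper itself.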
Qualitative analysis indicates that models make diverse errors that can occur at any level of AMR structure, though they show the most reliable accuracy in representing event-argument triplets corresponding to subject-verb-object structures. See <ref> for additional examples and discussion on these points. § METALINGUISTIC NL RESPONSES By far the most thoroughly trained format for LLMs is that of natural language, so we next explore models' ability to use natural language to identify and describe the abstract meaning structure relevant for AMR, via prompting for meta-linguistic information about the target sentence. To do this, we formulate a natural language prompt instructing the model to identify and break down aspects of the sentence meaning structure corresponding to components of AMR, similar to the process that an AMR annotator would use. Our prompt for this setting is shown in <ref>. In this setting, the prompt asks for a breakdown of events, arguments, and event/argument modifiers, but does not elicit enough information to enable complete parses. For this reason, we focus on our Level 2 criteria for analyzing these outputs. Results are shown in the bottom of Table <ref>. We see that the overall patterns of accuracy are strikingly similar to those in the few-shot case (with marginal differences that are too small to read into with these small samples).[Though not included in Table <ref>, we note that the NL outputs also show comparable accuracy to the few-shot parses in the Main Rel category for all models.] This suggests that limitations in the zero- and few-shot parses are not due simply to difficulty in generating parse format, but may reflect more fundamental limitations in models' current capacity to analyze semantic structure in language. This conclusion is further supported by the observation that the instructions contained in the metalinguistic prompt do not appear at any level to be fundamentally too difficult for models to interpret: though models make many errors, for every component of the prompt they show in at least some cases the ability to produce correct outputs for that component. See <ref> for example outputs in the NL setting and discussion of instruction-following successes and error patterns. Comparing parse vs NL output Side-by-side inspection of parse and NL outputs supports the conclusion that, although errors are not identical, the two output formats may reflect real patterns in models' analytical capabilities. An illustration can be seen in Figure <ref>, which shows GPT-4 few-shot parse and zero-shot NL outputs for the sentence “He woke to an angry house and darkness in the windows”. This is a simple sentence, but the argument structure of the verb woke lacks the simple subject/object structure that models succeed at most often—and perhaps for this reason in both output formats the model misinterprets the sentence to include two separate events (missing the fact that the angry house and darkness in the windows is a single argument of the waking event) and creating nonsensical argument and modifier structures as a result of the mistaken analysis.
Additional side-by-side examples are included in Figures <ref>, <ref>, and <ref> in the Appendix, further illustrating similarities between parse and NL outputs, and supporting the possibility that the observed errors in these outputs reflect real underlying analytical limitations rather than artifacts of instructions or output format. § SMATCH COMPARISON To anchor our results relative to an existing metric, we obtain Smatch scores <cit.> for our five-shot GPT-4 parses and compare against those for the supervised AMRBART parser <cit.> on the same test sentences. Since GPT-4's generated parses are often flawed to the point of Smatch not being able to run, we report three methods for obtaining these scores: no fixes, in which all failing parses are simply replaced with single-node placeholder parses; auto fixes, in which some automated format fixes are applied (see <ref> for details) and remaining errors are replaced with placeholders; and manual fixes, in which we supplement automatic fixes with manual fixes to correct remaining format errors. The Smatch results (Table <ref>) clearly show that GPT-4 output quality is far below that of the supervised AMR parser, supporting our general observation that the quality of these LLM parses is limited. § CONCLUSION Our analyses show that LLMs have acquired sufficient knowledge of AMR parsing and semantic structure for reliable generation of basic AMR format and partially correct representations of sentence meaning. However, we see abundant, diverse errors in model outputs, virtually no fully accurate parses, and error patterns suggesting real underlying limitations in models' capacity to analyze language meaning. Our findings indicate that models are not currently sufficient out-of-the-box to yield reliable and accurate analyses of abstract meaning structure, and overall that this is a domain in which models show only mixed levels of expertise. We are confident that additional fiddling and clever manipulations can further improve the outputs of these models, at least on certain dimensions. However, we present these results as a current status report and reality check to counterbalance frequent claims focused on widespread success and intelligence of these models out-of-the-box. We look forward to continuing work to better understand the fundamental strengths and limitations of these models in this domain, and to improve the reliability of semantic analysis capabilities achievable through collaboration with these models. § LIMITATIONS In this paper we intend to provide an overview and status check on the out-of-the-box capabilities of current LLMs for the rich semantic analysis captured by AMR parses. To enable fine-grained manual evaluation not possible through standard metrics, we have used a small exploratory test set, and consequently our results do not enable statistical comparisons or claims about how patterns may play out at larger scale. We look forward to future work applying comparable fine-grained analysis on larger samples, to verify what additional patterns of success and failure may emerge, and what broader generalizations can be made about model capabilities in this domain. A potentially valuable extension that we do not include here would be a detailed comparison with models' success in other (likely simpler) semantic or syntactic parsing formalisms.
Given the richness of AMR and our focus on abstract semantic structure per se, we do not include such an analysis in the current work. § ACKNOWLEDGEMENTS This research was supported by the NSF DMS-2134012, DARPA MCS program through NIWC Pacific (N66001-19-2-4031), and the Allen Institute for AI. § DATASET DETAILS §.§ More on Abstract Meaning Representation AMR formalizes semantic structure of a sentence into directed graphs that capture the “who did what to whom” of the sentence <cit.>. In AMR, events and entities are represented as concept nodes, and semantic relationships or semantic roles as edges. AMR abstracts away from syntactic and morphological surface variations in favor of conceptual representation of predicate-argument structure of a sentence in part by adopting English PropBank <cit.> for event representation. In this way, AMR allows meaning generalization across various surface expressions (e.g., “The girl adjusted the machine” and “The girl made adjustments to the machine” would have the same AMR graph). AMR supports other linguistic analysis such as coreference, named entities, and modality (among others). §.§ Data Sources AMR 3.0. We use the AMR 3.0 dataset (AMR3; LDC2020T02), which includes 59K AMR graphs. The graphs are manually annotated from English natural language sentences from various genres of text including newswire, discussion forums, fiction and web text. We particularly focus on two English subsets: BOLT discussion forum data and LORELEI data. Our few-shot in-context examples and test instances are pulled from the AMR 3.0 training and dev sets, respectively. The Little Prince. We use publicly available AMR annotations of the novel The Little Prince by Antoine de Saint-Exupéry (translation of original French, Le Petit Prince; LPP). The corpus contains 1.5K sentences with their corresponding, manually-created AMR graphs. Our few-shot in-context examples and test instances are non-overlapping samples drawn from this dataset. 2023 Sentences. To experiment with sentences verifiably not present in pre-training, we randomly sample sentences from websites published in 2023. To obtain the AMR gold parses (2023), we run the Spring Parser <cit.> on the sentences, and then the output is manually corrected by one of the authors with expertise in AMR annotation. These 2023 sentences are used in the test set, and few-shot examples for the 2023 sentences are drawn from the AMR 3.0 training set. §.§ Test Data Selection AMR3 data was sourced from the AMR 3.0 BOLT and LORELEI instances with publicly available unified annotation from PropBank <cit.>.[<https://github.com/propbank/propbank-release>] The Little Prince has also been partially annotated with Universal Dependencies <cit.> parses <cit.>, and for this work we sourced from those The Little Prince instances that had both AMR and Universal Dependencies annotation. In both cases, we selected sentences of 40-300 character length to eliminate incomplete phrases as well as overly long sentences, producing 413 AMR 3.0 and 67 The Little Prince instances. We then narrowed the sets to 10 random instances from each of the two data subsets.[We ensured inclusion of diverse lengths via manual verification.] For 2023 Sentences, we manually selected sentences from online news sources and blogs with article date stamps of January 2023 or later. We selected 30 sentences, then narrowed the set to 10 instances of varying character lengths.
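The length-based filtering and sampling described above amounts to a few lines of code; the sketch below is illustrative only, and the instance dictionaries are placeholders for the actual AMR 3.0 / Little Prince records.

import random

def select_test_instances(instances, n=10, min_len=40, max_len=300, seed=0):
    # Keep sentences of 40-300 characters, then draw a random sample of n instances.
    eligible = [x for x in instances if min_len <= len(x["sentence"]) <= max_len]
    random.seed(seed)
    return random.sample(eligible, n)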
§ CONCERNS OF MEMORIZATION We took into consideration the possibility that the parses of AMR3 and LPP could be present in the pre-training data of the tested LLMs. This served as the motivation for including the 2023 sentences. Table <ref> shows Level 1 zero-shot parse results broken down by dataset. These results suggest that the quality of the AMR generations is not reliant on direct memorization of the annotated parses from pre-training—in fact, we find the results for 2023 parses to be nearly identical to the LPP results. A closer qualitative look at the parses did not surface any noteworthy differences in parses. § ANALYSIS CRITERIA DETAILS Here we provide further elaboration and illustration with respect to the analysis criteria outlined in Table <ref>. Assessment of Basic Form For the Basic Form criterion we simply ask whether or not the produced output looks (generally) consistent with AMR's standard structure. More specifically, a parse should critically retain three basic AMR format components: concept nodes, edge relationships (whether ARG# or modifier), and hierarchical bracketing notation. We do not require the parses to contain variables (e.g. b in (b / boy)) or to use rounded parentheses (e.g. ()), but we find that every generation with the three critical components also includes variables and parentheses. An example of a parse that receives a zero for the Basic Form is given in Figure <ref>. Figure <ref> shows a parse that is not fully up to AMR standards (e.g. default-01∼e.0 is not a standard format for concept representation) but that we credit for retaining the Basic Form. Overall acceptability For our overall acceptability measure, an AMR expert among the authors assessed whether each parse could be a valid representation of the sentence meaning, based on the AMR annotation guidelines, regardless of match to the gold annotation. This was intended to give fairer credit to model outputs—none of the models’ generated parses managed to perfectly match the gold annotation, but it was possible that some parses may still accurately represent the meaning of the sentence, with some annotation differences from the gold parse. So this measure used expert assessment to judge parse validity in this broader sense. These assessments also forgave minor structural/formatting errors, as long as a correct semantics could be interpreted. Main Rel vs Top Node At times the main event relation in a parse will also be the top node, but in AMR non-eventive relations (e.g., discourse markers, conjunctions, modality) can instead take the top node position (e.g. warrant in Figure <ref> is below the top node contrast-01 representing and). Main Rel disregards whether models can recognize these non-eventive relations, and focuses instead on models' ability simply to recognize the main event of the sentence. Relaxation on exact match For all of our match-based criteria, we evaluate based on relaxed matches rather than exact match. For concept nodes, we ignore PropBank sense labels and inflectional variations, and allow matching based on synonyms or otherwise differently-realized versions of the target concept.
For example, serve-01 is considered a match to serve, served, or serve-02—and although the AMR gold parse for “She served as a president for ...” uses have-organization-role-91 for “serve” (the standard AMR method for annotating organizational role, occupation, or profession), we also give credit to generated nodes labeled as any variant of serve. For edge match, the only critical distinction is that between arguments (e.g., ARG0, ARG1)[We simply treat ARG#-ofs as ARGs.] and modifiers (e.g., :time, :purpose). Models receive credit for identifying an argument as ARG even if the number is mismatched, and similarly receive credit for identifying a modifier as a modifier without regard to the semantic specificity. We also relax exact match on AMR's named entity types (e.g., node concept organization in (o / organization :name (n / name :op1 “Morgan” :op2 “Stanley”))). So long as the node concept is a reasonable match (e.g., company vs. organization), the models receive credit. This use of relaxed match increases the need for expert manual annotation, but allows us to credit general semantic competence beyond match to specific AMR conventions. § METALINGUISTIC PROMPT We use a single instruction prompt for the meta-linguistic natural language output setting reported in <ref>. The prompt used is shown below:
(System: You are an expert linguistic annotator.)
Sentence: <replaced with input sentence>
Identify the primary event of this sentence, and the predicate corresponding to that event. If there are multiple equally primary events connected by a conjunction like “and”, identify the conjunction, and then identify each of the primary events and their corresponding predicates. For each primary event, identify the arguments of the event predicate, and identify the modifiers of those arguments. Then for each primary event, identify any additional modifiers of that event.
§ EXAMPLE OUTPUTS AND DISCUSSION In this section we illustrate with additional examples some of the successes, failures, and overall patterns in the LLM outputs. Figures <ref>–<ref> show representative example outputs from GPT-4 in both parse generation (few-shot) and NL response settings (and an additional NL output example is in Figure <ref>). We highlight a number of points. Instruction-following success For the NL output setting in particular, the prompt instructions are somewhat complex, so it is worth considering whether the instructions are too difficult for models to map to correct outputs. However, examination of successes across model generations indicates that no part of the NL setting instructions is fundamentally too difficult for the models to interpret and respond to. In the NL response setting shown in Figure <ref>, the model is able to identify the primary event and arguments, and sort through and label modifiers for both the arguments and the event. Similar competence can be seen in the parse generation setting (Figure <ref>): GPT-4 correctly identifies the main event selection and major arguments and modifiers. For conjunctions between events, we see in Figure <ref> that in both NL and parse settings the model is able to handle the central conjunction “and”, and break down the two coordinated components accordingly—even breaking down the second coordinate into its own two component sub-events. On this basis we can have reasonable confidence that the prompts are sufficiently clear and interpretable for the models.
Errors are abundant and diverse Though models show the capacity in principle to handle any component of AMR information, examination of generated outputs shows that errors are abundant, diverse, and observable at every level of AMR structure. In addition to the illustration of output errors in Figure <ref> and Figure <ref>, we see that even in the largely successful example in Figure <ref>, the model has misidentified the primary event in the first coordinate—the main point should be that there was no issue, not that the speaker boarded the train. Further, in the NL response, it has mistakenly identified “next to us” as a modifier of the luggage, rather than an argument of the put event. In Figure <ref> we see that the model is unable to identify the main event in either the parse generation or the NL setting. Though the main event relation is most appropriately identified as “imagine”, in the generated parse the event “amaze” rises as the top event, and in the NL response output, “awaken” is identified as the main event. Even if we ignore the non-eventive information captured by cause (arising from the discourse marker “thus”) and possible-01 (signalled by the modal “can”) concepts, the model fails to show sensitivity to the fact that “imagine my amazement” conveys the central information through which the content in the rest of the sentence is introduced. Finally, Figure <ref> shows another more extreme failure in the NL response setting. Here the model has mistakenly zeroed in on “will look wonderful” as the primary event, rather than “it is a standout piece”, and as a result it has defined arguments and modifiers in a variety of nonsensical ways. Most reliable with core event triplets Though errors are diverse and fairly idiosyncratic, one trend that emerges is that models show the most reliable performance with core event-argument triplets for individual verbs, most often corresponding to subject-verb-object triplets. For example, in Figure <ref>, in both formats the model clearly identifies the event “churn” and its arguments “the K-pop music sphere” and “newest catchy songs”. In Figure <ref>, in both formats the model correctly structures the event “board” with its arguments “we” and “train”, and the event “put” with its arguments “we” and “the luggage”. In Figure <ref>, in both formats the model correctly captures the event “awaken” with its arguments “I” and “an odd little voice”. This suggests that the model has a solid grasp on core verb argument structure—or at least that corresponding to subject-verb-object triplets—and can reliably map this to AMR form. However, beyond this core event structure, model performance becomes substantially less reliable. Parallel patterns between parse and NL We observe in Section <ref> that there are similarities in the basic patterns of success and failure across LLM parse and NL outputs, and we highlight Figure <ref> as an example. We see these parallels in Figures <ref>–<ref> as well: for instance, as we have just discussed, the consistent success on verb-argument triplets described above is seen in both parse and NL outputs for each example. More broadly, in Figure <ref> we see that in both formats the model is successful on nearly the full AMR structure: it identifies the main verb (“churn”) and its arguments (“sphere” and “song”) and modifiers (“vie for”), and it captures semantic modifiers for the arguments in a coherent manner (e.g., “K-pop” is recognized as modifier of “music sphere”).
In Figure <ref>, in both settings the model is successful in identifying “and” as a top-level conjunction joining two events, but makes the error of choosing “boarding a train” as the main event in the first coordinate of the structure. Similarly, in Figure <ref> both output settings capture core event structure of “awaken”, but miss out on the central event “imagine”. There are certainly divergences in output errors between different output formats for a given sentence. However, these divergences often stem from the fact that the tasks in these two settings do differ. For example, errors like the missing have-degree-91 in Figure <ref>—an AMR device meant to structure information extent—are not possible in the NL setting, as this level of structured detail is not requested in the prompt. Similarly, “sit down” in the parse in Figure <ref> is split into two concept nodes, but this is a level of semantic structuring that cannot be gauged in the NL responses. These parallels suggest that patterns of successes and errors in our observed outputs are not simple idiosyncrasies of instruction or output format, but may indicate deeper patterns of strength and limitation in the models' capacity for semantic analysis. § AUTOMATED FORMAT FIXES Our automated format fixes, used for two out of three of our settings for Smatch calculation on GPT-4 few-shot parses, consist of the three simple rule-based fixes detailed below (a sketch implementing them follows the list):
* Keep only the first full AMR, and ignore any subsequently generated content (e.g., if the model generates multiple separate AMR structures for a single sentence).
* For the retained AMR parse, delete any unmatched right parentheses.
* Identify duplicates among concept variable names (e.g., “s” in (s / sphere)), and replace them with non-duplicates.
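The following is a minimal sketch of these three rule-based fixes. The function names and the variable-renaming scheme are illustrative and not taken from the paper's released code.

import re

def keep_first_amr(text: str) -> str:
    # Keep only the first parenthesis-balanced AMR; drop anything generated after it.
    start = text.find("(")
    if start == -1:
        return text
    depth = 0
    for i, ch in enumerate(text[start:], start):
        depth += ch == "("
        depth -= ch == ")"
        if depth == 0 and i > start:
            return text[start:i + 1]
    return text[start:]

def drop_unmatched_right_parens(text: str) -> str:
    out, depth = [], 0
    for ch in text:
        if ch == "(":
            depth += 1
        elif ch == ")":
            if depth == 0:
                continue  # unmatched right parenthesis: drop it
            depth -= 1
        out.append(ch)
    return "".join(out)

def rename_duplicate_variables(text: str) -> str:
    # Give a fresh name to a re-declared variable, e.g. a second "(s / ...)" becomes "(s2 / ...)".
    seen, counter = set(), {}
    def repl(match):
        var = match.group(1)
        if var not in seen:
            seen.add(var)
            return match.group(0)
        counter[var] = counter.get(var, 1) + 1
        return match.group(0).replace(var, f"{var}{counter[var]}", 1)
    # A variable declaration looks like "(x /" in PENMAN notation.
    return re.sub(r"\(\s*([a-z][a-z0-9]*)\s*/", repl, text)

def apply_format_fixes(raw_parse: str) -> str:
    return rename_duplicate_variables(drop_unmatched_right_parens(keep_first_amr(raw_parse)))

Note that the renaming step above only touches the duplicate declaration itself; references to a re-used variable are inherently ambiguous and would still need manual fixes, which is consistent with the manual-fixes setting reported in the Smatch comparison.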
http://arxiv.org/abs/2310.17793v2
{ "authors": [ "Allyson Ettinger", "Jena D. Hwang", "Valentina Pyatkin", "Chandra Bhagavatula", "Yejin Choi" ], "categories": [ "cs.CL", "cs.AI" ], "primary_category": "cs.CL", "published": "20231026214759", "title": "\"You Are An Expert Linguistic Annotator\": Limits of LLMs as Analyzers of Abstract Meaning Representation" }
[email protected] Hyderabad Professor CR Rao Rd, Gachibowli Hyderabad Telangana India 500032 printfolios=trueThis research study investigates the minimization of inequality in the ranks of vertices obtained using the PageRank algorithm. PageRank is a widely used algorithm for ranking webpages and plays a significant role in determining web traffic. This study employs the Gini coefficient, a measure of income/wealth inequality, to assess the inequality in PageRank distributions on various types of graphs. The investigation involves two experiments: one that modifies strategies for handling dead-end nodes and another that explores six deterministic methods for reducing inequality. Our findings indicate that a combination of two distinct heuristics may present an effective strategy for minimizing inequality. <ccs2012> <concept> <concept_id>10003752.10003809.10003635</concept_id> <concept_desc>Theory of computation Graph algorithms analysis</concept_desc> <concept_significance>500</concept_significance> </concept> </ccs2012> [500]Theory of computation Graph algorithms analysisHeuristics for Inequality minimization in PageRank values Subhajit Sahu January 14, 2024 =========================================================§ INTRODUCTION The PageRank algorithm is a critical tool in web search and ranking, determining the order of search results on popular search engines. This algorithm evaluates webpages' popularity based on the idea that pages linked by other popular pages are themselves considered popular. This can result in a positive feedback loop, where already popular pages receive even more traffic, exacerbating inequality.Inequality in web ranking is a pressing concern, as excessive inequality can lead to social unrest. To address this issue, this research focuses on minimizing inequality in PageRank rankings using various heuristics. Two main experiments are conducted: adjusting dead-end handling strategies and comparing deterministic approaches for inequality minimization.§ RELATED WORK Algorithmic fairness has attracted significant attention in the past years <cit.>. Given two groups of nodes, we say that a network is fair, if the nodes of the two groups hold equally central positions in the network <cit.>. Saxena et al. <cit.> note that structural bias of social networks impact the fairness of a number of algorithms used for social-network analysis. Xie et al. <cit.> present a visual analysis framework for exploring multi-class bias in graph algorithms.Tsioutsiouliklis et al. <cit.> two classes of fair PageRank algorithms - fairness-sensitive PageRank and locally fair PageRank. They define a stronger fairness requirement, called universal personalized fairness, and show that locally fair algorithms also achieves this requirement. Krasanakis et al. <cit.> present an algorithm for fair ranking with personalization, even when the personalization suffers from extreme bias while maintaining good rank quality. In another paper, Tsioutsiouliklis et al. <cit.> provide formulae for estimating the role of existing edges in fairness, and for computing the effect of edge additions on fairness. They then propose linear time link recommendation algorithms for maximizing fairness. § PRELIMINARIES§.§ PageRankConsider a directed graph G(V, E, w), with V (n = |V|) as the set of vertices and E (m = |E|) as the set of edges. The PageRank R[v] of a vertex v ∈ V in this graph measures its importance based on incoming links and their significance. 
Equation <ref> defines the PageRank calculation for a vertex v in G. G.in(v) and G.out(v) represent incoming and outgoing neighbors of v, and α is the damping factor (usually 0.85). Initially, each vertex has a PageRank of 1/n, and the power-iteration method updates these values iteratively until they converge within a specified tolerance τ, indicating that convergence has been achieved. R[v] = α×∑_u ∈ G.in(v) R[u]/|G.out(u)| + (1 - α)/n §.§ Gini coefficient The Gini coefficient G is a value that represents income/wealth inequality within a nation or group. It ranges from 0 to 1, with 0 representing total equality and 1 representing total inequality. It is calculated from the Lorenz curve, which plots cumulative income/wealth against the cumulative number of households/people. It is calculated using Equation <ref>, where A is the area between the line of perfect equality and the Lorenz curve, and B is the area under the Lorenz curve. G = A/(A+B) § APPROACH In the first experiment, we investigate the impact of three different dead-end handling strategies on the Gini coefficient of PageRank values across various types of graphs. In the second experiment, we evaluate six different deterministic heuristics that add edges to the graph in order to minimize the Gini coefficient of PageRank values. Datasets for the experiments are obtained from the SuiteSparse Matrix Collection (https://sparse.tamu.edu) <cit.>. Our experiments are reproducible. The codebase is available at our repository.[https://github.com/puzzlef/pagerank-minimize-inequality] §.§ Gini coefficient of PageRank values with different dead-end handling strategies In this experiment we study the Lorenz curve and Gini coefficient of PageRank values on a number of graphs, and compare PageRank values obtained with three different dead-end handling strategies: teleport from dead-ends (default), self-loop dead-ends (loop), and self-loop all vertices (loopall). The PageRank values of vertices in each graph are obtained with nvgraph.sh (https://www.npmjs.com/package/nvgraph.sh) <cit.>, which internally uses nvGraph PageRank (https://docs.nvidia.com/cuda/archive/10.0/nvgraph/index.html#nvgraph-pagerank-example) <cit.>. The Lorenz curve of ranks is obtained by sorting the ranks in ascending order and cumulatively summing them up to obtain 100 samples. These 100 samples are then compared with the ideal (total equality) Lorenz curve to obtain the Gini coefficient. Note that this is output into YAML files by nvgraph.sh itself. This measurement process of Lorenz curve and Gini coefficient is repeated for the loop and loopall variants of the graphs, which are generated from the original graph using graph-generate (https://github.com/puzzlef/graph-generate) <cit.>. Finally, we process all YAML files into CSV, separately for the Gini coefficient and the Lorenz curve, and compare the results. §.§.§ Results Results, shown in Figures <ref> and <ref>, indicate that web graphs in general (with one exception) have a high Gini coefficient (i.e., high inequality), along with a social network and a citation network. Road networks are observed to have the lowest Gini coefficient (i.e., low inequality) among all graph classes. If we take a look at the average Lorenz curve of all graphs, we observe that 50% of popularity (ranks) is owned by ≈20% of the vertices. However, on the web-Stanford graph, 50% of popularity is owned by only ≈3% of vertices, and on another web graph by only ≈1% of the vertices.
This would be a significantly worrying level of inequality if each vertex represented a unique person. However, it is possible that many low-ranked pages are low-effort ones and thus have a high page-to-person ratio. On a social network, 50% of popularity is owned by only ≈7% of vertices (Gini coefficient of ≈0.66), but on a communication graph 50% of popularity is owned by ≈46% of vertices (Gini coefficient of ≈0.07). This is quite interesting, given that wiki users are usually not ranked, while search engines always rank web pages. Road networks are observed to have a similar distribution. §.§ Heuristics for Inequality minimization In this experiment we study the minimization of the Gini coefficient of PageRank values on a number of graphs, using six different deterministic heuristics for adding edges to the graph. First, the PageRank of each vertex is computed in the original graph, and the original Gini coefficient is obtained. A heuristic is then run to obtain the most suitable edge to be added. After this edge is added, the same heuristic is run again. For each heuristic, 1000 edges are added. We plot the variation of the Gini coefficient with each added edge for each heuristic. Our first heuristic, edgeInsertCxrx, adds an edge from the highest contributing vertex to the lowest-rank vertex. The idea behind this heuristic is to provide the highest possible increase in rank to the lowest-rank vertex. We obtain the highest contributing vertex by finding the vertex with the highest R/(d+1) value. The second heuristic, called edgeInsertCxSx, is based on the idea of providing the highest possible increase in rank to a vertex which directly or indirectly links to many other vertices (so that it increases the rank of a large number of other vertices as well). This is achieved by adding an edge from the highest contributing vertex to the vertex with the highest reverse PageRank. Here, the reverse PageRank of a vertex is obtained by reversing (transposing) the graph and calculating the PageRanks. The third heuristic, called edgeInsertCxSr, is an extension of edgeInsertCxSx, and it prioritizes increasing the rank of vertices which link (directly or indirectly) to a large number of vertices having a low PageRank score. This is done by calculating a modified reverse PageRank that prioritizes contribution from vertices with low forward PageRank. Here, the reverse rank of each vertex is calculated as r_u = α R_u r_v / d_v + (1-α)/N, where r_u is the reverse rank of a given vertex and R_u is its forward rank (precomputed), r_v is the reverse rank of a target vertex and d_v is its in-degree, α is the damping factor, and N is the number of vertices in the graph. The remaining three heuristics, edgeInsertCRrx, edgeInsertCRSx, and edgeInsertCRSr, are variations of the three heuristics mentioned above where the source vertex is chosen such that it minimizes the rank of the highest ranked vertex. That is, we choose the source vertex with the highest contribution to the highest-rank vertex. The idea is to reduce the rank of high-ranked vertices and increase the rank of low-ranked vertices at the same time, thus reducing inequality.
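A minimal sketch of the first heuristic (edgeInsertCxrx) is shown below. It uses networkx for the PageRank computation; the function names, the example graph, and the tie-handling are illustrative rather than taken from the released codebase.

import networkx as nx

def edge_insert_cxrx(G: nx.DiGraph, alpha: float = 0.85):
    # Add one edge from the highest-contributing vertex to the lowest-rank vertex.
    R = nx.pagerank(G, alpha=alpha)
    # Contribution of u to a newly added out-neighbor would be R[u] / (d_out(u) + 1).
    source = max(G.nodes, key=lambda u: R[u] / (G.out_degree(u) + 1))
    target = min(G.nodes, key=lambda v: R[v])
    # In this sketch, a self-edge or an already existing edge is simply skipped.
    if source != target and not G.has_edge(source, target):
        G.add_edge(source, target)
    return G

# Repeatedly apply the heuristic (the paper adds 1000 edges per heuristic).
G = nx.gnp_random_graph(100, 0.05, directed=True, seed=1)  # stand-in for a SuiteSparse graph
for _ in range(50):
    edge_insert_cxrx(G)

The other heuristics differ only in how the source and target vertices are chosen (e.g., the target with the highest reverse PageRank for edgeInsertCxSx), so they can reuse the same edge-insertion skeleton.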
§.§.§ Results It is observed that web graphs tend to have the highest inequality (Gini coefficient), while road networks tend to have the lowest. As shown in Figure <ref>, results indicate that the heuristics usually succeed in reducing inequality on graphs with a high Gini coefficient (such as web graphs and social networks), but mostly fail on graphs with a low Gini coefficient (such as road networks and collaboration networks). It is also observed that the rate of decrease in the Gini coefficient decreases as more and more edges are added to the graph. In general, we observe that the heuristics edgeInsertCxrx, edgeInsertCxSx, and edgeInsertCxSr perform the best, with edgeInsertCxSx and edgeInsertCxSr performing almost identically. The edgeInsertCxrx and edgeInsertCxSx heuristics would therefore be the best choices, given that edgeInsertCxSr requires a modified PageRank computation. Based on these results, a suitable approach to minimizing inequality would be to apply both the edgeInsertCxrx and edgeInsertCxSx heuristics and choose the best among them for each edge addition. Future research work can include exploring randomized heuristics or looking for better deterministic heuristics. § CONCLUSION The study highlights that inequality is prevalent in web graphs, and demonstrates that efforts to minimize it are more effective in contexts with high Gini coefficients. The choice of heuristics plays a crucial role in reducing inequality. Our research suggests that a combination of edgeInsertCxrx and edgeInsertCxSx heuristics may offer an effective approach to minimize inequality. Future research should continue to explore strategies to mitigate inequality in web ranking algorithms and promote a more equitable web environment.
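For reference, the Lorenz-curve-based Gini computation described in the Preliminaries and Approach sections can be sketched as follows. The 100-sample discretization mirrors the measurement procedure described above; the variable names and the example graph are illustrative.

import numpy as np
import networkx as nx

def gini_from_ranks(ranks, samples: int = 100) -> float:
    # Gini coefficient of a vector of PageRank values via the Lorenz curve.
    r = np.sort(np.asarray(ranks, dtype=float))          # ascending order
    lorenz = np.cumsum(r) / r.sum()                       # cumulative share of total rank
    # Discretize the Lorenz curve into `samples` points, as done for the YAML output.
    xs = np.linspace(0, 1, samples + 1)
    pts = np.interp(xs, np.linspace(0, 1, len(r) + 1), np.insert(lorenz, 0, 0.0))
    area_B = np.trapz(pts, xs)                            # area under the Lorenz curve
    area_AB = 0.5                                         # area under the line of perfect equality
    return (area_AB - area_B) / area_AB                   # G = A / (A + B)

# Example: Gini coefficient of PageRank values on a small random graph.
G = nx.gnp_random_graph(200, 0.03, directed=True, seed=0)
print(round(gini_from_ranks(list(nx.pagerank(G).values())), 3))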
http://arxiv.org/abs/2310.18537v1
{ "authors": [ "Subhajit Sahu" ], "categories": [ "cs.CY", "cs.SI", "K.4.2" ], "primary_category": "cs.CY", "published": "20231027233612", "title": "Heuristics for Inequality minimization in PageRank values" }
Graph Convolutional Networks for Complex Traffic Scenario Classification Tobias Hoek^1,2 Holger Caesar^1^1TU Delft Andreas Falkovén^2^2Kognic Tommy Johansson^2 January 14, 2024 =============================================================================================== A proper evaluation of stories generated for a sequence of images—the task commonly referred to as visual storytelling—must consider multiple aspects, such as coherence, grammatical correctness, and visual grounding. In this work, we focus on evaluating the degree of grounding, that is, the extent to which a story is about the entities shown in the images. We analyze current metrics, both designed for this purpose and for general vision-text alignment. Given their observed shortcomings, we propose a novel evaluation tool, GROOViST, that accounts for cross-modal dependencies, temporal misalignments (the fact that the order in which entities appear in the story and the image sequence may not match), and human intuitions on visual grounding. An additional advantage of GROOViST is its modular design, where the contribution of each component can be assessed and interpreted individually. § INTRODUCTION Generating a textual story that is plausible given a sequence of images is a challenging task involving aspects such as cross-modal interactions, temporal dependencies between linguistic and visual content, and causal reasoning. In the language-and-vision community, <cit.> operationalized the task and released the Visual Storytelling Dataset (VIST), a collection of English stories created by speakers on top of 5-image visual sequences. Several models have been proposed for the task of generating plausible stories for a given sequence, ranging from RNNs <cit.> to Transformers, trained either end-to-end or leveraging additional knowledge-graphs <cit.>. Evaluating the quality of the automatically generated stories is extremely difficult: Given the creative nature of the task (many stories could be sensible for a given image sequence), reference-based metrics like METEOR <cit.> or CIDEr <cit.> are not appropriate—they indeed poorly correlate with human judgments <cit.>. Moreover, a proper evaluation must consider multiple aspects, such as coherence, grammaticality and, importantly, visual grounding. Yet, most evaluation metrics proposed specifically for visual storytelling do not consider the images at all <cit.>. In this paper, we focus on evaluating a story's degree of grounding, that is, the extent to which a story is about the entities shown in the images. To the best of our knowledge, there is only one metric proposed to date for evaluating grounding in visual storytelling, the Visual Grounding scorer (RoViST-VG) by <cit.>. We carry out an extensive analysis of this metric and reveal that it has critical shortcomings. To overcome this, we propose a novel, modular evaluation tool, which we name GROOViST (grounding objects in visual storytelling). We show that GROOViST is robust to temporal misalignments, correlated with human intuitions about grounding, and easy to interpret. Our code is available at: <https://github.com/akskuchi/groovist> § ANALYSES OF EXISTING METRICS To assess the level of visual grounding of a story in visual storytelling, <cit.> proposed RoViST-VG. This metric is the output of a model pre-trained on the Flickr30K Entities dataset <cit.> to learn the relationships between the nouns in a story and the regions of an image in a contrastive learning regime.
For a given <image-sequence, story> pair, RoViST-VG extracts: from each image, the bounding boxes and corresponding visual features of its 10 most salient regions, using FasterRCNN <cit.>; from the story, the GloVe <cit.> representations of each noun in it. The pre-trained model receives these extracted embeddings (from GloVe and FasterRCNN) and returns the final representations T and I, respectively. The grounding score is then calculated using Eq. (<ref>) as the maximum cosine similarity between T and I, weighted by inverse document frequencies (idf) of the nouns.[More details on RoViST-VG are provided in Appendix <ref>.] RoViST-VG = log ∑_i=1^|T_e| exp( idf(T_i) max_I_e,j ∈ I_e cos(T_e,i, I_e,j) ) To analyze the suitability of RoViST-VG, we compare it to CLIPScore <cit.>. CLIPScore has not been designed to evaluate visual storytelling. Here, we use it to score each image-sentence pair independently in a story sequence. This approach is not ideal as it cannot capture temporal misalignments between a text and the visual content (e.g., an early sentence may be `they were getting ready to go to the circus' but the circus may only appear later). However, since CLIPScore has been designed for general vision-text alignment, we expect it to be reasonably effective at capturing visual grounding at the image-sentence level. It corresponds to the cosine similarity between CLIP's <cit.> representations of a sentence c and an image v (with 2.5 as re-scaling factor). Next, we explore how well the above metrics capture grounding in visual storytelling data. §.§ Grounding in visual storytelling datasets We analyze the scores assigned by these metrics to the stories in three visual storytelling datasets: (1) VIST <cit.>, that comprises sequences of five natural images (from Flickr) and corresponding five-sentence stories; (2) AESOP <cit.>, that includes sequences of three synthetic images <cit.> and corresponding three-paragraph long stories; (3) VWP <cit.>, which comprises sequences of movie shots, each including 5-10 images with corresponding sentences that make up their stories. We compute RoViST-VG and CLIPScore on the original <image sequence, story> pairs in the test splits of these datasets,[5055 samples for VIST and 991 for AESOP. Due to the lack of a separate test split for VWP, we considered all 13843 samples in the dataset.] and compare these scores to the ones obtained on a random setting where each image sequence is paired with five random stories (from the corresponding dataset); among these, we consider the pair that receives the highest score. We expect a metric that properly captures visual grounding to assign higher scores to the original stories than to the randomly paired stories. Figure <ref> shows the average scores of the metrics in both settings. Surprisingly, RoViST-VG scores are not higher in the original setting than in the random setting. In fact, on VIST, the random <image sequence, story> pairs receive higher RoViST-VG scores than the original ones. In contrast, CLIPScore follows the expected pattern. §.§ Correlation with Flickr8k-Expert ratings We assess the ability of the two metrics to capture general image-caption grounding using Flickr8k-Expert <cit.>, a publicly available dataset with human ratings for image-caption pairs. In particular, we consider the subset of 3391 samples where all three annotators agree.[Human annotators rated captions on a scale of 1 to 4.] CLIPScore is designed for this purpose and is therefore well-suited for the task.
RoViST-VG is not meant for measuring image-caption grounding, although it should align with human ratings to some extent, given its purpose and pre-training. However, as we can see in Table <ref>, RoViST-VG shows no correlation with human ratings—while CLIPScore does. § GROOVIST Our analyses showed that RoViST-VG has some important limitations as a metric for assessing the degree of visual grounding—both in stories and image captions. To overcome this, we propose a novel, modular evaluation tool, GROOViST, a modular metric consisting of various components informed by insights from both CLIPScore and RoViST-VG. These are: Noun phrase (NP) extraction We process the story and extract all the NPs;[Using spaCy's English transformer pipeline for chunking: <https://spacy.io/models/en#en_core_web_trf>] this is similar to RoViST-VG but better because RoViST-VG only considers nouns and fails to handle compounds such as `parking lot'. Additionally, focusing on NPs allows for the contribution of accompanying adjectives (e.g., `silly faces'). Vision-language alignment We compute alignment scores between all the extracted bounding boxes and NPs and select the highest score for each NP. This step is similar to RoViST-VG but, instead of training a dedicated model, we use the off-the-shelf CLIP <cit.> model. Penalizing poorly grounded NPs The previous steps result in a positive score for all the NPs in a story. Yet, some may in fact be poorly grounded (i.e., have low visual alignment score). Such NPs, therefore, should contribute negatively to the overall degree of grounding of a story. To operationalize this, we select the mean score over all NPs in the entire dataset as a threshold θ and calculate the distance of each NP's score from θ, assigning negative values to NPs with scores below θ (NP_neg) while retaining the scores of NPs with values above θ (NP_pos). Concreteness weighting RoViST-VG uses inverse document frequencies (idf) for weighting the similarity scores of nouns to handle abstract frequent words such as `time'. However, we observe that idf weights tend to increase the similarity scores of some less-frequent non-grounded nouns and decrease the scores of some frequent-and-grounded nouns, adversely affecting the overall score.[Examples are provided in Appendix <ref>.] Hence, after the penalization step, we use word concreteness ratings <cit.> for weighting the resulting scores (instead of idf) and capture the fact that concrete NPs are more likely to be visible.[98.7% of NPs in the VIST test set contain words for which concreteness ratings are available.] Normalization Finally, to obtain the GROOViST score of a story, we aggregate the weighted scores of all its NPs and normalize the sum by the total number of NPs in the story, which results in a value unaffected by story length (or more precisely, by the number of NPs in it): ( ∑_i=1^n NP_pos_i + ∑_i=1^m NP_neg_i ) / ( n+m ), where n and m are the number of NPs with positive and negative scores, respectively. See Figure <ref> for how this facilitates interpretability. The pseudo-code and a working example for GROOViST are provided in Algorithm <ref> and Figure <ref>, respectively. GROOViST scores are unbounded by default, but tanh can be used to map them to the [-1, 1] range. § ROLE OF GROOVIST COMPONENTS We test GROOViST on the same evaluation criteria used in Section <ref>. From Figure <ref> and Table <ref>, we observe that GROOViST fares well on both evaluation criteria. First, it assigns higher grounding scores to original compared to random stories.
Second, it moderately correlates with human image-caption ratings. This indicates that GROOViST is a more robust metric than RoViST-VG.To understand the impact of GROOViST's components on the final grounding score, we conduct several experiments by both ablating the components and replacing them with plausible alternatives. Ablations Penalizing poorly grounded NPs and Concreteness weighting are the two components of GROOViST that can be ablated from the metric. Replacements The Concreteness weighting and Noun phrase (NP) extraction components of GROOViST can be replaced with idf weights and nouns, respectively.In total, we consider six alternative versions of our metric, which we obtain by applying all possible combinations of ablations and replacements. We test these versions on the same evaluation criteria used in Section <ref>. Table <ref> reports how they fare with respect to the two criteria we consider.We observe that ablating or replacing components from GROOViST results in scores that either do not meet at least one of the criteria or do so to a much lower extent.[The resulting values are provided in Appendix <ref>.] This is particularly apparent in the metric versions where the Penalizing poorly grounded NPs component is ablated, which further confirms its importance. The GROOViST (-C +idf) version satisfies Criterion 1, indicating that frequency-based information can be helpful as a heuristic. However, it may result in discrepancies as shown in Appendix <ref>, Figure <ref>. We consider concreteness to be a more theoretically motivated notion than frequency to capture visual grounding. Its value is apparent with respect to Criterion 2: replacing Concreteness weighting with idf weighting decreases the correlation of the metric scores with Flickr8k-Expert ratings.§ EVALUATION OF GROOVISTTo further evaluate the extent to which GROOViST captures intuitions on stories' degree of visual grounding, we compare our metric to human judgments. Since no previous work collected human data for this specific purpose, we run a small data collection by asking 5 participants to rate a sample of the VIST data. In particular, we ask participants to provide ratings for 100 randomly sampled VIST <image sequence, story> pairs, using a 4-point Likert-like scale (instructions: “a score of 4 indicates that most aspects mentioned in the story are depicted in the sequence of images”).[Appendix <ref> provides further details.] We formulate two hypotheses about the strengths and weaknesses of GROOViST and CLIPScore and experimentally test their validity using the human grounding judgments. §.§ Temporal misalignmentEffective metrics for measuring grounding in visual storytelling should account for possible temporal misalignments between the visual and textual modality. That is, they should account for the fact that entities that are grounded in an image could be mentioned earlier or later in the story—not necessarily in the corresponding sentence. We hypothesize that GROOViST—since it takes into account the entire story holistically—correlates better with human judgments than CLIPScore on samples with high temporal misalignment. To test this hypothesis, we define temporal misalignment t of a sentence_i in a sequence as the number of its NPs matching with visual entities in images (img_j ≠ i) at other positions of the sequence, normalized by the total number of its NPs. 
The overall temporal misalignment T of a story is then the average of its sentence-level t values:t(sentence_i) = #(NPs matching img_j ≠ i)/#(NPs in sentence_i) T(story) = ∑_i=1^n t(sentence_i)/ nwhere n is the number of sentences in a story. We consider a story to have high temporal misalignment if T ≥ 1.0, i.e., at least as many as the average number of NPs per sentence are misaligned. In the annotated data, T∈[0.16, 1.53] and 18% of the stories exhibit high temporal misalignment, indicating the prevalence of the phenomenon. As can be seen in Figure <ref>, our hypothesis is confirmed: GROOViST exhibits a higher correlation with human ratings than CLIPScore on samples with a high T, i.e., its scores are overall more aligned with human intuitions when in the presence of temporally misaligned entities. This confirms the ability of GROOViST to handle non-trivial grounding dynamics in a story, different from CLIPScore. At the same time, we notice that CLIPScore achieves a higher correlation than our metric in samples with low T, which confirms once again that the former is an effective tool for capturing grounding in well-aligned multimodal data. §.§ Proportion of noun phrasesGROOViST builds on noun phrases. As explained above, this has some obvious advantages, e.g., it allows to measure the individual contribution of each NP toward the final score (seeFigure <ref>), but also some possible limitations. For example, we hypothesize that GROOViST scores may be dependent on the number of NPs; for stories where grounding hinges mostly on NPs, we expect GROOViST to be well aligned with human intuitions; less so when it hinges on verbs, for example, in which case CLIPScore may be better. To test this hypothesis, we define proportion-of-NPs (P) of a story as the fraction of NPs to all the words in the story: P(story) = #(NPs in story)/#(all words in story) We focus on the subset of <image sequence, story> pairs with high human ratings,[Human rating ≥ 3 on a scale of 1 to 4.] to ensure our analysis genuinely explores the role of NPs in well-grounded stories without being influenced by other factors. We then compute P values for these sequences and bin them into two sets—low P and high P—using the distribution's mode (0.2325).[The same results also hold when using mean and median.] The high P bin comprises 32.7% of the total number of subset samples. In Figure <ref>, we see that our hypothesis is confirmed. GROOViST scores turn out to be very well aligned with human intuitions—and indeed significantly more correlated than CLIPScore—in the high P bin. In contrast, our metric lags behind CLIPScore in the low P bin, though the distance between the metrics is rather small, and the two metrics generally achieve very low correlations. Although the dependency of GROOViST on the proportion of NPs in a story might be seen as a limitation of the metric, we argue that nouns and accompanying phrases tend to offer the most visual information <cit.>. As for RoViST-VG, it achieves a very low correlation with human ratings in both analyses, which confirms its flaws.§ CONCLUSION We proposed GROOViST, a novel reference-free metric for evaluating grounding in visual storytelling, an aspect that surprisingly is often overlooked in this task. We showed that existing metrics have serious shortcomings, and analyzed the strengths and limitations of our proposed metric. GROOViST is modular, highly interpretable, and aligned with human intuitions on visual grounding. 
Preliminary results indicate that GROOViST is a suitable tool for evaluating automatically generated stories. We plan to test this aspect extensively in future work. § LIMITATIONSIn this section, we discuss the limitations specific to our metric and to the general reference-free evaluation paradigm. As discussed in Section <ref>, GROOViST is heavily dependent on noun phrases making it oblivious to other visually informative words, such as verbs. For identifying poorly grounded NPs, GROOViST relies on a threshold value, which is determined based on the dataset of interest. This makes GROOViST vulnerable to the skew of the dataset. Despite our preliminary analysis, GROOViST's evaluation of model-generated stories is yet to be fully tested. Also, in general, reference-free metrics rely on an underlying pre-trained model, which often is stagnant in learning and might require regular fine-tuning for prolonged future relevance. We would also like to underline that throughout this work, we only used and evaluated models trained in the English language text. However, given the modularity of GROOViST, it is possible to switch to models such as multilingual-CLIP <cit.>.§ ETHICS STATEMENT For collecting human judgments, we recruited participants on a voluntary basis among colleagues of our institution. All data collected for this work is de-identified to ensure the privacy and security of everyone involved. The authors of the VIST dataset <cit.> mention that all images used are CC-licensed.§ ACKNOWLEDGEMENTSWe thank the participants of the human evaluation study and the members of the Dialogue Modelling Group for their vital feedback on the design and experiments. Furthermore, we are grateful to the anonymous EMNLP reviewers for their helpful suggestions and for engaging in fruitful discussions. AKS is supported by the TIMELY project under EU H2020 grant 101017424. RF is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 819455).acl_natbib § APPENDIX § TEXT CONCRETENESS EXAMPLE Figure <ref> shows that through idf weighting RoViST-VG penalizes the alignment score of the relatively frequent noun `church' (0.266). Inversely, idf weighting increases the low alignment score of the abstract and relatively less-frequent noun `pic' (0.232). This could have unintended effects on the overall score of the metric, resulting in several discrepancies discussed in Sections <ref> and <ref>.§ HUMAN EVALUATION We recruited participants on a voluntary basis among colleagues of our institution. We asked the participants for their consent through an informed consent form (see Figure <ref>). Participants who expressed their consent, were provided access to a scoring web interface with instructions. Each of the participants provided ratings for 100 <image sequence, story> pairs of the VIST data test set on a 4-point Likert-like scale.§ ROVIST-VGModel The RoViST-VG model comprises an image-encoder and a text-encoder. The image-encoder encompasses a pre-trained ViT model, an additional linear layer (W_i), and a tanh activation function for obtaining the image representations I_e. The text-encoder has a linear layer (W_t) and a tanh activation function for encoding GLoVe vectors into text embeddings T_e.Pre-training The procedure used for pre-training the RoViST-VG model is provided in Algorithm <ref>. 
§ GROOVIST PSEUDOCODE For a given <image sequence, story> pair, the pseudocode in Algorithm <ref> outlines the steps involved in computing the GROOViST score.§ ABLATION AND REPLACEMENT RESULTS
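As a rough, runnable companion to the pseudocode in Algorithm <ref> (and not a substitute for the official implementation), the sketch below assumes that the per-NP alignment scores (e.g., from CLIP) and the word-concreteness ratings have already been computed; the function and variable names are illustrative, and details such as the exact scaling of the concreteness weights may differ from the actual metric.

import numpy as np

def groovist_score(np_align_scores, np_concreteness, dataset_mean_score):
    # Penalize NPs whose best image-region alignment falls below the
    # dataset-level threshold theta, weight by concreteness, then normalize
    # by the number of NPs in the story.
    scores = np.asarray(np_align_scores, dtype=float)
    conc = np.asarray(np_concreteness, dtype=float)
    theta = dataset_mean_score
    # NPs below theta contribute their (negative) distance from theta;
    # NPs above theta keep their alignment score.
    signed = np.where(scores < theta, scores - theta, scores)
    weighted = conc * signed
    raw = weighted.sum() / len(scores)   # normalize by total #NPs in the story
    return np.tanh(raw)                  # optional mapping to [-1, 1]

# Toy story with 4 NPs: alignment scores, concreteness ratings, dataset mean.
print(groovist_score([0.31, 0.12, 0.27, 0.05], [4.8, 2.1, 4.2, 1.5], 0.20))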
http://arxiv.org/abs/2310.17770v1
{ "authors": [ "Aditya K Surikuchi", "Sandro Pezzelle", "Raquel Fernández" ], "categories": [ "cs.AI", "cs.CL", "cs.CV", "cs.LG" ], "primary_category": "cs.AI", "published": "20231026202716", "title": "GROOViST: A Metric for Grounding Objects in Visual Storytelling" }
Vision-Based Reconfigurable Intelligent Surface Beam Tracking for mmWave CommunicationsSpecial thanks to the Sony Research Center in Lund for providing their reconfigurable intelligent surface for testing and research.This work has been funded by the Horizon Europe EU Framework Programme under the Marie Skłodowska-Curie grant agreement No. 101059091, the Horizon 2020 EU Framework Programme under Grant Agreement No. 861222, the Swedish Research Council (Grant No. 2022-04691), the strategic research area ELLIIT, Excellence Center at Linköping – Lund in Information Technology, and Ericsson.Juan Sanchez, Xuesong Cai, and Fredrik Tufvesson Department of Electrical and Information Technology, Lund University, Lund, Sweden {juan.sanchez, xuesong.cai, fredrik.tufvesson}@eit.lth.se2023-10-25 ======================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== Reconfigurable intelligent surfaces have emerged as a technology with the potential to enhance wireless communication performance for 5G and beyond. However, the technology comes with challenges in areas such as complexity, power consumption, and cost. This paper demonstrates a computer vision-based reconfigurable intelligent surface beamforming algorithm that addresses complexity and cost issues and analyzes the multipath components that arise from the insertion of such a device into the wireless channel. The results show that a reconfigurable intelligent surface can provide an additional multipath component. The power of this additional path can be critical in blockage scenarios, and a capacity increase can be perceived in both line-of-sight and non line-of-sight scenarios. Reconfigurable intelligent surfaces, computer vision, wireless channel, beamforming, multipath component.§ INTRODUCTIONThe ever-increasing need for higher data rates and the scarcity of the radio spectrum have been influential factors for the exploration of communication at higher frequencies with each new technology release. Propagation at radio bands such as millimeter wave (mmWave) or subterahertz starts to show characteristics typical of optical propagation, where line-of-sight (LOS) and specular reflections are the dominant propagation mechanisms <cit.>. Propagation at these frequencies also suffers from higher propagation loss and higher absorption effects. Massive multiple-input multiple-output (MIMO) systems have exploited a large number of antenna elements to enable coherent processing of electromagnetic waves (EM) and withstand the harsh wireless channel at higher frequencies at the transmitting and receiving ends.Recently, the paradigm has further evolved into a joint manipulation of the EM waves from the transmitting and receiving ends and from the environment itself, by using so-called reconfigurable intelligent surfaces (RISs). These RISs are devices that are thought to scatter incident EM waves at will depending on the desired configuration. 
The technology comes with practical challenges such as manufacturing cost, power consumption, processing complexity – which in turn affects delay and power consumption, among all. Multiple studies have shown that RISs can leverage the geometry of nodes and the environment to form a beam at low complexity. For this purpose, low-cost optical devices such as cameras can be used as sensors whose output can be processed using computer vision algorithms, revealing the geometry of the environment. The authors in <cit.> demonstrated that the geometric information extracted from cameras can improve the SNR and consequently the achievable bitrate by using different processing methods.The present paper demonstrates an arrangement of a computer vision-aided RIS that tracks the RX and TX location and updates its scattering pattern according to the beamforming algorithm presented in <cit.>. Furthermore, the effect of introducing such a structure into the wireless channel is analyzed. The rest of the paper is organized as follows: Section <ref> presents the RIS arrangement and measurement setup. Section <ref> establishes the post-processing and measurement analysis framework. Section <ref> presents the measurement results for the double-directional channel impulse response with and without RIS activation and parameterizes the wireless channels experienced in the measurements for each scenario. Section <ref> gives conclusive remarks and outlook for future studies.§ MEASUREMENT CAMPAIGNFor the measurement campaign, the mmWave channel sounder available at Lund University <cit.> was used together with a commercial RIS <cit.>, in an indoor scenario. The channel sounder measures the channel transfer functions (CTFs) of all antenna combinations available in a so-called snapshot. The channel sounder RF ends consisted of a rectangular TX array and an octagonal RX array, both dual polarized. There were 128 TX and 256 RX antenna elements, for a total of 32768 channels measured per snapshot. The RIS consisted of a 1-bit surface arranged in 16x16 antenna patches, which can be reconfigured via serial communication with another device. A form of geometry-based maximum ratio transmission (MRT) beamforming <cit.> was used to control the device's phase shifts upon visual input from a commercial, low-cost optical camera. Fig. <ref> shows the measurement setup and close-ups of the mmWave components used for this measurement campaign, while Table <ref> summarizes all relevant system parameters.The RX was moved along a 1 m straight trajectory normal to the RIS boresight at a constant speed of 1 cm/s, in a line-of-sight (LOS) scenario, as seen in Fig. <ref>. The trajectory included a number of static measurements at the beginning and end of the trajectory, meaning that the experienced propagation channel should not vary significantly for these positions. This observation implies that the number of available positions is larger than the expected 100 positions for such a trajectory and speed, and that these static measurements are used to validate the consistency in the measurement campaign. The same trajectory was measured for subscenarios where the RIS was active and inactive. The shadowed region in the top left corner of Fig. <ref> corresponds to a virtual blocker added after in the middle of the processing to generate a synthetic non line-of-sight (NLOS) scenario, as discussed in Section <ref>. 
Calibration was performed in the position axis to align the trajectories for an active and an inactive RIS after measurements were taken.§ MULTIPATH COMPONENT EXTRACTION AND CAPACITY ANALYSISThe propagation channel can be characterized by a transfer function. The channel transfer function, in turn, is considered as a superposition of multipath components (MPCs) modeled as follows,𝐇(f,s,m_T,m_R;Θ) = ∑_l=1^L 𝐛_m_R^T (ϕ_R,l,φ_R,l,f) [ γ_HH,l γ_HV,l; γ_VH,l γ_VV,l ]𝐛_m_T^T (ϕ_T,l,φ_T,l,f) ·𝐛(f) e^-j2 π f τ_l e^ -j2 π f ν_l t_s,m_T,m_R + 𝐍(f,s,m_T,m_R),where 𝐇 is the CTF dependent on frequency f, snapshot index s, transmit/receive antenna indices m_T, m_R, and structural parameters of the propagation channel Θ such as directions ϕ and φ, delay τ, Doppler ν and polarimetric coefficients γ. Furthermore, 𝐛_m_T,𝐛_m_R are mappings from departure / arrival directions to polarimetric antenna responses, for transmit and receive antenna elements m_T,m_R, respectively. In turn, 𝐛(f) is the mapping from frequency to polarimetric responses of the antenna elements. 𝐍 denotes white Gaussian noise. The CTF can also be seen as the Fourier transform of the channel impulse response.An implementation of the space-alternating generalized expectation maximization algorithm (SAGE) <cit.> was used to detect the MPCs present at each measured position and estimate their parameters. Using the double-directional polarimetric capabilities of the channel sounder <cit.> and the knowledge of the effective aperture distribution function (EADF) of the sounder antenna arrays <cit.>, it is possible to extract a very accurate representation of the characteristics of the propagation channel that is independent of the antenna architectures.An additional synthetic NLOS region was generated after the estimation of the MPCs, for the assessment of the potential of the RIS technology under different conditions. The synthetic scenario considered an additional blocker shadowing the regions of the measurement layout shown in Fig. <ref>. The insertion of blocking was virtually performed by removing the MPCs with associated angular departure or arrival values that would impinge on the hypothetical surface. This means that the virtual blocker was assumed to be a perfect absorber. The synthetic scenario relied on equation (<ref>) – with a reduced number of MPCs – for the reconstruction of the CTF. Fig. <ref> shows the stark contrast in the transition from a LOS to a NLOS scenario, which is an approximation of the smoother transition expected in a real scenario. The transition from LOS to NLOS is completed at position 40, implying that the NLOS scenario was bounded from position 40 and onward. Qualitative analysis of the estimated MPCs, as well as quantitative analysis of power and capacity values, were performed to assess the RIS capabilities in the LOS and NLOS scenarios. Since the RIS is, among other things, composed of a metal ground plane, a comparison between an active and an inactive RIS is equivalent to a comparison between a RIS and a metal plate located in the same position.The effect of introducing a RIS in the environment can be characterized in power by the ratioΔ_P = ||𝐇_on||^2/||𝐇_off||^2 = ∑_l=0^L ||γ_l,on||_F^2/||γ_l,off||_F^2,where Δ_P is the ratio between the experienced channel gain for a RIS-enabled setup with respect to a scenario with a metal plate in place, 𝐇 is the CTF from (<ref>), and ||γ_l||_F is the Frobenius norm of the polarimetric matrix for the lth channel path. 
Since the RIS used in this study was found to have polarization-dependent performance variations, we only considered the vertical to vertical polarization pair as a propagation mechanism and left the rest of the polarization pairs for further study. Therefore, (<ref>) can be simplified toΔ_P = ∑_l=0^L |γ_VV,l,on|^2/|γ_VV,l,off|^2,where |γ_VV,l,on|^2 is the squared magnitude of the vertical-vertical polarization coefficient of the channel for the lth path. Δ_P was calculated for each position across the RX trajectory, and a moving average filter of 5 position samples was implemented on Δ_P to filter out noise-like fast variations and better visualize the tendency of the channel gain ratio.For the calculation of capacities, it was assumed that channel state information was known at the transmitter (CSIT). Studies in time <cit.> and literature <cit.> have shown that optimal power allocation and consequent capacity maximization can be achieved by using the water filling algorithm. The achieved capacity per bandwidth unit can be computed asC = ∑_k=1^R_Hlog_2 ( 1 + P_k/σ_n^2σ_k^2 ),where R_H is the rank of the CTF 𝐇, σ_n^2 is the noise power density, σ_k^2 is the kth singular value of 𝐇, and P_k is the power allocated to the kth eigenmode of the channel satisfying the constraint ∑ P_k = P, with P being the total power allocated to a single subcarrier across all channel eigenmodes. The noise spectral density σ_n^2 was normalized to 0ḋB/Hz. The total power P was defined to be the same across all frequencies, and dictated by signal-to-noise ratio (SNR) values ranging from -10 dB to 30 dB. Capacities were calculated for every frequency bin within the sounder's bandwidth, every SNR value, and every trajectory position, and mean and standard deviation statistics were extracted across frequency. In addition, the ratio between the mean capacities for active and inactive RIS was calculated for each position and the SNR value. Finally, for both active and inactive RIS, the mean capacities are further averaged across the positions belonging to the LOS or NLOS scenarios and plotted against the SNR evaluation range.§ RESULTS AND DISCUSSION §.§ Multipath Component Analysis Fig. <ref> shows the estimated received power and azimuth of arrival (AOA) values of each detected MPC at all positions. The behavior of the entire surface plot remains fairly constant for some few first and last measurement instances, validating the consistency of the measurement results for stationary positions at the beginning and end of the trajectory. It is clear that the strongest MPC that starts around 220^∘ corresponds to the LOS path between the TX and the RX. This path decreases in angle along with the movement of the RX across the measurement trajectory, and disappears when transitioning from the LOS to the NLOS region. The MPC cluster correlated in azimuth with the LOS path also gradually disappears. As expected for the LOS region, rich scattering is observed across the whole angular domain, with numerous MPCs coming from various elevation angles when further evaluated in the rest of the directional domains. The MPC cluster with AOA values just below 150^∘ corresponds to reflections from the cabinets in Fig. <ref>. The evolution of these MPCs shows power-level variations correlated with the scattering characteristics of the different components in the cabinets, mainly wood and glass. 
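For completeness, the capacity evaluation described above relies on the standard water-filling power allocation; the snippet below is a generic sketch of that computation (not the code used for the measurements), with placeholder eigenmode gains and a normalized noise spectral density.

import numpy as np

def waterfilling_capacity(sigma_sq, total_power, noise_psd=1.0):
    # Allocate total_power across channel eigenmodes with squared singular
    # values sigma_sq via water-filling, then return the capacity in
    # bit/s/Hz: sum_k log2(1 + P_k * sigma_k^2 / noise_psd).
    gains = np.sort(np.asarray(sigma_sq, dtype=float))[::-1] / noise_psd
    k = len(gains)
    while k > 0:
        mu = (total_power + np.sum(1.0 / gains[:k])) / k   # water level
        p = mu - 1.0 / gains[:k]
        if p[-1] >= 0:        # weakest active eigenmode still receives power
            break
        k -= 1
    powers = np.zeros_like(gains)
    powers[:k] = mu - 1.0 / gains[:k]
    return np.sum(np.log2(1.0 + powers * gains))

# Example: eigenmode gains of a 4x4 channel and an SNR-determined power budget.
print(waterfilling_capacity([2.5, 1.2, 0.6, 0.1], total_power=10.0))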
Since these reflections are coming from a direction close in angle with the direction of movement of the RX, their AOA remains relatively constant with respect to other MPCs observed in the measurement. Finally, there is a second most dominant MPC cluster that approximately varies from 0^∘ to 60^∘. This cluster corresponds to reflections coming from both the laboratory whiteboard and the RIS. As this cluster is of special importance for the purpose of this paper, let us take a closer look at the evolution of these MPCs, and foremost, at the comparison between the active and inactive RIS scenarios.Fig. <ref> shows a close-up of the MPC estimates for the AOA range of interest, for both active and inactive RIS scenarios. In both cases, we observe an MPC cluster with AOA values approximately ranging from 25^∘ to 30^∘, corresponding to reflections from the whiteboard. As the whiteboard is made up of metal, the reflections coming from this cluster have a significant contribution at the power level. Moreover, the reflections from the whiteboard are not affected by the blocker introduced into the layout and remain independent of the visibility region in which the RX is located. The same can be said of the contribution from the RIS MPC, seen in Fig. <ref> as the relatively straight line ranging from 5^∘ to 52^∘ in AOA and "crossing" the MPC cluster coming from the whiteboard, and as the rather shorter/interrupted line in Fig. <ref> mainly with power contributions from 32^∘ to 40^∘ in AOA. As the RIS in its inactive state behaves as a metal surface, reflection occurs as a natural process for the latter AOA interval. Even though the area of the RIS is much smaller than the area where reflections from the whiteboard occur, the RIS is closer to the RX and the wavefront travels a shorter distance until it reaches the RX. This results in less spreading and lower pathloss, which in turn implies that the received power is higher than for longer path distances. The AOA values for the whiteboard's MPC cluster increase along with the trajectory except for some noticeable variations seen approximately between positions 60 and 80. These variations can be explained by the geometry of the measurement trajectory: At a certain position in the trajectory, the MPCs coming from the whiteboard and the RIS align, but the RIS is closer to the RX, hence it blocks the radiation coming from the whiteboard. However, diffraction happens around the edges of the RIS, which changes the AOA with which the RX receives some of the whiteboard's MPCs with respect to the expected AOA for no obstruction from the RIS. Hence, the oscillation around the straight linear evolution of the AOA and the reduction in received power for these MPCs is, at a level where the RIS MPC becomes the most dominant among all MPCs in the environment.Looking further at the RIS MPC, it is very clear that the MPC's angular range for the active RIS is greatly extended, to around 400% of the angular range of ametal surface of the same dimensions. The straight evolution line of the MPC is interrupted between 15^∘ and 25^∘ in AOA. This artifact is a manifestation of the blockage that the RX performs when crossing the path between the TX and the RIS on its trajectory.Let us now look at how the power relations between MPCs in the different scenarios across the visibility regions. Fig. <ref> shows the received power ratio between all MPCs for the active RIS with respect to the inactive RIS. 
There is a slight improvement of the power ratio at the beginning of the trajectory followed by a rather unitary relation for the remaining positions belonging to the LOS region. This relation shows that the power enhancement that the RIS brings in a small indoor LOS environment is small, but noticeable in this setup. In contrast, the improvement induced in the NLOS region of the measurements is much higher and stays for the major part of the trajectory over 1, with most of the peaks above 1.4 and going up to 2 in the best case. The overall improvement corresponds to logarithmic gains of up to 0.8 dB for the LOS region and up to 3 dB for the NLOS region in our setup. Notice that for the trajectory around position 80 we see a drop in the ratio, going as low as 0.7. Two factors are involved in this result. The first factor is the natural reflection of the RIS that occurs in this position range for the scenario with inactive RIS. The only gain from an active RIS would be a focusing effect, which is not as pronounced because the beamforming algorithm only took an approximate value for the distance between the RX and the RIS for the whole trajectory. Clearly, for such a trajectory, the distance between the RX and the RIS changes, and beamforming loses focus. The use of multiple optical camera lenses is expected to increase the beamforming gain by accurately estimating the distance to the RX. The second factor is the absence of the strongest MPC that comes from the reflection of the whiteboard at position 82 in Fig. <ref>, which reduces the entire power contribution for the active RIS scenario. After a closer inspection, some disturbances to the smooth evolution of the mentioned MPC can be seen around the area, which happens to be very close to the area where the whiteboard reflection impinges on the RIS. Further studies would be needed to assess whether the activation of the RIS is playing a role in the disturbances observed for the whiteboard's MPC cluster. §.§ Capacity Analysis For the capacity analysis, a normalization of the CTFs across frequency bins and antenna indices was carried out, thus removing the additional gain from the active RIS seen in Fig. <ref> and allowing an analysis of the small-scale effects of the RIS. Fig. <ref> shows the mean capacity across frequency points, for different positions and SNR values. For low SNR values the ratio remains relatively unitary, but as the SNR increases, there are more pronounced variations in the regions where the ratio is greater or less than 1. There is correspondence between the increases in mean capacity around positions 20, 60, 90 and 120, and the extension of the RIS MPC when going from an inactive to an active state in Fig. <ref>.Furthermore, averaging the capacities on all the positions belonging to a certain visibility region, Fig. <ref> shows the mean capacity between frequencies and positions for different SNR values, visibility regions, and active/inactive RIS scenarios. The overall gap between mean capacities for the LOS and NLOS regions is very clear, whereas the gap between mean capacities of active and inactive RIS scenarios is less evident, but still noticeable. An evaluation of the relative ratio between capacities shows that there are mean relative capacity increases of 0.29% and 0.00% and maximum relative capacity increases of 3.21% and 2.20% for the active RIS with respect to the inactive RIS across the entire SNR range, for the LOS and NLOS regions, respectively. 
This indicates that although the RIS influences a greater overall power increase in the NLOS scenario, its use to slightly enhance capacity in mmWave communications has a similar effect for both LOS and NLOS scenarios.§ CONCLUSIONSThis paper showed a detailed analysis of the detected multipath components (MPCs) of a measured indoor scenario with a virtual blocker for the line-of-sight (LOS) and non-LOS (NLOS) regions, where a reconfigurable intelligent surface (RIS) performing beam tracking sought to enhance communication at mmWave frequencies. Beam tracking was performed under a low-complexity geometry-based beamforming algorithm that exploits the angular and range information obtained by visual sources – a low-cost optical camera in this case. The measurement campaign considered positions across a straight trajectory where two scenarios were depicted, namely, one with an active RIS and the other with an inactive RIS, equivalent to placing a metal surface of the same dimensions. The results show that the activation of the RIS induces a significant extension of the visibility of the associated MPC over the AOA range, to the point where the RIS component becomes the dominant MPC for propagation for some positions in the trajectory. Furthermore, measurements show that the inclusion of a RIS could increase the overall received power and thus the channel propagation gain by up to 3 dB. This indicates that the implementation of RIS technology can positively influence the propagation environment, and that this can be done at low complexity and low cost.IEEEtran
http://arxiv.org/abs/2310.18012v1
{ "authors": [ "Juan Sanchez", "Xuesong Cai", "Fredrik Tufvesson" ], "categories": [ "eess.SP" ], "primary_category": "eess.SP", "published": "20231027094009", "title": "Vision-Based Reconfigurable Intelligent Surface Beam Tracking for mmWave Communications" }
Mixed pairwise cross intersecting families (I)This work is supported byNSFC (Grant No. 11931002).E-mail addresses: [email protected] (Yang Huang), [email protected] (Yuejian Peng, corresponding author). Yang Huang, Yuejian Peng^† School of Mathematics, Hunan University Changsha, Hunan, 410082, P.R. China2023-10-25 ==============================================================================================================================================================================================================Multiple testing is an important research direction that has gained major attention in recent years. Currently, most multiple testing procedures are designed with p-values or Local false discovery rate (Lfdr) statistics. However, p-values obtained by applying probability integral transform to some well-known test statistics often do not incorporate information from the alternatives, resulting in suboptimal procedures. On the other hand, Lfdr based procedures can be asymptotically optimal but their guarantee on false discovery rate (FDR) control relies on consistent estimation of Lfdr, which is often difficult in practice especially when the incorporation of side information is desirable. In this article, we propose a novel and flexibly constructed class of statistics, called ρ-values, which combines the merits of both p-values and Lfdr while enjoys superiorities over methods based on these two types of statistics. Specifically, it unifies these two frameworks and operates in two steps, ranking and thresholding. The ranking produced by ρ-values mimics that produced by Lfdr statistics, and the strategy for choosing the threshold is similar to that of p-value based procedures. Therefore, the proposed framework guarantees FDR control under weak assumptions; it maintains the integrity of the structural information encoded by the summary statistics and the auxiliary covariates and hence can be asymptotically optimal. We demonstrate the efficacy of the new framework through extensive simulations and two data applications.Keywords: Asymptotically optimal, False discovery rate, Local false discovery rate, p-value, ρ-value, Side information. § INTRODUCTION With the rise of big data and the improved data availability, multiple testing has become an increasingly pertinent issue in modern scientific research. Multiple testing involves the simultaneous evaluation of multiple hypotheses or variables within a single study, which may lead to an increased risk of false positives if statistical methods are not appropriately employed. A popular notion for Type I error in the context of multiple testing is the false discovery rate (FDR, <cit.>), which refers to the expected proportion of false positives. Since its introduction by <cit.>, FDR quickly becomes a key concept in modern statistics and a primary tool for large-scale inference for most practitioners. On a high level, almost all testing rules that control FDRoperate in two steps: first rank all hypotheses according to some significance indices and then reject those with index values less than or equal to some threshold. In this paper, we propose an optimal multiple testing framework based on a new concept called ρ-values, which unifies the widely employed p-value and Local false discovery rate (Lfdr) based approaches and in the meanwhile enjoys superiorities over these two types of methods. In addition, the ρ-value framework has a close connection with the e-value based approaches and therefore enjoys flexibility of data dependency. 
In what follows, we first provide an overview of conventional practices and identify relevant issues. Next, we introduce the proposed framework and then highlight the contributions of our approach in detail. §.§ Conventional practices and issues Some of the most popular FDR rules use p-value as significance index for ranking <cit.>. The standard way of obtaining p-values is to apply a probability integral transform to some well-known test statistics. For example, <cit.> uses a permutation test, <cit.> employs a Mann–Whitney U test and <cit.> adopts a t-test. However, the p-value based methods can be inefficient because the conventional p-values do not incorporate information from the alternative distributions<cit.>. The celebrated Neyman–Pearson lemma states that the optimal statistic for testing a single hypothesis is the likelihood ratio, while an analog of the likelihood ratio statistic for multiple testing problems is the Lfdr <cit.>. It has been shown that a ranking and thresholding procedure based on Lfdr is asymptotically optimal for FDR control <cit.>. Nevertheless, the validity of many Lfdr based methods relies crucially on the consistent estimation of the Lfdr statistics <cit.>, which can be difficult in practice <cit.>. To overcome this, the weighted p-value based approaches are proposed to emulate the Lfdr method; see, for example, <cit.>, <cit.> and <cit.>. However, all of those Lfdr-mimicking procedures either require strong model assumptions or are suboptimal. §.§ Our methods and contributions To address the aforementioned issues arising from p-value and Lfdr frameworks, this article proposes a new concept called ρ-value, which takes the form of likelihood ratios and allows wide flexibility in the choice of density functions. Based on such ρ-values (or weighted ρ-values, similarly as weighted p-values), we aim to develop a new flexible multiple testing framework that unifies p-value based and Lfdr based approaches. Specifically, the proposed ρ-value based methods also operate in two steps: first rank all hypotheses according to (weighted) ρ-values (with the goal of mimicking Lfdr ranking), then reject those hypotheses with (weighted) ρ-values less than or equal to some threshold, where the threshold is determined similarly as in p-value based procedures. Compared to existing frameworks, the new framework has several advantages. First, if the ρ-values are carefully constructed, then their ranking coincides with that produced by Lfdr statistics. Thus, methods based on ρ-values can be asymptotically optimal. Second, the strategies for choosing the threshold for ρ-value based methods are similar to those for p-value based methods. Therefore, the proposed approaches enjoy theoretical properties similar to methods based on p-values. Moreover, the FDRguarantee of the ρ-value approaches does not require consistent estimations of Lfdr statistics, and hence the proposed approaches are much more flexible than the Lfdr framework. Third, compared to Lfdr methods, side information can be easily incorporated into ρ-value based procedures through the simple weighting scheme to improve the ranking of the hypotheses. Finally, we provide a unified view of p-value based and Lfdr based frameworks. In particular, we show that these two frameworks are not as different as suggested by previous works <cit.>. §.§ Organization The paper is structured as follows. Section <ref> presents the problem formulation. 
In Section <ref>, we introduce ρ-values and provide several examples of ρ-value based procedures. Sections <ref> and <ref> present numerical comparisons of the proposed methods and other competing approaches using simulated and real data, respectively. More discussions of the proposed framework are provided in Section <ref>. Technical proofs are collected in the Appendix. § PROBLEM FORMULATION To motivate our analysis, suppose {X_i}_i=1^m are independent summary statistics arising from the following random mixture model: θ_iiid∼Ber(π), X_i|θ_iind∼ (1-θ_i)f_0+θ_i f_1, which has been widely adopted in many large-scale inference problems <cit.>. Note that, for simplicity, we assume homogeneous alternative density f_1 in Model (<ref>) for now, and it will be extended to heterogeneous scenarios in later sections. Upon observing the summary statistics {X_i}_i=1^m, the goal is to simultaneously test the following m hypotheses: H_0, i: θ_i=0 versusH_1, i: θ_i=1, i=1,…,m. Denote by δ=(δ_1, ⋯, δ_m)∈{0,1}^m an m-dimensional decision vector, where δ_i=1 means we reject H_0,i, and δ_i=0 otherwise. In large-scale multiple testing problems, false positives are inevitable if one wishes to discover non-nulls with a reasonable power. Instead of aiming to avoid any false positives, <cit.> introduces the FDR, i.e., the expected proportion of false positives among all selections, FDR(δ) = 𝔼[∑_i=1^m(1-θ_i)δ_imax{∑_i=1^mδ_i, 1}], and a practical goal is to control the FDR at a pre-specified significance level. A closely related quantity of FDR is the marginal false discovery rate (mFDR), defined by mFDR(δ) =𝔼{∑_i=1^m(1-θ_i)δ_i}𝔼(∑_i=1^mδ_i). Under certain first and second-order conditions on the number of rejections, the mFDR and the FDR are asymptotically equivalent <cit.>, and the main considerations for using the mFDR criterion are to derive optimality theory and facilitate methodological developments. An ideal testing procedure should control the FDR (mFDR) and maximize the power, which is measured by the expected number of true positives (ETP): (δ)=𝔼(∑_i=1^mθ_i δ_i). We call a multiple testing procedure valid if it controls the mFDR asymptotically at the nominal level α and optimal if it has the largest ETP among all valid procedures. We call δ asymptotically optimalif (δ)/ (δ')≥1+o(1) for all decision rule δ' that controls mFDR at the pre-specified level α asymptotically. A natural approach to the above problem is to first compute the p-value p_i for each H_0, i and then reject those H_0, i with p-value less than certain threshold.Indeed, this is the approach adopted by the first ever FDR control algorithm commonly known as the BH procedure <cit.>. Suppose we want to simultaneously test m null hypotheses while controlling FDR level at α. The BH procedure first ranks the n p-values p_(1)≤ p_(2)≤…≤ p_(n) and then reject all null hypotheses with p-value less than or equal to p_(k) where k=max_j{p_(j)≤jα/n}.The rationale for choosing this threshold is that if thep-values are assumed to follow (0,1) under the null, then the proportion of false positives of the rules that reject all null hypotheses with p-value less than or equal to p_(j) can be estimated conservatively by mp_(j)/j. To control FDR at level α we need to ensure np_(j)/j≤α. To maximize power we want to reject as many hypotheses as possible thus the choice k=max_j{p_(j)≤α j/m}. However, this seemingly reasonable procedure, which is based on p-values, fails to adapt to the structure of the alternative distributions. 
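For concreteness, a minimal sketch of the BH step-up rule just described is given below; the p-values in the example are synthetic and the implementation is generic rather than tied to any particular dataset.

import numpy as np

def bh_procedure(pvals, alpha=0.05):
    # Benjamini-Hochberg step-up rule: reject all hypotheses with p-value at
    # most p_(k), where k = max{j : p_(j) <= alpha * j / m}.
    pvals = np.asarray(pvals, dtype=float)
    m = len(pvals)
    order = np.argsort(pvals)
    below = pvals[order] <= alpha * np.arange(1, m + 1) / m
    reject = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])   # largest index satisfying the bound
        reject[order[: k + 1]] = True
    return reject

# Toy example: a few very small p-values mixed with uniform noise.
rng = np.random.default_rng(1)
p = np.concatenate([rng.uniform(0, 0.001, 5), rng.uniform(0, 1, 95)])
print(bh_procedure(p, alpha=0.1).sum(), "rejections")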
In what follows, we present a ρ-value based approach to exploit structural information from the alternatives. The resulting procedures have significantly higher power compared to their p-value based counterparts. § METHODOLOGY In this section, we first introduce the concept of ρ-value and discuss its connection with e-value. Then, several ρ-value based testing procedures will be proposed in turn in Sections <ref> - <ref>. Specifically, we will respectively introduce oracle and data-driven ρ-BH procedures, weighted ρ-BH procedures, as well as ρ-value methods with side information. §.§ ρ-value and its connection with e-value As introduced by the previous sections, ρ-value is an analog of the likelihood ratio, and we now present its rigorous definition below. Suppose X is a summary statistic and f_0(·) is the density of X under the null, then ρ≡ f_0(X)/g(X) is a ρ-value of X, where g(·) is any density function. It can be seen from the above definition that, the ρ-values are broadly defined. In particular, ρ-value can be viewed as a generalization of the likelihood ratio. The key difference is that the denominator of ρ-value can be any density function and is no longer required to be f_1(X). In addition, ρ-value has a close connection with e-value, which is defined below. A non-negative random variable E is an e-value if 𝔼(E)≤ 1, where the expectation is taken under the null hypothesis. Using Markov inequality, it is straightforward to show that the reciprocal of an e-value is a p-value<cit.>. Note that 𝔼(1/ρ)=∫g(x)/f_0(x)f_0(x)dx=∫ g(x)dx=1, which means that 1/ρ is an e-value and a ρ-value is a special p-value. Therefore, one can directly use ρ-values as the inputs for the BH procedure and obtain an e-BH procedure <cit.>. One attractive feature of the e-BH procedure is that it controls FDR at the target level under arbitrary dependence. However, in practice the e-BH procedure is usually very conservative. To see why, let c(·) be the distribution function of ρ_i≡ f_0(X_i)/g(X_i) under H_0,i and assume that ρ_i's are independent. Then if we reject the hypotheses with ρ_i≤ t, the e-BH procedure uses mt as a conservative estimate of the number of false positives despite the fact that ρ_i no longer follows (0,1) under the null. In the following section, we introduce a novel ρ-BH procedure which improves e-BH by using a tighter estimate of the number of false positives. §.§ The ρ-BH procedure For simplicity, we assume that the ρ-values ρ_i≡ f_0(X_i)/g(X_i) are independent under the null and there are no ties among them. Recall c(·) is the null distribution function of ρ_i's. If the null density f_0 is known, then c(·) can be easilyestimated by the empirical distribution of the samples generated from f_0; the details will be provided in the simulation section. Consider the decision rule that rejects H_0,i if and only if ρ_i≤ t, i=1,…,m, then a tight estimate of the number of false positives is m(1-π)c(t), where π is defined in Model (<ref>). Hence, a reasonable and ideal threshold t should control the estimated false discovery proportion (FDP) at level α, i.e., m(1-π)c(t)/#{i: ρ_i≤ t}∨1≤α. To maximize power, we can choose the rejection threshold to be ρ_(k), where k = max_1 ≤ j ≤ m{c(ρ_(j))≤α j/m(1-π)} with ρ_(j) the j-th order statistic. We call this procedure the ρ-BH procedure and summarize it in Algorithm <ref>. It is important to note that, since c(·) is a monotonically increasing function,the rankings produced by ρ_i andc(ρ_i) are identical. Also note that c(ρ_i)∼(0,1) under the null. 
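As a concrete illustration of Algorithm <ref>, the following sketch implements the oracle ρ-BH rule for a Gaussian mixture example, approximating the null distribution c(·) of the ρ-values by Monte Carlo samples from f_0 as suggested above; the densities and parameter values are chosen only for illustration.

import numpy as np
from scipy import stats

def rho_bh(x, f0, g, pi, alpha, n_null=100_000):
    # Oracle rho-BH: rho_i = f0(x_i)/g(x_i); reject rho_i <= rho_(k) with
    # k = max{j : c(rho_(j)) <= alpha*j/(m*(1-pi))}, where the null cdf c(.)
    # is estimated from Monte Carlo draws under f0.
    x = np.asarray(x, dtype=float)
    rho = f0.pdf(x) / g.pdf(x)
    z = f0.rvs(size=n_null, random_state=0)
    null_rho = np.sort(f0.pdf(z) / g.pdf(z))
    c = np.searchsorted(null_rho, rho, side="right") / n_null   # empirical null cdf
    m = len(x)
    order = np.argsort(rho)
    ok = c[order] <= alpha * np.arange(1, m + 1) / (m * (1 - pi))
    reject = np.zeros(m, dtype=bool)
    if ok.any():
        reject[order[: np.max(np.nonzero(ok)[0]) + 1]] = True
    return reject

# Example: N(0,1) nulls vs N(2,1) alternatives, with g taken to be the alternative density.
rng = np.random.default_rng(2)
theta = rng.random(2000) < 0.2
x = np.where(theta, rng.normal(2.0, 1.0, 2000), rng.normal(0.0, 1.0, 2000))
print(rho_bh(x, stats.norm(0, 1), stats.norm(2, 1), pi=0.2, alpha=0.1).sum(), "rejections")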
It follows that the ρ-BH procedure is in fact equivalent to the original BH procedure with c(ρ_i) as the p-value inputs and α/(1-π) as the target FDR level. Consequently, the ρ-BH procedure enjoys the same FDR guarantee as the original BH procedure, as presented in Theorem <ref>, and has power advantage over e-BH in the meantime. Assume that the null ρ-values are mutually independent and are independent of the non-null ρ-values, then FDR_Algorithm <ref>≤α. It is worthwhile to note that, one can use any predetermined g(·) to construct ρ-values and Theorem <ref> always guarantees FDR control if X_i's are independent under the null. Recall the Lfdr statistic is defined as _i≡ℙ(θ_i=0|X_i)= (1-π)f_0(X_i)/(1-π)f_0(X_i)+π f_1(X_i), and it is shown in <cit.> that a ranking and thresholding rule based on Lfdr is asymptotically optimal among all mFDR control rules. Thus, ideally we would like to adopt {f_0(X_i)/f_1(X_i), i=1,…,m} as the ρ-values since the ranking produced by f_0(X_i)/f_1(X_i)'s is identical to that produced by Lfdr statistics. In fact, with this choice of ρ-values,the ρ-BH procedure is asymptotically optimal under some mild conditions as stated by the next theorem. Let ρ_i=f_0(X_i)/f_1(X_i) and suppose X_i's are independent. Denote by δ_ρ the ρ-BH rule and let δ be any other rule with (δ)≤α asymptotically. Suppose the following holds * mℙ(ρ_i ≤απ/(1-π)(1-α))→∞. Then we have (δ_ρ)/(δ)≥ 1+o(1). Since the rankings produced by Lfdr statistics and ρ-values are identical, the rule δ(ν)={𝕀(ρ_i≤ν)}_i=1^m is equivalent to the rule δ'(ν')={𝕀(Lfdr_i≤ν')}_i=1^m for some ν'. It is shown in <cit.> that the following procedure {𝕀(Lfdr_i≤Lfdr_(k))}_i=1^m, where k=max{j: 1/j∑_i=1^jLfdr_(i)≤α}, Lfdr_1≤⋯≤Lfdr_(m) is asymptotically optimal for maximizing ETP while controlling mFDR≤α and Lfdr_(k) converges to some fixed threshold t^*≥α in probability. If the number of rejections by (<ref>) goes to infinity as m→∞, then it guaranteesthat mℙ(Lfdr_i≤α)→∞, which is equivalent to mℙ(ρ_i≤απ/(1-π)(1-α))→∞ with ρ_i=f_0(X_i)/f_1(X_i). Hence, Assumption <ref> is mild. §.§ The data-driven ρ-BH procedure In practice, f_1 and π are usually unknown and need to be estimated from the data. The problem of estimating non-null proportion has been discussed extensively in the literatures <cit.>. To ensure valid mFDR control, we require the estimator π̂ to be conservative consistent, defined as follows. An estimator π̂ is a conservative consistent estimator of π if it satisfies 0≤π̂P→π̃≤π. One possible choice of such π̂ is the Storey estimator as provided by the following proposition. The estimator π̂^τ=1-#{i:c(ρ_i)≥τ}/{m(1-τ)} proposed in <cit.> is conservative consistent for any τ satisfying 0≤τ≤ 1. The problem of estimating f_1 is more complicated. If we use the entire sample {X_i}_i=1^m to construct f̂_1 and let ρ_i=f_0(X_i)/f̂_1(X_i), then ρ_i's are no longer independent even if X_i's are. One possible strategy to circumvent this dependence problem is to use sample splitting. More specifically, we can randomly split the data into two disjoint halves and use the first half of the data to estimate the alternative density for the second half, i.e., f̂_1^ (2) (e.g., we can use the estimator proposed in <cit.>),then the ρ-values for the second half can be calculated by f_0(X_i)/f̂_1^ (2)(X_i). Hence, when testing the second half of the data, f̂_1^ (2) can be regarded as predetermined and independent of the data being tested. 
The decisions on the first half of the data can be obtained by switching the roles of the first and the second halves and repeating the above steps. If the FDR is controlled at level α for each half, then the overall mFDR is also controlled at level α asymptotically. We summarize the above discussions in Algorithm <ref> and Theorem <ref>. A natural question for the data-splitting approach is whether it will negatively impact the power. Suppose that f̂_1^(1), f̂_1^(2) are consistent estimators for some function g, and π̂_1, π̂_2 are consistent estimators for some constant π̃. Denote by t_α the threshold selected by Algorithm <ref> with g and π̃ as inputs, on the full data. Then it is expected that the thresholds t̂_1, t̂_2 selected by Algorithm <ref> for each half of the data both converge to t_α. Hence, the decision rules on both halves converge to the rule on the full data. Therefore, the decision rule based on data-splitting is asymptotically as powerful as the rule on the full data. Assume that X_i's are independent. Denote by {ρ̂_d,i}_i=1^m_d, ρ̂_d,(k_d) and π̂_d the ρ-values, selected thresholds and the estimated alternative proportions obtained from Algorithm <ref>, for the first and second halves of the data respectively, d=1,2. Denote by ĉ_d the null distribution function for ρ̂_d,i. Suppose 0≤π̂_dP→π̃_d≤π and let Q̃_d(t) = (1 - π̃_d)ĉ_d(t)/ℙ(ρ̂_d,i≤ t) and t_d,L = sup{t>0: Q̃_d(t)≤α}, d=1,2. Assume the following hold * ρ̂_d,(k_d)≥νπ̂_d/1-π̂_d and ℙ(ρ̂_d,i≤νπ̂_d/1-π̂_d) > c, for some constants ν, c>0; * lim sup_t→ 0^+Q̃_d(t)<α, lim inf_t→∞Q̃_d(t)>α; * inf_t ≥ t_d,L + ϵ_tQ̃_d(t) ≥α + ϵ_α, and Q̃_d(t) is strictly increasing in t∈(t_d,L - ϵ_t, t_d,L + ϵ_t), for some constants ϵ_α, ϵ_t > 0. Then we have lim_m→∞mFDR_Algorithm <ref>≤α. Theorem <ref> and Remark <ref> imply that, in the oracle case when the alternative density and the non-null proportion are estimated by the truths, the threshold of the ρ-values should be at least απ/(1-π)(1-α). Since π̂_d's are conservative consistent, π̂_d/1-π̂_d converges in probability to a number less than π/1-π. Therefore, the first part of <ref> is mild. Moreover, by setting ν equal to some fixed number, say α/1-α, the first part of <ref> can be easily checked numerically. The second part of <ref> is only slightly stronger than the condition that the total number of rejections for each half of the data is of order m. It is a sufficient condition to show that the estimated FDP, m_d(1-π̂_d)ĉ_d(t)/∑_i=1^m_d𝕀(ρ̂_d,i≤ t), is close to Q̃_d(t), and it can be easily relaxed if π̂_d satisfies a certain convergence rate. <ref> is also a reasonable condition: it excludes the trivial cases where no null hypothesis can be rejected or all null hypotheses are rejected. If Q̃_d's are continuous, then the first part of <ref> is implied by <ref> and the definition of t_d,L. The second part of <ref> can be easily verified numerically and it is also mild under the continuity of Q̃_d. Finally, all of the above conditions are automatically satisfied in the oracle case.
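To illustrate the data-splitting scheme of Algorithm <ref>, the schematic sketch below reuses the rho_bh helper and the simulated data x from the earlier ρ-BH sketch. The plug-in estimates of π and of the alternative density are deliberately crude placeholders (not the estimators cited in the text, such as the Storey estimator), so the sketch conveys the splitting logic rather than a recommended estimation strategy.

import numpy as np
from scipy import stats

def datadriven_rho_bh(x, f0, alpha, rng=None):
    # Schematic data-driven rho-BH: split the sample, estimate the alternative
    # density and the non-null proportion on one half, run the estimated
    # rho-BH rule on the other half, then swap roles and combine the rejections.
    rng = np.random.default_rng(rng)
    m = len(x)
    idx = rng.permutation(m)
    halves = (idx[: m // 2], idx[m // 2:])
    reject = np.zeros(m, dtype=bool)
    for train, test in (halves, halves[::-1]):
        # Crude plug-in estimates computed on the training half only.
        pi_hat = max(np.mean(np.abs(x[train]) > 2) - 2 * stats.norm.sf(2), 0.01)
        mu_hat = np.mean(x[train][x[train] > np.quantile(x[train], 0.9)])
        g_hat = stats.norm(mu_hat, 1.0)   # simplistic stand-in for the estimated f_1
        reject[test] = rho_bh(x[test], f0, g_hat, pi=pi_hat, alpha=alpha)
    return reject

rej_dd = datadriven_rho_bh(x, stats.norm(0, 1), alpha=0.1, rng=3)
print(rej_dd.sum(), "rejections")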
We provide some additional intuition for the assumptions of Theorem <ref>. In the ideal case when f̂_1^(1)=f̂_1^(2)=f_1 and π̂_1=π̂_2=π, we want 1-π̂_d/π̂_dρ̂_d,i to be close to Lfdr_i/(1-Lfdr_i), so that ρ̂_d,i and Lfdr_i give similar rankings. It is shown in <cit.> and <cit.> that the ideal choice of threshold for the Lfdr statistics is greater than or equal to α. Therefore, the ideal threshold for 1-π̂_d/π̂_dρ̂_d,i should be at least α/(1-α), and hence the first parts of <ref> and <ref> are mild. The second parts of <ref> and <ref> are satisfied when both f_0 and f̂_1 are mixtures of Gaussian distributions; since Gaussian mixtures have the universal approximation property <cit.>, the second parts of <ref> and <ref> are also mild. In more detail: * The former of the first part assumes that the threshold ρ̂_d,(k_d) is of constant order, which holds in the oracle case; it requires that f̂_d(·) is not too misleading compared to f_1(·), and this is supported by our simulation and real-data results. The latter of the first part is analogous to Assumption <ref>, but slightly stronger because π is approximated by π̂_d; the required order can be weakened if a convergence rate is assumed for π̂_d. * The second part ensures that the threshold t_d,L is of constant order. This assumption is weak, as it only requires that the null ρ-values do not concentrate near 0 and that π̃_d is not too large. * The third part is used to establish the convergence of the threshold. The condition on the infimum requires that Q̃_d(t) is not too close to α outside a neighborhood of t_d,L, and the strict monotonicity ensures that Q̃_d(t) is not too flat within a small neighborhood of t_d,L. * All of these assumptions are satisfied in oracle cases such as Theorems <ref> and <ref>, as shown in the proofs. §.§ ρ-BH under dependence We have mentioned in Section <ref> that a ρ-value is the reciprocal of an e-value. Therefore, if {ρ_i}_i=1^m are dependent with possibly complicated dependence structures, one can simply employ {ρ_i}_i=1^m as the inputs of the BH procedure and the resulting FDR can still be appropriately controlled at the target level. Alternatively, one can deal with arbitrary dependence by the proposal in <cit.>. They have shown that the BH procedure with target level α controls FDR at level α S(m), where S(m)=∑_i=1^m1/i≈log(m) is known as the Benjamini-Yekutieli (BY) correction. Since the ρ-BH procedure is equivalent to the BH procedure with c(ρ_i) as p-values and target FDR level α/(1-π), we also have the following theorem on the BY correction. By setting the target FDR level equal to α/{(1-π)S(m)}, we have FDR_Algorithm <ref>≤α under arbitrary dependence. Note that if one chooses to use the BY correction or the e-BH procedure, then the data splitting step in the data-driven procedures is no longer necessary. We also remark that we can extend our FDR control results under independence in the previous sections to weakly dependent ρ-values, and the same technical tools as in <cit.> can be employed to achieve asymptotic error rate control. The current article focuses on the introduction of the new testing framework based on ρ-values and skips the discussion of the weakly dependent scenarios.
We have two options when facing unknown dependenceamong . The first one is to use the Benjamini–Yekutieli correction <cit.>, we describe it in the next theorem. Under arbitrary dependence among , BH+ procedure controls FDR at level α∑_l=1^m1/l. Another option is to use {ρ̂_i}_i=1^m as input to the the original BH procedure. Sample splitting is not necessary for either option. §.§ Weighted ρ-BH procedure Similar to incorporating prior information via a p-value weighting scheme<cit.>, we can also employ such weighting strategy in the current ρ-value framework. Let {w_i}_i=1^m be a set of positive weights such that ∑_i=1^mw_i=m. The weighted BH procedure proposed in <cit.> uses p_i/w_i's as the inputs of the original BH procedure. <cit.> proves that, if p_i's are independent and {w_i}_i=1^m are independent of {p_i}_i=1^m conditional on {θ_i}_i=1^m, then the weighted BH procedure controls FDR at level less than or equal to α. Following their strategy, we can apply the weighted BH procedure to c(ρ_i)'s and obtain the same FDR control result. However, such procedure might be suboptimal as will be explained in the following section. Alternatively, we derive a weighted ρ-BH procedure and the details are presented in Algorithm <ref>. Note that {ρ_i/w_i} in Algorithm <ref> produces a different ranking than {c(ρ_i)/w_i}, which may improve the power of the weighted p-value procedure with proper choices of ρ-values and weights. On the other hand, the non-linearity of c(·) imposes challenges on the theoretical guarantee for the mFDR control of Algorithm <ref> compared to that of <cit.>, and we derive the following result based on similar assumptions as in Theorem <ref>. Assume that {X_i, θ_i}_i=1^m are independent. Denote by Q(t) = ∑_i=1^m (1 - π) c(w_i t)/𝔼{∑_i=1^m 𝕀(q_i ≤ t)} and t_L = sup{t>0: Q(t)≤α}. Based on the notations from Algorithm <ref>, suppose the following hold *q_(k)≥ν and ∑_i=1^m ℙ(q_i ≤ν) →∞ as m→∞, for some ν > 0; *lim sup_t→ 0^+Q(t)<α; lim inf_t→∞Q(t)>α; *inf_t ≥ t_L + ϵ_tQ(t) ≥α + ϵ_α, Q(t) is strictly increasing in t∈(t_L - ϵ_t, t_L + ϵ_t), for some constants ϵ_α, ϵ_t > 0. Then we have lim_m→∞mFDR_Algorithm <ref>≤α. It is worthwhile to note that, <cit.> requires ∑_i=1^mw_i=m, which makes the weighted p-value procedure conservative. In comparison, Algorithm <ref> no longer requires such condition, and it employs a tight estimate of the FDP that leads to a more powerful testing procedure. When the oracle parameters are unknown, we can similarly construct a data-driven weighted ρ-BH procedure with an additional data splitting step as in Algorithm <ref>. Due to the space limit, the details are presented in Algorithm <ref> in Section <ref> of the Appendix. §.§ ρ-BH with side information In this section, we propose a ρ-BH procedure that incorporates the auxiliary information while maintaining the proper mFDR control. In many scientific applications, additional covariate informations such as the patterns of the signals and nulls are available. Hence, the problem of multiple testing with side information has received much attention and is becoming an active area of research recently <cit.>. As studied in the aforementioned works, proper use of such side information can enhance the power and the interpretability of the simultaneous inference methods. Let X_i denote the primary statistic and s_i∈ℝ^l the side information. Then we model the data generation process as follows. θ_i|s_iind∼Ber(π(s_i)),i=1,…,m,X_i|s_i,θ_iind∼ (1-θ_i)f_0(·|s_i)+θ_i f_1(·|s_i), i=1,…,m. 
Upon observing {(X_i,s_i)}_i=1^m, we would like to construct a decision rule δ that controls mFDR based on ρ-values. As before, we assume the null distributions f_0(·|s_i) are known, and we define the ρ-values by ρ_i=f_0(X_i|s_i)/g(X_i|s_i) for some density function g(·|s_i). Let η:ℝ^l→ (0,1) be a predetermined function and c_i(·) be the null distribution of ρ_i. Then we incorporate the side information through a ρ-value weighting scheme by choosing an appropriate function η(·). The details are summarized in Algorithm <ref>. In contrast to the ρ-BH procedure,Algorithm <ref> is no longer equivalent to the BH procedure with {c_i(ρ_i)/ w_i}'s as the inputs since the rankings produced by {c_i(ρ_i) /w_i}'s and {ρ_i /w_i}'s are different. The ideal choice of g(·|s_i) is again f_1(·|s_i), while the ideal choice of η(·) is π(·), the rationale is provided as follows. Define the conditional local false discovery rate (Clfdr, <cit.>) as_i≡{1-π(s_i)}f_0(X_i|s_i)/{1-π(s_i)}f_0(X_i|s_i)+π(s_i)f_1(X_i|s_i). <cit.> shows that a ranking and thresholding procedure based on Clfdr is asymptotically optimal for FDR control. Note that if we take g(·|s_i) to be f_1(·|s_i) and η(·) to be π(·), then the ranking produced by ρ_i/w_i's is identical to that produced by Clfdr statistics. However, the validity of the data-driven methods proposed in <cit.> and <cit.> relies on the consistent estimation of Clfdr_i's. In many real applications, it is extremely difficult to accurately estimate Clfdr even when the dimension of s_i is moderate <cit.>. In contrast, the mFDR guarantee of Algorithm <ref> does not rely on any of such Clfdr consistency results and our proposal is valid under much weaker conditions as demonstrated by the next theorem. Assume that {X_i, θ_i}_i=1^m are independent. Denote by Q(t) = ∑_i=1^m {1 - π(s_i)} c_i(w_i t)/𝔼{∑_i=1^m 𝕀(q_i ≤ t)} and t_L = sup{t>0: Q(t)≤α}. Based on the notations from Algorithm <ref>, suppose the following hold *q_(k)≥ν and ∑_i=1^m ℙ(q_i ≤ν) →∞ as m→∞, for some ν > 0; *lim sup_t→ 0^+Q(t)<α; lim inf_t→∞Q(t)>α; *inf_t ≥ t_L + ϵ_tQ(t) ≥α + ϵ_α, Q(t) is strictly increasing in t∈(t_L - ϵ_t, t_L + ϵ_t), for some constants ϵ_α, ϵ_t > 0. Then we have lim_m→∞mFDR_Algorithm <ref>≤α. Note that, the validity of the above theorem allows flexible choices of functions g(·|s_i) and the weights w_i. Hence, similarly as the comparison between ρ-value and Lfdr, the ρ-value framework with side information is again much more flexible than the Clfdr framework that requires the consistent estimation of the Clfdr statistics. We also remark that, <cit.> recommends using π(s_i)/1-π(s_i) to weigh p-values arising from the two sample t-statistics, and the authors only provide a heuristic explanation on the superiority of using π(s_i)/1-π(s_i) over 1/1-π(s_i) as weights, while we have the following optimality result which provides a more rigorous justification forπ(s_i)/1-π(s_i). Assume that {X_i, θ_i}_i=1^m are independent. Denote by δ_ρ the rule described in Algorithm <ref> with η(·)=π(·) and g(·|s_i)=f_1(·|s_i), and let δ be any other rule that controls mFDR at level α asymptotically. Based on the notations from Algorithm <ref>, suppose the following holds *∑_i=1^m ℙ(q_i ≤α/1-α) →∞ as m→∞. Then we have (δ_ρ)/(δ)≥ 1+o(1). Assumptions <ref>-<ref> are automatically satisfied under the conditions assumed by Theorem <ref>. Therefore, in such ideal setting, Algorithm <ref> isoptimal among all testing rules that asymptotically control mFDR at level α. 
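The correspondence just described can be checked directly in a toy example. The snippet below assumes a simple conditional two-group Gaussian model with hypothetical choices of π(s) and of the non-null mean, and takes the weights as w_i = η(s_i)/{1-η(s_i)} with η=π, one concrete weighting consistent with the Clfdr ranking equivalence noted above.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
m = 2000
s = rng.normal(size=m)
pi_s = 1.0 / (1.0 + np.exp(2.0 - s))        # hypothetical pi(s)
mu_s = 2.0 + 1.0 / (1.0 + np.exp(-s))       # hypothetical non-null mean
theta = rng.binomial(1, pi_s)
x = rng.normal(loc=theta * mu_s, scale=1.0)

f0 = norm.pdf(x)                            # f0(.|s) = N(0,1)
f1 = norm.pdf(x, loc=mu_s)                  # f1(.|s) = N(mu(s),1)
rho = f0 / f1                               # oracle rho-values, g = f1
w = pi_s / (1.0 - pi_s)                     # weights with eta = pi
q = rho / w                                 # weighted rho-values
clfdr = (1.0 - pi_s) * f0 / ((1.0 - pi_s) * f0 + pi_s * f1)

# q_i is a strictly increasing transform of Clfdr_i, so both give the same
# ranking, and Clfdr_i <= alpha is equivalent to q_i <= alpha/(1-alpha).
assert np.allclose(q, clfdr / (1.0 - clfdr))
alpha = 0.05
assert np.array_equal(clfdr <= alpha, q <= alpha / (1.0 - alpha))
```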
In addition, Theorem <ref> implies that the weighted BH procedure <cit.> based on the ranking of {c_i(ρ_i)/w_i} is suboptimal. Define Clfdr_i≡ℙ(θ_i=0|X_i,s_i)=(1-π(s_i))f_0(X_i|s_i)/(1-π(s_i))f_0(X_i|s_i)+π(s_i)f_1(X_i|s_i). <cit.> has shown that the following rule {𝕀(Clfdr_i≤Clfdr_(k'))}_i=1^m,wherek'=max{j: 1/j∑_i=1^jClfdr_(i)≤α},Clfdr_1≤…≤Clfdr_(m), controls mFDR at level α and isoptimal asymptotically. Moreover, the mFDR of the rule {𝕀(Clfdr_i≤ t)}_i=1^m is increasing in t and Clfdr_(k')converges in probability to some fixed number t^*≥α. If η(·)=π(·) then Clfdr_i≤α is equivalent to q_i≤α/1-α. Hence, the rule {𝕀(q_i≤α/1-α)}_i=1^m controls mFDR at level α. Therefore, in the setting of Theorem <ref>, we can set the threshold to be for q_i to be max(α/1-α, q_(k)) where q_(k) is as defined in Algorithm <ref>. Thus, without loss of generalitywe can assume q_(k)≥α/1-α and the first part of <ref> is satisfied. The second part of <ref> follows from <ref>. <ref> is satisfied when the number of rejections by rule (<ref>) goes to ∞ as m→∞. <ref> and <ref> are both mild under the setting of Theorem <ref>. In practice, we need to choose η(·) and g(·|s_i) based on the available data {(X_i,s_i)}_i=1^m. Again, if the entire sample is used to construct η(·) and g(·|s_i), then the dependence among w_i's and ρ_i's is complicated. Similar to Algorithm <ref>, we can use sample splitting to circumvent this problem. The details of the data-driven version of Algorithm <ref> is provided in Algorithm <ref>. To ensure a valid mFDR control, we require a uniformly conservative consistent estimator of π(·), whose definition is given below. An estimator π̂(·) is a uniformly conservative consistent estimator of π(·) if it satisfies sup_i𝔼{π̂(s_i) - π̃(s_i)}^2 → 0 as m →∞, where 0 ≤π̃(s_i) ≤π(s_i) for i=1,…,m. The problem of constructing such uniformly conservative consistent estimator π̂(·) has been discussed in the literatures; see for example, <cit.> and <cit.>. The next theorem shows that Algorithm <ref> indeed controls mFDR at the target level asymptotically under conditions analogous to those assumed in Theorem <ref>. Assume that {X_i, θ_i}_i=1^m are independent. Denote by {q̂_d,i}_i=1^m_d, q̂_d,(k_d) and π̂_d the weighted ρ-values, selected thresholds and the estimated alternation proportions obtained from Algorithm <ref>,for the first and second halves of the data respectively, d=1,2. Denote by ĉ_d,i the null distribution function forρ̂_d,i. Supposesup_i𝔼{π̂_d(s_i) - π̃_d(s_i)}^2 → 0 for some 0 ≤π̃_d(·) ≤π(·) and let Q̃_d(t) = ∑_i=1^m_d{1 - π̃_d(s_d,i)}ĉ_d,i(w_d,i t)/𝔼{∑_i=1^m_d𝕀(q̂_d,i≤ t)} and t_d,L = sup{t>0: Q̃_d(t)≤α}, d=1,2. Based on the notations from Algorithm <ref>, suppose the following hold *q̂_d,(k_d)≥ν, ∑_i=1^m_dℙ(q̂_d,i≤ν) ≥ cm, for some constants ν, c>0;*lim sup_t→ 0^+Q̃_d(t)<α, lim inf_t→∞Q̃_d(t)>α;*inf_t ≥ t_d,L + ϵ_tQ̃_d(t) ≥α + ϵ_α, Q̃_d(t) is strictly increasing in t∈(t_d,L - ϵ_t, t_d,L + ϵ_t), for some constants ϵ_α, ϵ_t > 0. Then we have lim_m→∞mFDR_Algorithm <ref>≤α. § NUMERICAL EXPERIMENTSIn this section, we conduct several numerical experiments to compare our proposed procedures with some state-of-the-art methods. In all experiments, we study the general case where side information is available, and generate data according to the following hierarchical model: θ_i ∼Ber{π(s_i)}, X_i | s_i, θ_i ∼ (1 - θ_i)f_0(·|s_i) + θ_i f_1(·|s_i), where θ_i ∈ℝ, X_i ∈ℝ and s_i ∈ℝ^l for i=1,…,m. We are interested in testing H_0,i:θ_i=0 versus H_1,i:θ_i=1, i=1,…,m. 
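As one concrete illustration of such an estimator, the sketch below implements a kernel-weighted Storey-type estimate of the non-null proportion, in the same spirit as the screening construction used in our numerical implementation described in the next section; the Gaussian kernel, the fixed bandwidth, and the choice of τ are illustrative assumptions.

```python
import numpy as np

def kernel_storey_pi(s_eval, s_train, p_train, tau=0.5, bandwidth=0.5):
    """Kernel-weighted Storey-type estimator,
        pi_hat(s) = 1 - sum_j K(s, s_j) 1{p_j >= tau} / ((1 - tau) sum_j K(s, s_j)),
    with a Gaussian kernel K.  Screening on {p_j >= tau} keeps the estimate
    conservative, since that set is dominated by null hypotheses."""
    s_eval = np.asarray(s_eval, dtype=float)
    s_train = np.asarray(s_train, dtype=float)
    if s_eval.ndim == 1:
        s_eval = s_eval[:, None]
    if s_train.ndim == 1:
        s_train = s_train[:, None]
    keep = (np.asarray(p_train, dtype=float) >= tau).astype(float)
    out = np.empty(s_eval.shape[0])
    for i, s in enumerate(s_eval):
        k = np.exp(-0.5 * np.sum((s_train - s) ** 2, axis=1) / bandwidth**2)
        out[i] = 1.0 - (k @ keep) / ((1.0 - tau) * k.sum())
    return np.clip(out, 0.0, 1.0)
```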
To implement our proposed data-driven procedure with side information, i.e., Algorithm <ref>, we use the following variation of the Storey estimator to estimate π(s_i) in Step 2: π̂_2(s_2,i) = 1 - ∑_j = 1^m_1 K(s_2,i,s_1,j)𝕀(p_1,j≥τ)/(1-τ)∑_j=1^m_1 K(s_2,i,s_1,j), i=1,…,m_2, where p_1,j = 2ℙ(X_1,j≥ |x_1,j| | θ_1,j = 0) is the two-sided p-value and τ is chosen as the p-value threshold of the BH procedure at α=0.9; this ensures that the null cases are dominant in the set {j:p_1,j≥τ}.We let K(s_2,i,s_1,j) = ϕ_H(s_2,i-s_1,j), where ϕ_H(·) is the density of multivariate normal distribution with mean zero and covariance matrix H. We use the function in the R package to chose H. Similar strategies for choosingτ and H are employed in <cit.> and <cit.>. We constructf̂_1^ (2)(·|s_2,i) using a modified version of the two-step approach proposed in <cit.> as follows. * Let π̂_1'(s_1,i) = 1 - ∑_j = 1^m_1 K(s_1,i,s_1,j)𝕀(p_1,j≥τ)/(1-τ)∑_j=1^m_1 K(s_1,i,s_1,j) for i=1,…,m_1. * Calculate f̃_1,2,j(x_1,j) = ∑_l=1^m_1K(s_1,j,s_1,l)ϕ_h_x(x_1,j - x_1,l)/∑_l=1^m_1K(s_1,j,s_1,l),and the weights ŵ_1,j = 1 - min{{1-π̂_1'(s_1,j)}f_0(x_1,j)/f̃_2,j(x_1,j), 1} for j=1,…,m_1. * Obtain f̂_1^ (2)(x | s_2,i) = ∑_j=1^m_1ŵ_1,jK(s_2,i,s_1,j) ϕ_h_x(x - x_1,j)/∑_j=1^m_1ŵ_1,jK(s_2,i,s_1,j) as the non-null density estimate for i=1,…,m_2. Here, the kernel function K is the same as the one in Equation (<ref>), and the bandwidth h_x is chosen automatically using the functionin the R package . To estimate the null densities c_i(·)'s, i.e., the distribution functions of f_0(· |s_2,i)/f̂_1^ (2)(· | s_2,i) under H_0,i, i=1,…,m_2, we independently generate 1000 samples Y_j's from f_0(·|s_2,i) for each i and estimate c_i(·) through the empirical distribution of f_0(Y_j|s_2,i)/f̂_1^ (2)(Y_j | s_2,i)'s. The estimations on the first half of the data can be obtained by switching the roles of the first and the second halves. The implementation details are provided at <https://github.com/seanq31/rhoBH>. We compare the performance of the following six methods throughout the section: * ρ-BH.OR: Algorithm <ref> with ρ_i=f_0(X_i|s_i)/f_1(X_i| s_i), c_i(t)=ℙ_H_0,i(ρ_i≤ t), η(·)=π(·). * ρ-BH.DD: Algorithm <ref> with implementation details described above. * LAWS: the data-driven LAWS procedure <cit.> with p-value equals to 2{1-Φ(|X_i| )}, where Φ is the cumulative distribution function (cdf) of the standard normal variable. * CAMT: the CAMT procedure <cit.> with the same p-values used in LAWS. * BH: the Benjamini-HochbergProcedure <cit.> with the same p-values used in LAWS. * Clfdr: the Clfdr based method <cit.> with Clfdr_d,i = q̂_d,i/1+q̂_d,i, where d=1,2 and q̂_d,i's are derived in (b). Specifically, we calculate the threshold k_d = max_1 ≤ i ≤ m_d{∑_j=1^i Clfdr_d,(j) / i ≤α} and reject those with Clfdr_d,i≤Clfdr_d, (k_d). All simulation results are based on 100 independent replications with target level α=0.05. The FDR is estimated by the average of the FDP, ∑_i=1^m{(1-θ_iδ_i)/(∑_i=1^mδ_i∨1)}, and the average power is estimated by the average proportion of the true positives that are correctly identified, ∑_i=1^m(θ_iδ_i)/∑_i=1^mθ_i, both over the number of repetitions. §.§ Bivariate side information We first consider a similar setting as Setup S2 in <cit.>, where the non-null proportions and non-null distributions are closely related to a two dimensional covariate. Specifically, the parameters in Equation (<ref>) are determined by the following equations. 
s_i=(s_i^ (1),s_i^ (2))iid∼ N(0,I_2), π(s_i) = 1/1 + e^k_e,i, k_e,i = k_c + k_d s_i^ (1),f_0(· | s_i) ∼ N(0,1), f_1(· | s_i) ∼ N(e^k_t2e^k_f s_i^ (2)/1+e^k_f s_i^ (2),1), i=1,…,5000,where I_2 is the 2× 2 identity matrix, k_c, k_d, k_f and k_t are hyper-parameters that determine the impact of s_i on π and f_1. In the experiments, we fix k_c at 2 or 1 (denoted as “Medium" and “High", respectively), (k_d, f_f) at (1.5,0.4) or (2.5,0.6) (denoted as “Moderate" and “Strong", respectively), and vary k_t from 2 to 6.We first generate 5000 covariate pairs s_i = (s_i,1, s_i,2) ∼ N(0, I_2) independently and then for the fixed {s_i}_i=1^5000 we genetate hypotheses using the following model:π(s_i) = 1/1 + e^k_e,i,k_e,i = k_c + k_d s_i,1, f_0(· | s)∼ N(0,1),f_1(· | s) ∼ N(k_t k_s,i,1), wherek_s,i = 2e^k_f s_i,2/1+e^k_f s_i,2.We vary the parameters k_c ∈{2.5, 1.5} (Medium, High), (k_d, k_f) ∈{(0, 0), (1, 0.25)} (None, Moderate) and log(k_t) equally spaced in [2.2,3]. The parameter pair (k_d, k_f) determines whether s_i's affect f_1(· |s_i) and π(s_i). The parameters k_c and k_t determine how small π(s_i)'s are and how different non-null distribution is from null.Note that, <cit.> assumes it is known that π(·) and f_1(·|s_i) each depend on one coordinate of the covariate when performing their procedure. Hence, for a fair comparison, we employ the same assumption, substitute s_d,i by s_d,i^ (1) for the estimations of π̂(·) (as defined in (<ref>)) and π̂'(·) (as defined in Step (a) of constructing f̂_1^ (2)(·|s_2,i)), and substitute s_d,i by s_d,i^ (2) in the rest steps of obtaining f̂_1^ (2)(·|s_2,i), for d=1,2.It can be seen from Figure <ref> that, except the Clfdr procedure, all other methods successfully control the FDR at the target level. Figure <ref> shows that, the empirical powers of ρ-BH.OR and ρ-BH.DD are significantly higher than all other FDR controlled methods. It is not surprising that ρ-BH.OR and ρ-BH.DD outperform LAWS and BH, because the p-values only rely on the null distribution, whereas the ρ-values mimic the likelihood ratio statistics and encode the information from the alternative distribution.Both ρ-BH.OR and ρ-BH.DD outperforms CAMT as well, because CAMT uses a parametric model to estimate the likelihood ratio, while ρ-BH.DD employs a more flexible non-parametric approach that can better capture the structural information from the alternative distribution.Finally, as discussed in the previous sections, the Clfdr based approaches strongly rely on the estimation accuracy of π(·) and f_1(·|·), which can be difficult in practice. Hence as expected, we observe severe FDR distortion of Clfdr method.Such phenomenon reflects the advantage of the proposed ρ-value framework because its FDR control can still be guaranteed even if f̂_1(·|·) is far from the ground truth. §.§ Univariate side information Next, we consider the univariate covariate case and generate data according to the following model θ_iind∼Bernoulli{π(i)}, i=1,…,5000,X_i|θ_iind∼ (1-θ_i)N(0,1)+θ_iN(μ,1), i=1,…,5000. Two settings are considered. In Setting 1, the signals appear with elevated frequencies in the following blocks: π(i)=0.9 for i∈ [1001,1200]∪[2001,2200]; π(i)=0.6 for i∈ [3001,3200]∪[4001,4200]. For the rest of the locations we set π(i)=0.01. We vary μ from 2 to 4 to study the impact of signal strength. In Setting 2, we set π(i)=π_0 in the above specified blocks and π(i)=0.01 elsewhere. We fix μ = 3 and vary π_0 from 0.5 to 0.9 to study the influence of sparsity levels. 
In these two cases, the side information s_i can be interpreted as the signal location i. When implementing CAMT, we use a spline basis with six equiquantile knots for π(i) and f_1(·|i) to account for potential complex nonlinear effects as suggested in <cit.> and <cit.>. Again, we compare the six procedures as in Section <ref>, and the results of Settings 1 and 2 are summarized in the first and second rows of Figure <ref>, respectively. We can see from the first column of Figure <ref> that, in both settings all methods control FDR appropriately at the target level. From the second column, it can be seen that both ρ-BH.OR and ρ-BH.DD outperform the other four methods. This is due to the fact that, besides the ability in incorporating the sparsity information, the ρ-value statistic also adopts other structural knowledge and is henceforth more informative than the p-value based methods. In addition, the nonparametric approach employed by ρ-BH.DD is better at capturing non-linear information than the parametric model used in CAMT, and again leads to a more powerful procedure. § DATA ANALYSIS In this section, we compare the performance of ρ-BH.DD with Clfdr, CAMT, LAWS and BH on two real datasets. §.§ MWAS data We first analyze a dataset from a microbiome-wide association study of sex effect <cit.>, which is available at <https://github.com/knightlab-analyses/american-gut-analyses>. The aim of the study is to distinguish the abundant bacteria in the gut microbiome between males and females by the sequencing of a fingerprint gene in the bacteria 16S rRNA gene. This dataset is also analyzed in <cit.>. We follow their preprocessing procedure to obtain 2492 p-values from Wilcoxon rank sum test for different operational taxonomic units (OTUs), and the percentages of zeros across samples for the OTUs are considered as the univariate side information. Because a direct estimation of the non-null distributions of the original Wilcoxon rank sum test statistics is difficult, we construct pseudo z-values by z_i = Φ^-1(p_i) × (2B_i-1), where B_i's are independent Bernoulli(0.5) random variables and Φ^-1 is the inverse of standard normal cdf. Then we run ρ-BH.DD on those pseudo z-values by employing the same estimation methods of π(·) and f_1(·|·) as described in Section <ref>. When implementing CAMT, we use the spline basis with six equiquantile knots as the covariates as recommended in <cit.>. The results are summarized in Figure <ref> (a). We can see that ρ-BH.DD rejects significantly more hypotheses than LAWS and BH across all FDR levels. ρ-BH.DD also rejects slightly more tests than Clfdr under most FDR levels, and is more stable than CAMT. Because Clfdr may suffer from possible FDR inflation as shown in the simulations, we conclude that ρ-BH.DD enjoys the best performance on this dataset. §.§ ADHD data We next analyze a preprocessed magnetic resonance imaging (MRI) data for a study of attention deficit hyperactivity disorder (ADHD). The dataset is available at <http://neurobureau.projects.nitrc.org/ADHD200/Data.html>. We adopt the Gaussian filter-blurred skullstripped gray matter probability images from the Athena Pipline, which are MRI images with a resolution of 197 × 233 × 189. We pool the 776 training samples and 197 testing samples together, remove 26 samples with no ADHD index, and split the pooled data into one ADHD sample of size 585 and one normal sample of size 362. 
Then we downsize the resolution of images to 30 × 36 × 30 by taking the means of pixels within blocks, and then obtain 30 × 36 × 30 two-sample t-test statistics. A similar data preprocessing strategy is also used in <cit.>. In such a dataset, the 3-dimensional coordinate indices can be employed as the side information. The results of the five methods are summarized in Figure <ref> (b). Again, it is evident that ρ-BH.DD rejects more hypotheses than CAMT, LAWS, Clfdr and BH across all FDR levels. § CONCLUSION AND DISCUSSIONS This article introduces a novel multiple testing framework based on the newly proposed ρ-values. The strengths of this framework include its ability to unify the existing p-value and Lfdr statistics based procedures, as well as to achieve optimal power under much less stringent conditions than the Lfdr methods. Moreover, it can be extended to incorporate side information through weighting, and again the asymptotic optimality can be attained. As a final message, based on our proposal, we briefly discuss in this section that the frameworks provided by p-values and Lfdr statistics are not as different as claimed in the literature. Note that a central message of <cit.> is that reducing z-values to p-values may lead to substantial information loss. The narrative seems to view the Lfdr and the p-value as two fundamentally different statistics. However, we have shown in Section <ref> that the ρ-value is a special case of a p-value while, at the same time, the ρ-value can produce the same ranking as the Lfdr. Therefore, a more accurate paraphrasing of the message in <cit.> would be that "statistics that take into account the information from alternative distributions are superior to statistics that do not." To be more concrete, we show below that an Lfdr based procedure proposed in <cit.> is actually a special variation of Algorithm <ref> under Model (<ref>). Suppose f_1 and f_0 are known but π is not. As mentioned in Section <ref>, a natural choice of π̂ is the Storey estimator. Note that the Storey estimator requires a predetermined tuning parameter τ. By replacing π with π̂, the threshold in Step 3 of the ρ-BH procedure becomes ρ_(k) where k = max{j: (1-π̂)c(ρ_(j))≤α j/m}. In the special case where we allow τ to vary with j and set τ=c(ρ_(j)), this yields k=max{j: #{i:c(ρ_i)≥ 1-c(ρ_(j))}/j≤α}. Now if we add 1 to the numerator and let k=max{j: [1+#{i:c(ρ_i)≥ 1-c(ρ_(j))}]/j≤α}, then the decision rule δ={𝕀(ρ_i≤ρ_(k))}_i=1^m is equivalent to the rule given by the ZAP procedure <cit.> that is based on the Lfdr. Hence, the ZAP procedure can be viewed as a special case of the ρ-BH procedure under Model (<ref>), and can be unified into the proposed ρ-BH framework.
Some of the recent works <cit.> propose to use a working model to estimate Clfdr directly. Consider the setting described in Section <ref>, let the a:ℝ^1+l→ℝ be a predetermined function that is used to emulate the Clfdr. Denote c̃_i(·) the distribution function of a(X_i,s_i) under H_0,i. Even though a(X_i,s_i) is no longer guaranteed to be a ρ-value, Algorithm <ref> can still guarantee asymptotic mFDR control under similar assumptions stated in Theorem <ref>. Hence, more broadly, the Clfdr based methods can be unified into the framework proposed in this paper. § DATA-DRIVEN WEIGHTED Ρ-BH PROCEDUREThe data-driven version of the weighted ρ-BH procedure is described inAlgorithm <ref>. The next theorem provides the theoretical guarantee for the asymptotic mFDR control of Algorithm <ref>. Assume that {X_i, θ_i}_i=1^m are independent. Denote by{q̂_d,i}_i=1^m_d, q̂_d,(k_d) and π̂_d the weighted ρ-values, selected thresholds and the estimated alternative proportions obtained from Algorithm <ref>, for the first and second halves of the data respectively, d=1,2. Denote by ĉ_d the null distribution function for ρ̂_d,i. Suppose 0≤π̂_dP→π̃_d ≤π and let Q̃_d(t) = ∑_i=1^m_d (1 - π̃_d)ĉ_d(w_d,i t)/𝔼{∑_i=1^m_d𝕀(q̂_d,i≤ t)} and t_d,L = sup{t>0: Q̃_d(t)≤α}, d=1,2. Based on the notations from Algorithm <ref>, suppose the following hold *q̂_d,(k_d)≥ν, ∑_i=1^m_dℙ(q̂_d,i≤ν) ≥ cm, for some constants ν, c>0; *lim sup_t→ 0^+Q̃_d(t)<α, lim inf_t→∞Q̃_d(t)>α; *inf_t ≥ t_d,L + ϵ_tQ̃_d(t) ≥α + ϵ_α, Q̃_d(t) is strictly increasing in t∈(t_d,L - ϵ_t, t_d,L + ϵ_t), for some constants ϵ_α, ϵ_t > 0. Then we have lim_m→∞mFDR_Algorithm <ref>≤α. § PROOFS OF MAIN THEOREMS AND PROPOSITIONS Note that Theorems <ref> and <ref> follow directly from the proof of the original BH procedure as discussed in the main text. Theorem <ref> is a special case of Theorem <ref>. Theorems <ref> and <ref> are special cases of Theorem <ref>. Theorem <ref> is a special case of Theorem <ref>. Hence, we focus on the proofs of Proposition <ref>, Theorem <ref>, Theorem <ref> and Theorem <ref> in this section. To simplify notation, we define aprocedure equivalent to Algorithm <ref>. This equivalence is stated in Lemma <ref>, whose proof will be given later. Algorithm <ref> and Algorithm <ref> are equivalent in the sense that they reject the same set of hypotheses. §.§ Proof of Proposition <ref> Denote by ℙ̂{c(ρ_i)>τ}:=∑_i=1^m𝕀{c(ρ_i)>τ}m. Since ∑_i=1^m𝕀{c(ρ_i)>τ} follows Binomial(m,p) where p=ℙ{c(ρ_i)>τ}, we have that ℙ̂{c(ρ_i)>τ}P→ p. Let p_0=ℙ{c(ρ_i)>τ|H_0,i} and p_1=ℙ{c(ρ_i)>τ|H_1,i}, then p=(1-π)p_0+π p_1. Since c(ρ_i)∼Unif(0,1) under H_0,i, it follows that p=(1-π)(1-τ)+π p_1. Hence, 1-p/(1-τ)<π, and the proposition follows. §.§ Proof of Theorem <ref> By Lemma <ref>, we only need to prove the mFDR control for Algorithm <ref>. Assumption <ref> ensures that Q(t) is well defined when t ≥ν. Note that, by Assumption <ref> and standard Chernoff bound for independent Bernoulli random variables, we have uniformly for t ≥ν and any ϵ > 0 ℙ( | ∑_i=1^m 𝕀(q_i ≤ t)/𝔼{∑_i=1^m 𝕀(q_i ≤ t)} -1| ≥ϵ)≤2e^-ϵ^2∑_i=1^mℙ(q_i ≤ t)/3 ≤2e^-ϵ^2∑_i=1^mℙ(q_i ≤ν)/3→0, which implies sup_t≥ν |Q(t) - FDP(t)| P→ 0 as m→∞, where FDP(t) = ∑_i=1^m {1 - π(s_i)} c_i(w_i t)/{∑_i=1^m 𝕀(q_i ≤ t)} 1. Assumption <ref> implies t_L < ∞. Moreover, combining Equation (<ref>) with Assumption <ref>, we have FDP(t) > α for any t ≥ t_L+ϵ_t with probability going to 1. Thus, we only have to consider t < t_L+ϵ_t. Specifically, we consider t∈(t_L-ϵ_t, t_L+ϵ_t). 
As Q(t) is strictly increasing within this range by Assumption <ref>, we have t^* = Q^-1{Q(t^*)}P→ Q^-1{FDP(t^*)} = Q^-1(α) = t_L. Therefore, we have mFDR_Algorithm <ref>= ∑_i=1^m ℙ(q_i ≤ t^*, θ_i = 0)/𝔼{∑_i=1^m 𝕀(q_i ≤ t^*)}= ∑_i=1^m ℙ(q_i ≤ t_L, θ_i = 0)/𝔼{∑_i=1^m 𝕀(q_i ≤ t_L)} + o(1) = Q(t_L) + o(1) ≤α + o(1). §.§ Proof of Theorem <ref> We first state a useful lemma whose proof will be given later. Let g(·|s_i) ≡ f_1(·|s_i), i=1,…,m, η(·) ≡π(·). For any t > 0, let Q(t) = ∑_i=1^m {1 - π(s_i)} c_i(w_i t)/𝔼{∑_i=1^m 𝕀(q_i ≤ t)},t_L = sup{ t ∈ (0, ∞): Q(t) ≤α}. Suppose Assumption <ref> holds. Then we have * Q(t) < t/1+t; * Q(t) is strictly increasing; * lim_m→∞ (ETP_δ^L -ETP_δ') ≥ 0, for any testing rule δ' based on {X_i}_i=1^m and {s_i}_i=1^m such that lim_m→∞mFDR_δ'≤α, where δ^L = {𝕀(q_i ≤ t_L)}_i=1^m. Next we prove Theorem <ref>. By Lemma <ref>, Algorithm <ref> is equivalent to reject all hypotheses that satisfying q_i ≤ t^*, where t^* is the threshold defined in Algorithm <ref>. To simplify notations, let ν = α/1-α and we next show that t^* ≥ν in probability. By the standard Chernoff bound for independent Bernoulli random variables, we have ℙ( | ∑_i=1^m𝕀(q_i ≤ν)/∑_i=1^mℙ(q_i ≤ν) - 1| ≥ϵ) ≤ 2e^-ϵ^2∑_i=1^mℙ(q_i ≤ν)/3 for all 0 < ϵ < 1. By Assumption <ref>, the above implies | ∑_i=1^m𝕀(q_i ≤ν)/∑_i=1^mℙ(q_i ≤ν) - 1| = o_P(1). Combining Equation (<ref>) and the first part of Lemma <ref>, we have ∑_i=1^m{1-π(s_i)}c_i(w_i ν)/{∑_i=1^m𝕀(q_i ≤ν)} 1 = ∑_i=1^m{1-π(s_i)}c_i(w_i ν)/∑_i=1^mℙ(q_i ≤ν) + o_P(1) =Q(ν) + o_P(1) <ν/1+ν + o_P(1) = α + o_P(1), which implies ℙ(t^* ≥ν) → 1. Therefore, we will only focus the event {t^* ≥ν} in the following proof. For any t > 0, we let Q(t) = ∑_i=1^m {1 - π(s_i)} c_i(w_i t)/𝔼{∑_i=1^m 𝕀(q_i ≤ t)},FDP(t) = ∑_i=1^m {1 - π(s_i)} c_i(w_i t)/{∑_i=1^m 𝕀(q_i ≤ t)} 1, and t_L = sup{ t ∈ (0, ∞): Q(t) ≤α}, t^* = sup{ t ∈ (0, ∞): FDP(t) ≤α}. Following the proof of the third part of Lemma <ref>, we consider two cases: lim_m→∞π(s_i)/m≤ 1-α and lim_m→∞π(s_i)/m > 1-α. The first case is trivial by noting that mFDR can be controlled even if we reject all null hypotheses. For the second case, we need to show that t^* P→ t_L. Similar to the proof of Equation (<ref>), we have uniformly for t≥ν and any ϵ > 0, ℙ(| ∑_i=1^m 𝕀(q_i ≤ t)}/𝔼{∑_i=1^m 𝕀(q_i ≤ t)} - 1 | ≥ϵ)≤2e^-ϵ^2∑_i=1^mℙ(q_i ≤ t)/3 ≤2e^-ϵ^2∑_i=1^mℙ(q_i ≤ν)/3→0, which implies |FDP(t) - Q(t)| P→ 0 uniformly in t ≥ν. Thus, FDP(t^*) P→ Q(t^*). Moreover, by Lemma <ref>, we know that Q(t) is continuous and strictly increasing. Therefore, we can define the inverse function Q^-1(·) of Q(·). Thus, by the continuous mapping theorem, we have t^* = Q^-1{Q(t^*)}P→ Q^-1{FDP(t^*)} = Q^-1(α) = t_L. By the third part of Lemma <ref>, we have lim_m→∞(ETP_δ^L - ETP_δ) ≥ 0 and therefore, lim_m→∞ETP_δ_ρ/ETP_δ =lim_m→∞ETP_δ_ρ/ETP_δ_LETP_δ_L/ETP_δ≥lim_m→∞ETP_δ_ρ/ETP_δ_L=lim_m→∞𝔼{∑_i=1^m θ_i𝕀(ρ_i ≤ t^*)}/𝔼{∑_i=1^m θ_i𝕀(ρ_i ≤ t_L)}≥1 + lim_m→∞∑_i=1^m {1-π(s_i)} o(1)/∑_i=1^m {1-π(s_i)}c_i(w_i t_L)≥ 1. §.§ Proof of Theorem <ref> We first introduce Lemma <ref>, whose proof will be given later. Denote Steps 2 to 3 of Algorithm <ref> as `Half-procedure' and we inherit all other notations from Theorem <ref>. Suppose Assumptions <ref>-<ref> hold for d=2. Then we have lim_m→∞mFDR_Half-procedure≤α. Next we prove Theorem <ref>. Without loss of generality, we assume {X_1,i}^m_1_i=1 = {X_i}^m_1_i=1 and {X_2,i}^m_2_i=1 = {X_i}^m_i=m_1+1. 
By Lemma <ref> and Lemma <ref>, we have that 𝔼{∑_i=1^m_1 (1-θ_i)δ_i}/𝔼{∑_i=1^m_1δ_i}≤α + o(1),𝔼{∑_i=m_1+1^m (1-θ_i)δ_i}/𝔼{∑_i=m_1+1^mδ_i}≤α + o(1). On the other hand, we can decompose mFDR_δ as mFDR_δ = 𝔼{∑_i=1^m (1-θ_i)δ_i}/𝔼{∑_i=1^mδ_i} = 𝔼{∑_i=1^m_1 (1-θ_i)δ_i}/𝔼{∑_i=1^mδ_i} + 𝔼{∑_i=m_1+1^m (1-θ_i)δ_i}/𝔼{∑_i=1^mδ_i} = 𝔼{∑_i=1^m_1 (1-θ_i)δ_i}/𝔼{∑_i=1^m_1δ_i}𝔼{∑_i=1^m_1δ_i}/𝔼{∑_i=1^mδ_i} +𝔼{∑_i=m_1+1^m (1-θ_i)δ_i}/𝔼{∑_i=m_1+1^mδ_i}𝔼{∑_i=m_1+1^mδ_i}/𝔼{∑_i=1^mδ_i}. Therefore, by Equations (<ref>) and (<ref>), we conclude that lim_m→∞mFDR_δ ≤α{𝔼 (∑_i=1^m_1δ_i)/𝔼 (∑_i=1^mδ_i) + 𝔼 (∑_i=m_1+1^mδ_i)/𝔼 (∑_i=1^mδ_i)} = α. § PROOFS OF LEMMAS §.§ Proof of Lemma <ref> It is easy to see that t^* ≥ q_(k) as ∑_i=1^m{1-π(s_i)}c_i(w_i q_(k))/∑_i=1^m𝕀(q_i ≤ q_(k))≤α. Now it suffices to show that, for any t ≥ q_(k+1), we have ∑_i=1^m{1-π(s_i)}c_i(w_i t)/∑_i=1^m𝕀(q_i ≤ t)> α. By the definition of k, for any l ≥ k+1, we have ∑_i=1^m{1-π(s_i)}c_i(w_i q_(l))/∑_i=1^m𝕀(q_i ≤ q_(l)) > α. Then for any l ≥ k+1, for any t ∈ [q_(l),q_(l+1)) where q_(m+1) = ∞, we have ∑_i=1^m{1-π(s_i)}c_i(w_i t)/∑_i=1^m𝕀(q_i ≤ t) =∑_i=1^m{1-π(s_i)}c_i(w_i t)/l ≥ ∑_i=1^m{1-π(s_i)}c_i(w_i q_(l))/l =∑_i=1^m{1-π(s_i)}c_i(w_i q_(l))/∑_i=1^m𝕀(q_i ≤ q_(l)) >α. This proves Equation (<ref>) and concludes the proof. §.§ Proof of Lemma <ref> First of all, by Assumption <ref>, we have that Q(t) is well defined for t ≥ν. For any t such that 𝔼{∑_i=1^m𝕀(q_i≤ t)}=0, we set Q(t)=0 for simplicity and it will not affect the results. We can rewrite Q(t) as Q(t)= 𝔼{∑_i=1^m(1-θ_i)δ_i}/𝔼(∑_i=1^mδ_i) = 𝔼[∑_i=1^m 𝔼{ (1-θ_i)δ_i | X_i}]/𝔼(∑_i=1^mδ_i)= 𝔼[∑_i=1^m δ_i 𝔼{ (1-θ_i) | X_i}]/𝔼(∑_i=1^mδ_i) = 𝔼{∑_i=1^m 𝕀( q_i ≤ t) q_i/1+q_i}/𝔼{∑_i=1^m𝕀( q_i ≤ t)}. For the first part of this lemma, note that 𝔼{∑_i=1^m 𝕀(q_i ≤ t) q_i/1+q_i}-t/1+t𝔼{∑_i=1^m𝕀(q_i ≤ t)}=𝔼{∑_i=1^m 𝕀(q_i ≤ t) (q_i/1+q_i - t/1+t)}= 𝔼{∑_i=1^m 𝕀(q_i ≤ t) q_i-t/(1+q_i)(1+t)}≤ 0. The equality holds if and only if ℙ(q_i < t | q_i ≤ t) = 0. Therefore, by Equation (<ref>), we have Q(t) = 𝔼{∑_i=1^m 𝕀(q_i ≤ t) q_i/1+q_i}/𝔼{∑_i=1^m𝕀(q_i ≤ t)} < t/1+t. Denote by ν = α/1-α. By Equation (<ref>), we immediately have t_L ≥ν. Therefore, we only consider t ≥ν in the following proof. For the second part, let ν≤ t_1 < t_2 < ∞, Q(t_1) = α_1 and Q(t_2) = α_2. From the first part, we learn that α_1 < t_1/1+t_1. Therefore, Q(t_2) = 𝔼{∑_i=1^m 𝕀(q_i ≤ t_2) q_i/1+q_i}/𝔼{∑_i=1^m𝕀(q_i ≤ t_2)} = 𝔼{∑_i=1^m 𝕀(q_i ≤ t_1) q_i/1+q_i}/𝔼{∑_i=1^m𝕀(q_i ≤ t_2)} + 𝔼{∑_i=1^m 𝕀(t_1 < q_i ≤ t_2) q_i/1+q_i}/𝔼{∑_i=1^m𝕀(q_i ≤ t_2)} = 𝔼{∑_i=1^m 𝕀(q_i ≤ t_1) q_i/1+q_i}/𝔼{∑_i=1^m𝕀(q_i ≤ t_1)}𝔼{∑_i=1^m𝕀(q_i ≤ t_1)}/𝔼{∑_i=1^m𝕀(q_i ≤ t_2)}+ 𝔼{∑_i=1^m 𝕀(t_1 < q_i ≤ t_2) q_i/1+q_i}/𝔼{∑_i=1^m𝕀(q_i ≤ t_2)} = α_1 𝔼{∑_i=1^m𝕀(q_i ≤ t_1)}/𝔼{∑_i=1^m𝕀(q_i ≤ t_2)} + 𝔼{∑_i=1^m 𝕀(t_1 < q_i ≤ t_2) q_i/1+q_i}/𝔼{∑_i=1^m𝕀(q_i ≤ t_2)} ≥ α_1 𝔼{∑_i=1^m𝕀(q_i ≤ t_1)}/𝔼{∑_i=1^m𝕀(q_i ≤ t_2)} + t_1/1+t_1𝔼{∑_i=1^m 𝕀(t_1 < q_i ≤ t_2)}/𝔼{∑_i=1^m𝕀(q_i ≤ t_2)} > α_1 𝔼{∑_i=1^m𝕀(q_i ≤ t_1)}/𝔼{∑_i=1^m𝕀(q_i ≤ t_2)} + α_1 𝔼{∑_i=1^m 𝕀(t_1 < q_i ≤ t_2)}/𝔼{∑_i=1^m𝕀(q_i ≤ t_2)} = α_1 = Q(t_1). For the third part, note that Q(t) here is continuous and increasing when m →∞. We consider two cases: lim_m→∞π(s_i)/m≤ 1-α and lim_m→∞π(s_i)/m > 1-α. The first case is trivial since it implies lim_t→∞ Q(t) ≤α and t_L = ∞. The procedure rejects all hypotheses and is obviously most powerful. For the second case, we have lim_t→∞ Q(t) = ∑_i=1^m {1-π(s_i)}/m > α. Combining this with the fact that Q(ν) < α, we can always find a unique t_L such that Q(t_L) = α. 
Note that, by lim_m→∞mFDR_δ^L = lim_m→∞𝔼{∑_i=1^m 𝕀( q_i ≤ t_L) q_i/1+q_i}/𝔼{∑_i=1^m𝕀( q_i ≤ t_L)} = α, andlim_m→∞mFDR_δ' = lim_m→∞𝔼{∑_i=1^m δ_i'q_i/1+q_i}/{𝔼∑_i=1^m δ_i'}≤α, we have lim_m→∞𝔼{∑_i=1^m δ_i^L(q_i/1+q_i - α)}=0andlim_m→∞𝔼{∑_i=1^m δ_i'(q_i/1+q_i - α)}≤ 0, which implies lim_m→∞𝔼{∑_i=1^m (δ_i^L - δ_i')(q_i/1+q_i - α)}≥ 0. Note that, by the law of total expectation as in Equation (<ref>), we have lim_m→∞{𝔼(∑_i=1^m δ_i^L θ_i) - 𝔼(∑_i=1^m δ_i' θ_i)}≥ 0⇔lim_m→∞𝔼{∑_i=1^m (δ_i^L - δ_i')1/1+q_i}≥ 0. Hence, it suffices to show lim_m→∞𝔼{∑_i=1^m (δ_i^L - δ_i')1/1+q_i}≥ 0. By Equation (<ref>), it suffices to show that there existssome λ≥ 0 such that (δ_i^L - δ_i')1/1+q_i≥λ (δ_i^L - δ_i')(q_i/1+q_i - α) for every i, i.e., (δ_i^L - δ_i'){1/1+q_i - λ(q_i/1+q_i - α)}≥ 0. By the first part of this lemma, we have α = Q(t_L) < t_L/1+t_L and thus 1/t_L - α(1+t_L) > 0. Let λ = 1/t_L - α(1+t_L), then for each i: * If δ_i^L = 0, we have δ_i^L - δ_i' ≤ 0 and q_i > t_L. Therefore, {1/1+q_i - λ(q_i/1+q_i - α)} < {1/1+t_L - λ(t_L/1+t_L - α)} = 0. * If δ_i^L = 1, we have δ_i^L - δ_i' ≥ 0 and q_i ≤ t_L. Therefore, {1/1+q_i - λ(q_i/1+q_i - α)}≥{1/1+t_L - λ(t_L/1+t_L - α)} = 0. This proves Equation (<ref>) and concludes the proof. §.§ Proof of Lemma <ref> For t ≥ 0, we let Q_2(t) = ∑_i=1^m_2{1 - π(s_2,i)}ĉ_2,i(w_2,i t)/𝔼{∑_i=1^m_2𝕀(q̂_2,i≤ t)}, Q̂_2(t) = ∑_i=1^m_2{1 - π̂_2(s_2,i)}ĉ_2,i(w_2,i t)/𝔼{∑_i=1^m_2𝕀(q̂_2,i≤ t)}, FDP_2(t) = ∑_i=1^m_2{1 - π̂_2(s_2,i)}ĉ_2,i(w_2,i t)/{∑_i=1^m_2𝕀(q̂_2,i≤ t)} 1, t^*_2 = sup{ t ∈ [0, ∞): FDP(t) ≤α}. As t^*_2 ≥q̂_2,(k)≥ν by the first part of Assumption <ref> and Lemma <ref>, we only consider t ≥ν in the following proof. The second part of Assumptiont <ref> implies 𝔼{∑_i=1^m_2𝕀(q̂_2,i≤ t)}→∞ when m→∞ for t ≥ν, which makes Q_2(t), Q̂_2(t), Q̃_2(t) well defined when t ≥ν. Note that, by Assumption <ref> and the standard Chernoff bound for independent Bernoulli random variables, we have uniformly for t ≥ν and any ϵ > 0 ℙ( | ∑_i=1^m_2𝕀(q̂_2,i≤ t)/𝔼{∑_i=1^m_2𝕀(q̂_2,i≤ t)} -1| ≥ϵ)≤2e^-ϵ^2∑_i=1^mℙ(q̂_2,i≤ t)/3 ≤ 2e^-ϵ^2∑_i=1^mℙ(q̂_2,i≤ν)/3→0, which implies sup_t≥ν|Q̂_2(t) - FDP_2(t)| P→ 0as m_2 →∞. On the other hand, we have uniformly for t ≥ν, |Q̂_2(t) - Q̃_2(t)| = | ∑_i=1^m_2{π̃_2(s_2,i) - π̂_2(s_2,i)}ĉ_2,i(w_2,i t)/𝔼{∑_i=1^m_2𝕀(q̂_2,i≤ t)}|≤ |∑_i=1^m_2{π̃_2(s_2,i) - π̂_2(s_2,i)}|/𝔼{∑_i=1^m_2𝕀(q̂_2,i≤ν)} = m_2 × o_P(1)/m_2 = o_P(1), where the first o_P(1)is with regard to m_1 →∞ by the uniformly conservative consistency of π̂_2(·), and the term m_2 in the denominator comes from the first part of Assumption <ref>. As the data splitting strategy ensures m_1≈ m_2, we obtain the second o_P(1) with regard to m →∞. Thus, we have sup_t≥ν|Q̂_2(t) - Q̃_2(t)| → 0 in probability as m →∞. Combining Equations (<ref>) and (<ref>), we have sup_t≥ν|FDP_2(t) - Q̃_2(t)| → 0 in probability as m →∞. Then, following the proof of Theorem <ref>, we can similarly obtain t^*_2→ t_2,L in probability by Assumptions <ref> and <ref>. Finally, we have mFDR_Half-procedure= ∑_i=1^m_2ℙ(q̂_2,i≤ t^*_2, θ_i = 0)/𝔼{∑_i=1^m_2𝕀(q̂_2,i≤ t^*_2)} = ∑_i=1^m_2ℙ(q̂_2,i≤ t_2,L, θ_i = 0)/𝔼{∑_i=1^m_2𝕀(q̂_2,i≤ t_2,L)} + o(1) = Q_2(t_2,L) + o(1) ≤Q̃_2(t_2,L) + o(1)≤α + o(1). apalike
http://arxiv.org/abs/2310.17845v1
{ "authors": [ "Bowen Gang", "Shenghao Qin", "Yin Xia" ], "categories": [ "stat.ME" ], "primary_category": "stat.ME", "published": "20231027015321", "title": "A Unified and Optimal Multiple Testing Framework based on rho-values" }
JLAB-THY-23-3950 Jefferson Lab, Newport News, Virginia 23606, USA Department of Physics, Old Dominion University, Norfolk, Virginia 23529, USA Jefferson Lab, Newport News, Virginia 23606, USA Jefferson Lab, Newport News, Virginia 23606, USAJefferson Lab, Newport News, Virginia 23606, USAJefferson Lab, Newport News, Virginia 23606, USAJefferson Lab, Newport News, Virginia 23606, USA Jefferson Lab, Newport News, Virginia 23606, USA Aix Marseille Univ, Université de Toulon, CNRS, CPT, Marseille, FranceJefferson Lab Angular Momentum (JAM) and HadStruc Collaborations We perform a new global analysis of spin-dependent parton distribution functions with the inclusion of Ioffe time pseudo-distributions computed in lattice QCD (LQCD), which are directly sensitive to the gluon helicity distribution, Δ g. These lattice data have an analogous relationship to parton distributions as do experimental cross sections, and can be readily included in global analyses. We focus in particular on the constraining capability of current LQCD data on the sign of Δ g at intermediate parton momentum fractions x, which was recently brought into question by analysis of data in the absence of parton positivity constraints. We find that present LQCD data cannot discriminate between positive and negative Δ g solutions, although significant changes in the solutions for both the gluon and quark sectors are observed. Gluon helicity from global analysis of experimental data and lattice QCD Ioffe time distributions S. Zafeiropoulos January 14, 2024 =================================================================================================== § INTRODUCTION The decomposition of the spin of the proton in terms of its constituent quark and gluon (or parton) degrees of freedom has been the subject of tremendous interest over the last three decades, ever since the discovery by the European Muon Collaboration (EMC) <cit.> that the intrinsic spin carried by quarks was only about ≲ 10%-30% of the proton's spin.These findings were confirmed by subsequent measurement at CERN, SLAC, DESY, and more recently at Jefferson Lab and RHIC (for reviews, see e.g., Refs. <cit.>). Specifically, in the Jaffe-Manohar <cit.> decomposition, the proton's spin contributions can be described in terms of the helicity of individual partons and the collective orbital angular momentum originating from quarks and gluons, 1/2 = 1/2ΔΣ(μ) + Δ𝐆(μ) + 𝐋_q+g(μ). Here, 1/2ΔΣ(μ) and Δ G(μ) denote the net spin contributions from quarks and gluons, respectively, while L_q+g represents the corresponding net orbital angular momentum from quarks and gluons. While component in the sum depends on the scale μ due to renormalization, the sum is a scale-invariant quantity. Utilizing the helicity basis, one can compute the net spin contribution of partons through moments of the helicity-dependent parton distribution functions (hPDFs) as Δ𝐆(μ) = ∫_0^1x Δ g(x,μ), ΔΣ(μ) = ∫_0^1x ΔΣ (x,μ) = ∑_q ∫_0^1x ( Δ q(x,μ) + Δq̅(x,μ) ). where x is the longitudinal light-cone momentum fraction carried by partons relative to their parent proton, and the sum runs over all quark flavors q=u,d,s,c,b. There are several basic considerations that are relevant to point out. First, hPDFs are of course not directly measurable quantities. 
Instead, observables such as double spin asymmetries (DSAs) measured in polarized deep-inelastic scattering (DIS) provide constraints on hPDFs via QCD factorization, which allows for the approximate expression of the measured asymmetries as convolutions of parton-level coefficient functions and hPDFs.Second, spin asymmetries are unable to impose constraints on hPDFs down to x=0, as this would require prohibitively high energies in particle collisions. Additionally, standard QCD factorization theorems are only valid provided there is a measurable hard scale Q in the reaction that is large enough for the applicability of perturbative calculations. This typically limits the lower bounds that experimental data can impose on hPDFs. The EMC provided constraints on hPDFs down to x≈ 0.01 and found that the reconstructed total quark spin ΔΣ was positive but far too small to account for the proton spin, although constraints were only in the region of 0.01<x<0.5 with large extrapolation uncertainties.At that time, constraints on Δ𝐆 were also rather nonexistent because the gluon hPDF only enters the DSA at next-to-leading order in perturbative QCD. Furthermore, constraints on Δ g via evolution were limited due to the kinematic coverage of the experiments.With the advent of the RHIC spin experimental program, knowledge about Δ g began to emerge thanks to measurement of DSAs in inclusive hadron and jet production in polarized proton-proton collisions.Using RHIC data <cit.> within a global analysis framework, the DSSV group found the first clearly nonzero signal and a positive gluon hPDF in the region above x ≈ 0.1 <cit.>.These observations were confirmed in subsequent inclusive jet production data from the STAR <cit.> and PHENIX <cit.> collaborations, leading to greater confidence that both the quark and gluon helicity content of the proton were relatively well understood.Complementary efforts were also made by the PHENIX collaboration to empirically determine the sign of gluon polarization without relying on global QCD analysis.Specifically, in Refs. <cit.> PHENIX observed an hierarchy of DSAs in hadron production, with π^+ > π^0 > π^-, indicating a positive sign for Δ g based on perturbative QCD arguments. Recently, the JAM collaboration <cit.> revisited the impact of RHIC spin data within a global analysis, with particular focus on the theoretical assumptions that are commonly made in such studies. Specifically, it was found that parton-level positivity constraints play an important role in determining the sign of Δ g.These constraints amount to demanding positivity on the individual helicity components (hPDF^±), such that g_↑/↓(x) > 0 [We use “hPDF” to denote Δ q=q_↑-q_↓ and “hPDF^±” for q_↑/↓, with q labeling a generic parton flavor.], where g_↑/↓ = 1/2( g±Δ g ), and g is the unpolarized gluon PDF. Relaxing these constraints in a global analysis reveals a possible second set of solutions in which Δ g is negative. Furthermore, the vast majority of the positive solutions also violate the naive positivity bounds in the very large-x region. Zhou et al. <cit.> showed that all the jet DSA data can be equally well described by the negative Δ g solutions and by the positive solutions.This emphasizes the lack of constraints on hPDFs at large values of x from experimental data, mostly due to the growing statistical uncertainties in DSA measurements at large x.In addition, Whitehill et al. 
<cit.> demonstrated that the negative Δ g solutions can equally well describe the pion DSA data measured by the PHENIX collaboration <cit.>. In view of these observations, the PHENIX collaboration recently presented a new analysis of DSAs in isolated prompt-photon data, from which they concluded that the negative Δ g solutions can be ruled out with a more than 2.8σ confidence level.However, in the PHENIX analysis the unpolarized cross sections that are part of the denominator of the DSA are only describable for photon transverse momentum p_ T≳ 10 GeV (see Fig. 1 of Ref. <cit.>). This leaves only three out of seven DSA data points above p_ T=10 GeV that are describable within a perturbative QCD framework. These remaining data points have sufficiently large uncertainties that the disagreement with negative Δ g solutions would very likely be significantly below the 2.8σ confidence level, so the question remains inresolved. Given the lack of clarity about the sign of the gluon hPDF in the absence of parton positivity constraints, one may be tempted to ask whether it would be prudent to impose such constraints at present until future data can make them redundant.Recently Collins et al. <cit.> pointed out that PDFs in general do not need to be positive definite, even though physical cross sections, as well as individual cross section components in spin asymmetries, must always be positive. In the DSA A_LL = (σ_+-σ_-)/(σ_++σ_-), where σ_± represents the two longitudinal spin configurations of the interacting beams, QCD factorization requires both σ_+ and σ_- to be positive.Negative components in PDFs can, in principle, induce negative σ_± contributions, which could be eliminated by imposing the positivity constraints.However, other sources, such as large logarithms in fixed-order perturbative calculations or significant power corrections that go beyond standard leading-power treatments, could also bring about such scenarios. Furthermore, the negative Δ g found in Ref. <cit.> obviously does not violate the positivity of σ_±, since all the DSAs are well described and fall within the physical bounds, |A_LL| < 1.Therefore, at present there is no clear data-driven evidence that rules out the negative solutions for Δ g.One could argue that the phase space coverage of the existing data is not a sufficient condition to accept the negative Δ g as a physical solution.It is of course possible to compute hypothetical observables outside the current experimental reach and find violations of positive cross sections.The challenge with this strategy, however, is that it assumes strict validity of factorization and perturbative stability across the entire physical phase space.Even if only a conservative region of phase space, where the theoretical framework is expected to operate relatively well, is considered, the lack of empirical evidence that demonstrates that theory can describe a given hypothetical data with the same universal sets of hPDFs describing existing data prevents us from testing universality and the predictive power of the reconstructed hPDFs.While determining the sign of the gluon polarization will require new experiments at planned facilities, such as those at Jefferson Lab and the future Electron-Ion Collider, an alternative strategy for the present time is to explore off-the-light-cone matrix elements calculable in lattice QCD (LQCD). A pioneering approach was introduced by Ji <cit.> (for recent reviews see, e.g., Refs. 
<cit.>) within the framework of large momentum effective theory (LaMET), which allows matrix elements of operators with space-like separation to be related to PDFs. A complementary approach introduced by Radyushkin <cit.> allows for this relationship even when the space-like separation is small, removing the formal requirement of large momentum.Practically, however, in both approaches a high precision, purely LQCD reconstruction of PDFs is limited by current computational resources, since access to larger momenta and smaller separations incurs greater costs.Synergistic activities are currently underway to make use of LQCD data as potential sources of information complementing hadron structure studies where the reach of experiments is limited. For instance, growing efforts to combine LQCD and experimental data within a global analysis framework have taken place <cit.>, which have illustrated that combining information from LQCD with experimental data can lead to stronger constraints on PDFs than those obtained from either LQCD or experimental data alone. In the context of hPDFs, the quark helicity contribution can be approximately reconstructed from proton matrix elements of the axial current <cit.>, although determining the gluon helicity and orbital angular momentum contributions is more challenging.One approach to extracting these quantities requires the computation of matrix elements of local operators which while approximating in the infinite momentum limit are related to Δ G within the LaMET formalism <cit.>. All of these approaches pose significant difficulties, and currently lattice data only provide weak constraints on the gluon helicity contributions to the proton'sspin <cit.>. Recently, the HadStruc collaboration has provided new LQCD calculations of matrix elements that have direct sensitivity to Δ g <cit.>.In their analysis, it was argued that the negative Δ g solutions were significantly disfavored by LQCD data. Motivated by these findings, in this paper we explore the full extent to which LQCD data can impose constraints on gluon polarization in the proton in terms of QCD factorization approach, and seek a potential resolution regarding its sign. In Sec. <ref> we review the LQCD calculations of the Ioffe time pseudo-distributions, and summarize the experimental data used in our analysis in Sec. <ref>.In Sec. <ref> we present the results of the combined analysis of the LQCD and experimental data, offering detailed comparisons of the results before and after the inclusion of the LQCD data.Our concluding remarks are found in Sec. <ref>.§ LATTICE QCD DATA In this section we review the LQCD calculations of pseudo-PDFs, as introduced by Radyushkin <cit.>. This method involves the computation of Lorentz invariant amplitudes (or linear combinations of them) called Ioffe time pseudo-distributions (pseudo-ITDs). The pseudo-ITDs can be matched to the PDFs in the MS scheme when the invariant separation between the field operators z^2 is sufficiently small. We consider matrix elements of the form <cit.> M^μν ; αβ(p,z)= ⟨ p | F^μν(0) W(0;z) F^αβ(z) | p ⟩, where F^μν and F^αβ represent the gluon field strength tensor and its dual, with color indices implicitly contracted, and W is a straight Wilson line in the adjoint representation. In the limit where z is a light-like separation, this matrix element can be used to provide the operator definition for Δ g that is accessible experimentally. The Lorentz decomposition for the generic matrix element in Ref. 
<cit.> is rather involved, with fourteen terms that remain after considering the antisymmetry in indices μ↔ν and α↔β, though two constraints exist between multiple terms. In the operator definition of Δ g, only three of the terms contribute. With space-like separations, it is useful to consider the combinationM_00(p,z)= p_0p_3 [ M^ti;it(p,z) + M^ij;ji(p,z) ] = ℳ(ν,z^2) + m^2z^2/νℳ_pp(ν,z^2),where ν=p· z is the Ioffe time <cit.>, and i,j are spatial directions transverse to z. The primary reason to consider such a combination in the space-like separations is that it contains the very same linear combination of the Lorentz invariants that appear in the light-cone case, represented by ℳ, alongside a power correction term, ℳ_pp, proportional to m^2 z^2/ν, where m is the proton mass. It is the ℳ term which survives the small-z^2 limit and will be related to the parton distributions. The particular combination defining M_00 also happens to be multiplicatively renormalizable <cit.>, where the renormalization constant contains an exponential dependence from the Wilson line and a logarithmic dependence determined by the specific choices of indices.Following the proposal in Ref. <cit.>, we construct the reduced pseudo-ITD as 𝔐(ν,z^2)= M_00(p,z) / [ p_0 p_3 Z_L(z_3/a) ]/M_00(p=0,z)/m^2. Note that this quantity is finite in the continuum limit. The combination M_00 = M_ti;it+M_ij;ji represents the matrix element for the unpolarized gluon PDF defined in Ref. <cit.> which contains the same Wilson line renormalization constant. The factor Z_L cancels the remaining logarithmic ultraviolet divergences. After cancellation of the renormalization constants, the denominator is given by the average gluon momentum fraction ⟨ x⟩_g. The purpose of this ratio is to construct a calculable observable, finite in the continuum limit, which reduces to the renormalized ℳ amplitude in the small-z^2 limit where it can be related to the PDFs, or equivalently their Ioffe time distributions.The gluon and quark-singlet Ioffe time helicity distributions, I_Δ g and ℐ_ΔΣ, respectively I_Δ g (ν,μ^2) = ∫_0^1x x sin(xν) Δ g(x,μ^2), ℐ_ΔΣ(ν,μ^2)= ∫_0^1x xsin(xν) ΔΣ(x,μ^2). The matching between the reduced pseudo-ITD and the Ioffe time helicity distributions, is given by <cit.>𝔐( ν, z_3^2 ) ⟨ x ⟩_g(μ^2) =I_Δ g(ν, μ^2 )- α_s N_c /2π ∫_0^1uI_Δ g (uν, μ^2 ) ×{log( z_3^2 μ^2 e^2γ_E/4)([2u^2/u̅ + 4uu̅]_+ - ( 1/2+ 4/3⟨ x ⟩_Σ(μ^2)/⟨ x ⟩_g(μ^2) ) δ( u̅ ) ) + 4 [u+log (1-u)/u̅]_+- [ 1/u̅ - u̅]_+- 1/2δ(u̅) +2u̅u } - α_s C_F/2π∫_0^1uℐ_ΔΣ(u ν,μ^2) (log(z_3^2 μ^2e^2 γ_E/ 4 )B_gq (u) + 2u̅u) + O(m^2 z^2) + O(Λ_QCD^2 z^2) ,where ⟨ x ⟩_Σ(μ^2) is the average momentum fraction of the unpolarized quark singlet distribution, u̅ = 1-u, and B_gq (u)=1 - u̅^2 is the quark-gluon mixing term of the evolution kernel. Note that the factorization is only valid in the limit where ℳ_pp does not contributing to 𝔐. As will be discussed later, multiple ways were tested in Ref. <cit.> to remove its contribution. The presence of the structure-dependent momentum fractions ⟨ x ⟩_g(μ^2) and⟨ x ⟩_Σ(μ^2) in the matching relation is atypical in the analogous factorization of cross sections. It appears entirely due to the evolution of the momentum fraction on the left hand side of Eq. (<ref>), which must be included due to the normalization of 𝔐. This normalization is convenient for two reasons. 
Not only does the exponential renormalization of the Wilson line cancel, but it does so in such a way so as to cancel the statistical fluctuations of M_00 and M_00, which are highly correlated.Note that Eq. (<ref>), as all factorization relationships, is valid up to the power correction terms, which in this case are O(z^2). However, it was found <cit.> that these corrections were actually the dominant contribution to the matrix element. To address this, two approaches were used to remove such contributions: one approach involved modeling the two terms in Eq. (<ref>) with polynomials in ν, while the other involved subtracting the rest frame matrix element which is exclusively given by the contaminating power correction term. The rest frame subtracted data were found to be consistent with the model of ℳ from the first approach, giving confidence that both approaches provide consistent results. This agreement implies that the residual contamination from the power corrections has been significantly reduced relative to the overall uncertainty on the leading power contribution in Eq. (<ref>). In this study we will apply the factorization (<ref>) to relate a model PDF to the rest frame subtracted data. Furthermore, in Ref. <cit.> both terms were modeled with a neural network functional form, showing relatively good agreement with the polynomial approach (see Fig. 3 in Ref. <cit.>).Calculations in LQCD are limited in the maximum momentum that a hadron can carry. Large momentum calculations are plagued by polynomially growing lattice systematic errors and, worse, exponentially growing statistical noise. This issue limits calculations to momenta |p|≲3 GeV. With a limited range of p, or equivalently ν, the pseudo-ITD cannot constrain the full x region of the PDF. It has been shown <cit.> that increasing the range of ν allows for more accurate reproduction in the low-x region. Even with only ν<10, the hPDF can be determined accurately for x ≳ 0.25. In Ref. <cit.> this feature was exploited by combining experimental results, sensitive to low x, and lattice results, sensitive to large x, to obtain stronger constraints on the unpolarized quark PDF in the pion. It is the goal of this study to explore whether the polarized gluon pseudo-ITD has sufficient constraining power to discriminate between the sign of Δ g in the large-x region. In our study, we include LQCD data that were generated on 1901 configurations of an ensemble with (2+1)-dynamical clover Wilson fermions with stout-link smearing and tree-level tadpole-improved gauge action with a lattice volume 32^3 × 64. The lattice spacing is a=0.094(1) fm, determined using the w_0 scale <cit.>, and the pion mass is m_π=358(3) MeV, respectively. While the quarks, and thereby pions, have unphysically large masses, this is not expected to be a dominant systematic error for gluon matrix elements compared to discretization effects and other systematic uncertainties. The two-point correlation functions are constructed using the distillation approach <cit.> with sources on all possible time slices. Wilson gradient flow <cit.> was used to control statistical errors, with an extrapolation to zero flow time. The scale dependence entering in Eq. (<ref>) is set asμ^2= max[m_c^2, 4/e^2γ_E z_3^2 ], where z_3=a n is the space-like separation of the gluon fields, expressed in terms of the lattice spacing a, and n is an integer. We choose this scale to optimize the perturbative expression in Eq. (<ref>) to remove the logarithmic contributions. 
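Before turning to the experimental inputs, a short numerical sketch may help fix conventions: it evaluates the gluon Ioffe-time helicity distribution of Eq. (<ref>) for an illustrative ansatz Δ g(x) ∝ x^a(1-x)^b (placeholder parameters, not our fitted result) and implements the scale choice μ^2 = max[m_c^2, 4e^{-2γ_E}/z_3^2]. At leading order in α_s, the matching relation above reduces to 𝔐(ν,z_3^2)⟨x⟩_g ≈ I_Δ g(ν,μ^2), which is what the final line evaluates; evolution of the model between scales is not implemented here.

```python
import numpy as np
from scipy.integrate import quad

GAMMA_E = 0.5772156649015329
HBARC = 0.1973269804      # GeV fm, converts the separation z3 from fm to GeV^-1
M_CHARM = 1.27            # GeV, assumed value for the charm-mass floor of the scale

def mu2_from_z3(z3_fm):
    """Scale setting mu^2 = max[m_c^2, 4 exp(-2 gamma_E) / z3^2], in GeV^2."""
    z3_gev = z3_fm / HBARC
    return max(M_CHARM**2, 4.0 * np.exp(-2.0 * GAMMA_E) / z3_gev**2)

def delta_g_model(x, norm=0.2, a=0.5, b=5.0):
    """Illustrative ansatz Delta g(x) = norm * x^a * (1-x)^b (placeholder values)."""
    return norm * x**a * (1.0 - x) ** b

def ioffe_time_helicity(nu, pdf=delta_g_model):
    """I(nu) = int_0^1 dx  x sin(x nu) pdf(x)."""
    val, _ = quad(lambda x: x * np.sin(x * nu) * pdf(x), 0.0, 1.0)
    return val

# Leading-order expectation for the reduced pseudo-ITD at nu = 3 and z3 = 3a,
# normalized by an assumed gluon momentum fraction <x>_g.
x_g = 0.4                                   # hypothetical <x>_g
z3 = 3 * 0.094                              # fm, three lattice units
print(mu2_from_z3(z3), ioffe_time_helicity(3.0) / x_g)
```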
§ EXPERIMENTAL DATA From the experimental side, in the current analysis we restrict ourselves to using only spin observables that are directly sensitive to hPDFs, in contrast to the recent JAM analysis <cit.>, where PDFs, hPDFs and fragmentation functions were all simultaneously extracted from data. Here, we summarize all the experimental data in our analysis:* DSAs in inclusive DIS: We include all data from fixed-target experiments conducted by the EMC <cit.>, SMC <cit.>, COMPASS <cit.>, SLAC <cit.>, and HERMES <cit.> collaborations. We apply identical cuts on W^2 and Q^2 as those used for unpolarized DIS data <cit.>. Whenever available, we use DSAs rather than the reconstructed g_1 structure function to ensure consistent propagation of uncertainties include those from PDFs entering in the denominator of the asymmetries. To ensure that the asymmetries are dominated by the leading twist g_1 structure function, with negligible contributions from g_2, we impose constraints on the four-momentum transfer squared Q^2 > m_c^2, and the hadronic final state masses W^2 > 10 GeV^2. * DSAs in semi-inclusive DIS (SIDIS): With the same cuts as in the inclusive DIS case, we include pion, kaon, and unidentified hadron SIDIS measurements on polarized proton, deuteron, and ^3He targets fromHERMES <cit.>, COMPASS <cit.> and SMC <cit.>. The fragmentation variable z is restricted to the range 0.2 < z < 0.8 to ensure the applicability of the leading-power formalism and avoid hadron mass corrections and threshold effects <cit.>. * DSAs in inclusive jet production in polarized pp collisions: We include DSAs from the STAR <cit.> and PHENIX <cit.> collaborations at RHIC. The p_ T range is restricted to be the same as the minimum p_ T for which the corresponding unpolarized jet data are describable <cit.>. This ensures a faithful description of the denominator in the asymmetries. For all the observables we employ a next-to-leading order framework for the parton level cross sections and asymmetries. The scale settings for DIS and SIDIS are all set equal to the scale of the virtual photon. In the case of jet data, we use the scale settings equal to 1/2 p_ T, which generally yields the best agreement for both unpolarized and polarized data.§ GLOBAL ANALYSIS WITH LQCD DATAOur numerical approach to infer hPDFs in the combined analysis follows the same Monte Carlo strategy as in previous JAM analyses <cit.>. Specifically, we employ a data resampling technique where pseudodata are generated by sampling the original data with Gaussian distributions within the uncertainties. In the case of LQCD pseudo-ITD data, we utilize the full covariance matrix for generating pseudodata. For each set of pseudodata, we optimize the hPDF parameters while assigning prior parameters for the PDFs and fragmentation functions from an earlier JAM analysis <cit.>. The resulting ensemble of optimized hPDFs represents the posterior density of the combined LQCD+experimental global analysis. After collecting all the hPDF Monte Carlo samples, including the LQCD data, we find that the negative Δ g solutions still persist, although with significant changes in their shape. To assess the significance of the results, we first discuss the quality of the agreement between the data and theory. Figure <ref> displays the reduced χ^2 for the individual data sets, defined as χ^2_ red = χ^2/N, where N represents the number of points. We present results both before (from Ref. <cit.>) and after the inclusion of LQCD data. 
The results are separated by different types of data sets and arranged in increasing order of χ^2_ red. We tabulate the data sets and their labels in Table <ref>. In addition, we categorize the results based on the sign of Δ g to illustrate the global agreement of the negative solutions in the absence of positivity constraints. In Fig. <ref>, we provide standardized Z-scores based on the Gaussian hypothesis, computed as Z=√(2) erf^-1(1-2p), where p is the p-value estimated from a χ^2 distribution with N as the degrees of freedom. This allows us to assess the statistical significance of the reduced χ^2 values and diagnose instances where the χ^2 values deviate from the ideal value of unity. In both figures, the error bars indicate the 50% percentiles and their neighborhoods of ± 1σ percentiles.Prior to the inclusion of LQCD data, most of the experimental data sets exhibit relatively good agreement with the theory, with Z-scores confined within 1σ in most cases, regardless of the sign of Δ g. However, the LQCD data shows a significant tension for the negative Δ g solutions. After the inclusion of the LQCD data, one finds the same agreement across most of the data sets as before, with a possible exception in one of the polarized jet data sets labeled as data set “51" in Fig. <ref>. This data set corresponds to DSAs in polarized jets from the STAR collaboration. To examine this, in Fig. <ref>, we show the data and theory comparisons. The inclusion of the LQCD data forces the negative solutions to deviate further from a few A_LL data points around p_ T∼ 20 GeV at the 0<|η|<0.5 bin, causing an increase in the Z-score from < 1σ to 2σ, which is however not statistically significant. Note that in principle it is possible to obtain physical |A_LL| < 1 DSAs with σ_+ and σ_- both negative. However, this would imply that the spin-averaged cross sections, proportional to σ_+ + σ_-, would also be negative. Since we agree with the unpolarized cross section data, including at RHIC kinematics, this scenario can be ruled out in our analysis. Taking the same polarized jet data set 51, it is instructive to decompose its numerator into the three possible partonic subprocesses: qq, qg, and gg, to understand the role of the linear term with Δ g that can discriminate its sign. This is shown in Fig. <ref> for the 0<|η|<0.5 bin for the two solutions of Δ g and compares the results before and after the inclusion of LQCD data. In the case of Δ g>0, it is clear that the linear contribution qg is the leading subprocess of the DSAs at larger values of p_ T relative to the other subprocesses, and the inclusion of LQCD data does not significantly alter the relative contributions of the subprocesses. In contrast, prior to the inclusion of LQCD data, the negative Δ g solutions enhance the role of the gg channel at the expense of making the qg channel more negative in order to balance out the relative contributions to the DSAs and describe the data. This situation changes with the the inclusion of LQCD data where the qg and gg channels contribute positively at larger values of p_ T at the expense of turning the qq channel negative. This means that the quark hPDFs have undergone changes at large x, despite the fact that all the DSAs from DIS up to x ∼ 0.66 considered in this analysis are well described. 
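To make the statistical diagnostics used above concrete, the sketch below resamples Gaussian pseudodata with a full covariance matrix (as is done for the LQCD pseudo-ITD points) and evaluates the reduced χ^2 and the Z-score Z = √2 erf^-1(1-2p) quoted in the figures. The data vector, covariance matrix and theory prediction are synthetic placeholders, not the actual experimental or lattice inputs.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
Npts = 12                                                    # number of data points
theory = np.linspace(0.2, 0.8, Npts)                         # stand-in theory prediction
dist = np.abs(np.subtract.outer(np.arange(Npts), np.arange(Npts)))
cov = 0.02**2 * (np.eye(Npts) + 0.3 * np.exp(-dist / 3.0))   # synthetic covariance with correlations
data = rng.multivariate_normal(theory, cov)                  # stand-in central values

def pseudodata(data, cov, n_replicas=500):
    # One correlated Gaussian resampling of the data per Monte Carlo replica.
    return rng.multivariate_normal(data, cov, size=n_replicas)

def chi2(residual, cov):
    return residual @ np.linalg.solve(cov, residual)

def z_score(chi2_val, ndof):
    # Z = sqrt(2) erfinv(1 - 2p) with p from the chi^2 distribution;
    # numerically this equals the inverse survival function of a standard Gaussian.
    p = stats.chi2.sf(chi2_val, ndof)
    return stats.norm.isf(p)

replicas = pseudodata(data, cov)
c2 = chi2(data - theory, cov)
print(f"chi2/N = {c2 / Npts:.2f},  Z = {z_score(c2, Npts):+.2f} sigma,  replicas shape: {replicas.shape}")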
We also find that the inclusion of the LQCD data admits negative solutions for Δ g that can describe the LQCD data relatively well, with Z-scores ranging from 1-3σ, which, in turn, prevents the complete elimination of the negative solutions from the posterior distribution. To understand the situation, in Fig. <ref>, we display the lattice data as a function of the Ioffe time ν. The data points are available at different values of z_3^2 for each value of ν, which requires us to use different values for the scale settings in Eq. <ref>. The calculations of Eq. <ref> are performed at discrete values of ν and z_3^2, and we have linearly connected the points to show the trends for the positive (red) and negative (blue) Δ g solutions. Prior to the inclusion of the LQCD data, the positive solutions exhibit relatively good agreement with the data, while the negative solutions display a peculiar oscillatory behavior that is inconsistent with the data. This inconsistency is particularly noticeable in the lower ν regions, where LQCD calculations are expected to be more reliable.After the inclusion of the LQCD data, the variance of the positive solutions decreases, indicating a level of constraint on the hPDFs. However, the negative solutions persist, albeit with a shape that exhibits fewer oscillations. These two solution sets clearly have distinctive signs for 𝔐. Since the majority of the LQCD data is positive, the negative solutions are disfavored. From a global analysis perspective, these negative solutions do not disappear entirely due to the contribution of the χ^2 function from the LQCD data, which includes a covariance matrix with non-zero off-diagonal components not included in Fig. <ref>. When considering the full covariance matrix of the LQCD data, one finds that the negative solutions agree within approximately 1σ confidence level, as shown in Fig. <ref>.We now discuss the results at the hPDF level. In Fig. <ref>, we present the replicas of Δ g and ΔΣ before and after the inclusion of LQCD data, categorizing the hPDFs by the sign of Δ g. In the gluon sector, we observe significant changes for the negative solutions for x>0.3, where the behavior of the replicas tends to violate the positivity constraints less. Nevertheless, negativity in the gluon helicity is still visible in the region x<0.2, which cannot be ruled out by the positivity constraints or any of the present data from experiments or LQCD included in the present analysis. Interestingly, for the quark singlet sector, we find, in contrast to the no-LQCD case, differences in ΔΣ for x>0.3, where negative solutions appear which corresponds to negative Δ g solutions. As mentioned before, our DIS DSAs are in the region with W^2>10  GeV^2 with the highest value of x ∼0.66 hence insensitive to most of the negative ΔΣ above x>0.7 and in turn it prevents the DSAs from single jet productions to discriminate against the negative solutions of Δ g. Finally, in Fig. <ref>, we display the individual components of the gluon helicity PDF, namely g_↑ and g_↓. In the case of Δ g>0, we observe violations of positivity, mostly for the spin anti-aligned PDF g_↓, above x∼ 0.4. For the mirror version, Δ g<0, this violation occurs earlier, around x∼ 0.3, for the spin-aligned PDF g_↑. As mentioned before, positivity constraints are violated regardless of the sign of Δ g. 
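The positivity statements above can be checked with a few lines of code: writing g_↑ = (g + Δg)/2 and g_↓ = (g - Δg)/2, positivity requires both combinations to remain non-negative. The sketch below performs this check for toy parametrizations of g(x) and of positive- and negative-sign Δg(x); these toys are illustrative stand-ins, not the JAM Monte Carlo replicas.

import numpy as np

def g_unpol(x, N=2.0, a=-0.1, b=6.0):
    # Toy unpolarized gluon PDF (illustrative stand-in).
    return N * x**a * (1.0 - x)**b

def delta_g(x, sign=+1, N=0.5, a=0.5, b=4.0):
    # Toy gluon helicity PDF with a chosen overall sign (illustrative stand-in).
    return sign * N * x**a * (1.0 - x)**b

x = np.linspace(1e-3, 0.99, 500)
for sign, label in [(+1, "Delta g > 0"), (-1, "Delta g < 0")]:
    g_up = 0.5 * (g_unpol(x) + delta_g(x, sign))   # spin-aligned component
    g_dn = 0.5 * (g_unpol(x) - delta_g(x, sign))   # spin-anti-aligned component
    bad = x[(g_up < 0.0) | (g_dn < 0.0)]
    status = f"violated for x >= {bad.min():.2f}" if bad.size else "satisfied for all x"
    print(f"{label}: positivity {status}")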
§ CONCLUSIONSWe have performed a new global analysis of spin-dependent parton distribution functions, incorporating Ioffe time pseudo-distributions computed in lattice QCD, which directly probe the gluon helicity PDF. Our analysis critically examines the overall agreement between data and theory. We find that the inclusion of the LQCD data does not significantly alter the quality of the results. At present, LQCD data do not definitively rule out the negative Δ g solutions, which were recently found by the JAM collaboration at moderate values of x. Nevertheless, we observe changes in the shape and magnitude of the gluon helicity PDF and the quark sector. LQCD data reduces the magnitude of the negative Δ g solutions at high x, leading to a sign change in the corresponding quark singlet solutions at x∼ 0.4, necessary to describe the polarized jet data from RHIC. The changes induced by LQCD data do not impact the description of inclusive DIS data extending up to x≈ 0.66. Future work should include the large-x data from Jefferson Lab, which requires additional treatment of power corrections. However, these data are likely to exhibit tension with the negative Δ g and negative ΔΣ solutions at high x, providing an empirical test of the sign of Δ g. Nevertheless, we emphasize the importance of including additional large-x data that are less sensitive to power corrections in order to comprehensively assess the universality of the resulting hPDFs. For future work, we look forward to incorporating dijet data from RHIC, which may help constrain the sign of Δ g at high x. The proposed JLab 24 GeV upgrade would also give greater discriminating power at larger x values <cit.>. Furthermore, forthcoming LQCD calculations sensitive to the singlet distribution ΔΣ may provide new insights into the high-x behavior of hPDFs. We should also note that this study is limited by the data currently available, and anticipate collecting additional crucial information from the future Electron-Ion Collider <cit.>, which is expected to provide constraints on hPDFs in the previously unexplored region of small x and large Q^2, with observables that are sensitive linearly to Δ g.We would like to thank Werner Vogelsang for useful discussions.This project was supported by the U.S. Department of Energy, Office of Science, Contract No. DE-AC05-06OR23177, under which Jefferson Science Associates, LLC operates Jefferson Lab. N.S. was supported by the DOE, Office of Science, Office of Nuclear Physics in the Early Career Program. C.J.M. is supported in part by the U.S. DOE EC Award . S.Z. acknowledges support by the French Centre national de la recherche scientifique (CNRS) under an Emergence@INP 2023 project. R.M.W. was supported by N.S.'s Early Career Award. KO was supported in partby the U.S. DOE Grant .This work has benefited from the collaboration enabled by the Quark-Gluon Tomography (QGT) Topical Collaboration, U.S. DOE Award DE-SC0023646.Computations for this work were carried out in part on facilities of the USQCD Collaboration, which are funded by the Office of Science of the U.S. Department of Energy. This work was performed in part using computing facilities at William and Mary which were provided by contributions from the National Science Foundation (MRI grant PHY-1626177), and the Commonwealth of Virginia Equipment Trust Fund. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1548562. 
Specifically, it used the Bridges system, which is supported by NSF award number ACI-1445606, at the Pittsburgh Supercomputing Center (PSC) <cit.>. In addition, this work used resources at NERSC, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract #DE-AC02-05CH11231, as well as resources of the Oak Ridge Leadership Computing Facility at the Oak Ridge National Laboratory, which is supported by the Office of Science of the U.S. Department of Energy under Contract No. . The software codes Chroma <cit.>, QUDA <cit.>, QPhiX <cit.>, and Redstar <cit.> were used in our work. The authors acknowledge support from the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research and Office of Nuclear Physics, Scientific Discovery through Advanced Computing (SciDAC) program, and of the U.S. Department of Energy Exascale Computing Project. The authors also acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources, like Frontera computing system <cit.> that has contributed to the research results reported within this paper. We acknowledge PRACE (Partnership for Advanced Computing in Europe) for awarding us access to the high performance computing system Marconi100 at CINECA (Consorzio Interuniversitario per il Calcolo Automatico dell’Italia Nord-orientale) under the grants Pra21-5389 and Pra23-0076. This work also benefited from access to the Jean Zay supercomputer at the Institute for Development and Resources in Intensive Scientific Computing (IDRIS) in Orsay, France under project A0080511504.
http://arxiv.org/abs/2310.18179v1
{ "authors": [ "J. Karpie", "R. M. Whitehill", "W. Melnitchouk", "C. Monahan", "K. Orginos", "J. -W. Qiu", "D. G. Richards", "N. Sato", "S. Zafeiropoulos" ], "categories": [ "hep-ph", "hep-lat" ], "primary_category": "hep-ph", "published": "20231027144646", "title": "Gluon helicity from global analysis of experimental data and lattice QCD Ioffe time distributions" }
Gate-tunable topological superconductivity in a supramolecular electron spin lattice. Rémy Pawlak,^1∗† Jung-Ching Liu,^1† Chao Li,^1† Richard Hess,^1† Hongyan Chen,^2 Carl Drechsel,^1 Ping Zhou,^3 Robert Häner,^3 Ulrich Aschauer,^3,4 Thilo Glatzel,^1 Silvio Decurtins,^3 Daniel Loss,^1 Jelena Klinovaja,^1 Shi-Xia Liu,^3∗ Wulf Wulfhekel,^2 & Ernst Meyer^1. ^1Department of Physics, University of Basel, Klingelbergstrasse 82, 4056 Basel, Switzerland. ^2Physikalisches Institut, Karlsruhe Institute of Technology, Wolfgang-Gaede-Str. 1, 76131 Karlsruhe, Germany. ^3Department of Chemistry, Biochemistry and Pharmaceutical Sciences, University of Bern, Freiestrasse 3, 3012 Bern, Switzerland. ^4Department of Chemistry and Physics of Materials, University of Salzburg, Jakob-Haringer-Strasse 2A, 5020 Salzburg, Austria. ^†These authors contributed equally; ^∗To whom correspondence should be addressed; E-mails: [email protected], [email protected] ======================================== Topological superconductivity emerges in chains or arrays of magnetic atoms coupled to a superconductor. However, the external controllability of such systems with gate voltages is crucial for their future implementation in a topological quantum computer. Here we showcase the supramolecular assembly of radical molecules on Pb(111), whose discharge is controlled by the tip of a scanning tunneling microscope. Charged molecules carry a spin-1/2 state, as confirmed by observing Yu-Shiba-Rusinov in-gap states by tunneling spectroscopy at millikelvin temperature. Low-energy modes are localized at island boundaries with an exponential decay towards the interior, and their spectral signature is consistent with Majorana modes protected by mirror symmetry. Our results open up a vast playground for the synthesis of gate-tunable organic topological superconductors. *One sentence summary: Radical molecules placed on superconducting Pb(111) represent a novel platform for topological superconductivity. *Keywords: Topological crystalline superconductor, tetraazapyrene radicals, scanning tunneling microscopy, atomic force microscopy, molecular quantum dot, Yu-Shiba-Rusinov states. § INTRODUCTION Majorana zero modes (MZM) in topological superconductors obey non-Abelian statistics and are considered the most promising building blocks for constructing topological qubits <cit.>. Topological superconductivity (TS) can be obtained in hybrid systems by coupling semiconducting nanowires with strong spin-orbit interaction <cit.>, ferromagnetic atomic chains <cit.>, or magnetic islands <cit.> to an s-wave superconductor.
One signature of MZMs is a zero-energy conductance peak, measured with a scanning tunneling microscope (STM) or in transport experiments. This zero-energy conductance peak appears close to the zero-dimensional boundary separating the trivial from the topological region. Higher dimensional boundariesinstead, lead to the formation of propagating edge states, namely chiral Majorana modes. Since local disorder can close the gap of non-trivial phaseby severely affecting the proximitized superconducting states<cit.>, assigning Majorana modes requires a fundamental understanding of the system parameters with high spectral resolution down to the atomic level<cit.>. The ability of STM to create and probe lattices with single-atom precision using manipulation techniques has offered unique opportunities for realizing designer quantum materials at the atomic scale <cit.>. With the spectral resolution of scanning tunneling spectroscopy (STS), the detection of Yu-Shiba-Rusinov (YSR) states arising from magnetic moments on a superconductor <cit.> has revealed how important surface coordination<cit.>, interatomic coupling <cit.>, or magnetic anisotropy<cit.> are to the formation of a complex topological phasediagram. Moreover, the experimental analysis of these YSR bands in atomic structures has revealed the emergence of topological non-trivial phases by probing localized zero-bias peaks consistent with their topological origin, and thus attributed to MZMs<cit.>.Beyond densely-packed atomic structures governed by nearest neighbor exchange interaction <cit.>, dilute spin chains or two-dimensional "Shiba" lattices are also an exciting platform for the emergence of topological superconductivity <cit.>.By increasing the spacing a between magnetic impurities while keeping it smaller than the coherence length of the superconductor ξ, the YSR band formation remains possible by the in-gap state hybridization over a large number of nearest neighbors mediated by Ruderman–Kittel–Kasuya–Yosida interaction <cit.>. In such a regime (i.e. k_ Fa ≫ 1 withk_ F being the Fermi wave-vector), two-dimensional ferromagnetic Shiba lattices are predicted to exhibit a rich phase diagram with a large number of phases with high Chern numbers 𝒞 <cit.>, where chiral MZMs are localized at edges of the island with an exponential decay towards the island's interior. Recently, topological phases protected by spatial symmetries have been proposed to occur in a rich variety of topological crystalline superconductors<cit.>. Using STM, the first attempt to build and to probe such atomic lattices showed interesting signatures of edge modes consistent with a mirror-symmetry-protected topological superconductor <cit.>.While the physics of YSR states and edge states can be addressed by tunneling spectroscopy, the control over the chemical potential near these artificial structures, an essential prerequisite for future applications to tune the system with external gate voltages from trivial to topological, remains an open issue. Inspired by recent works on the charge-state control of organic molecules using the electric field of an STM tip <cit.>, our work explores the experimental realization of a two-dimensional spin lattice using the supramolecular assembly of gate-tunable radical molecules on superconducting Pb(111). 
This system could not only serve as a unique starting point for investigating the interplay of a prototypical array of electron spins with a superconductor but also provides a general playground for discoveringcrystalline topological superconductivity in metal-free supramolecular network-superconductor hybrids.§ RESULTS AND DISCUSSION *Supramolecular assembly of TBTAP molecules on superconductingPb(111). As precursor, we used the 4,5,9,10-tetrabromo-1,3,6,8-tetraazapyrene (TBTAP) molecule (Fig. 1A) consisting of an electron acceptor tetraazapyrene backbone equipped with four peripheral bromine atoms <cit.>. We recently showed that TBTAP^∙- radicals with a 1/2 spin state retain a single electron on Ag(111) without using a decoupling layer <cit.>. To obtain large supramolecular domains of more than 100 nanometers in diameter, TBTAP molecules were sublimed in ultra high vacuum on a Pb(111) substrate kept at about 200 K <cit.> (Fig. 1B). A densely packed rectangular network of lattice parameter a_1 = 12.3 Å and b_1 = 17.2 Å (arrows in Fig. 1C), is observed by STM as alternating dark and bright rows. The corresponding image obtained by atomic force microscopy (AFM) (Fig. 1D) shows each Br atom bound to the TAP backbone as bright protrusion allowing us to assign the exact molecule position in the array (see models in Fig. 1D). Similar to STM imaging, two AFM contrasts are observed as a function of the considered rows denoted in the following as charged (c) (dashed line)and neutral (n) (dotted line), which will be discussed later. Using density functional theory (DFT) <cit.>, we relaxed the TBTAP network on Pb(111) (Fig. 1E, supplementary text and fig. S1). The assembly is in registry with the Pb(111) surface in agreement with the experimental data (a_1 = 12.1 Å and b_1 = 17.5 Å , fig. S1). Molecules are stabilized by a combination of halogen bonds between Br atoms (C–Br...Br–C) and TAP units (C–N...Br–C).Dashed and dotted lines correspond to rows of charged (c) and neutral (n) molecules, respectively.Molecules lie flat in a plane 3.4 Å above the surface, suggesting that the variation of STM/AFM contrasts between neighboring rows is due to the coexistence of two molecule charge states in the network <cit.>, rather than a difference in relative height (see fig. S1).*Electrical control of molecule's charge state in the assembly. To confirm this, we compared dI/dV point-spectraof TBTAP molecules located in c and n rows, respectively (Fig. 2B). Molecule c (blue spectra) shows a strong resonance D at V_ D ≈ 1 V, which is absent for molecule n (red spectra). The D resonance is assigned to a charge-state transition induced by the local electric field of the tip from the anionic TBTAP^∙- molecule to its neutral TBTAP^0 counterpart<cit.>. Without gating, radical TBTAP^∙- molecules are obtained by the transfer of one electron from the surface to the lowest unoccupied molecular orbitals (LUMO) <cit.>, leading to the LUMO splitting into a singly-unoccupied molecular orbital (SUMO) and a singly-occupied molecular orbital (SOMO) (see supplementary text, figs. S2-S4). Charging events expected as a dip in dI/dV spectra for negative voltages were not observed for both type of molecules. Figure 2C shows a constant-height dI/dV map acquired at the threshold voltage V_ D. Rings/dots of high conductance centered tomolecules c are the hallmark of a successful discharge, which also indicates the spatial position of the electron in TBTAP^∙- molecules prior to its removal. 
Using the double-junction tunneling barrier (DJTB) model <cit.>, the efficiency with which the tip locally discharges nearby molecules is characterized by the lever arm ℒ, whichat first approximation linearly depends on the tip-sample voltage V_ S and its position with respect to the molecule (supplementary text, figs. S5-S6).Figure <ref>D shows a dI/dV cross-section acquired across n-c-n rows (plain line in Fig. 2A). Discharging rings are absent along n rows since no charge can be extracted from neutral TBTAP^0. The discharge parabola is centered to the TBTAP^∙- with its bottom (≈ 0.9 V) corresponding to the resonance. Due tothe linear voltage-dependency of ℒthe parabola branches expand with increasing V_ S, thus reflecting the increase in size of rings in dI/dV maps with increasing voltages (Figs. <ref>F-I, see supplementary text). At fixedV_ S≥ V_ D, discharging rings form a superlattice of parameter a_2 = 20 Å and b_2 = 39.7 Å and rotated by 30° with respect to the molecular lattice observed in dI/dV mapping (Fig. <ref>C). Their diameters vary between neighboring TBTAP^∙- as the result of a local modulation of the resonance, estimated to ≈ 150 mV by comparing the bottom of each parabola (dashed line in Fig. <ref>E). For V_ S ≥ 1.1 V and when the tip is located between two neighboring molecules, the parabolas start to merge promoting the removal of two electrons 2e from the neighboring molecules (region e ≥ 1). Accordingly, increasing V_ S in a series of spatial dI/dV maps leads to ring expansion (Fig. 2G) followed by their coalescence (Fig. 2H). In contrast to a simple superposition of rings expected for non-interacting quantum dots, their fusion as observed in Figs. 2H-I indicates a cascade discharge along c rows and thus a manifestation of the electron correlation in the supramolecular assembly <cit.> (see supplementary text, fig. S6).*Yu-Shiba-Rusinov bound states of radical molecules. Radical TBTAP^∙- on Pb(111) feature a S = 1/2 ground state with a strong spin-polarization according to DFT calculations (Fig. 3A). We probed YSR bound states of radical TBTAP^∙- in the middle of a molecular island using tunneling spectroscopy with a metallic tip at T = 35 mK (Fig. 3B) <cit.>. Figure <ref>C compares dI/dV spectra of three representative TBTAP^∙- molecules marked in Fig. <ref>B (black spectra, Fig. <ref>C) with that of Pb(111) (blue) and a neutral TBTAP^0 (red). For the last two, a hard gap centered to E_ F and framed by the two coherence peaks of Pb(111) <cit.> is systematically measured without in-gap states. Each TBTAP^∙- spectrum additionally shows one pair of YSR states at energies ε _α = ± 460 µeV, ε _β = ± 720 µeV and ε _γ = ± 940 µeV (dotted lines), resulting from the spin-1/2 nature of radical TBTAP^∙-.By applying an out-of-plane magnetic field of 0.5 T, we also quenched the superconductivity state to probe the Kondo resonance (figs. S8) and estimated its Kondo temperature T_ K to be 10.3 K (figs. S7) <cit.>. Electron-like and hole-like wave-functions of TBTAP^∙- were also probed by dI/dV mapping at the ε_α^±, ε_β^± and ε_γ^± energies (Figs. <ref>E-G), respectively. The typical donut-shape is similar to the spin density map (Fig. <ref>A), while their energiesdepends on the molecule positions in the assembly. We also infer that the shift of the YSR states to higher energies as compared to that of the isolated molecule (fig. S9) and theirspatial distribution points to a coupling of the quasi-particle excitations within the supramolecular network <cit.>. 
Figure <ref>G shows a dI/dV cross-section across the island (red arrow of Fig. <ref>D), where white dotted lines refer to the ε_α,β,γ^± YSR energies. Broader resonances near E_ F coexist with the YSR peaks as marked by black arrows in Figs. <ref>C and G. These low-energy modes (LEM) are systematically observed with the highest magnitude for molecules aligned along the white dashed lines marked in the zero-energy dI/dV map of Fig. <ref>D. *Spectral signature and localization length of low-energy modes near an island edge. The intrinsic electron-hole symmetry of zero-energy modes, imposed by the Bogoliubov quasi-particle character, can be probed by tunneling spectroscopy using superconducting STM tips (Δ_ T = 1.35 meV is the superconducting pairing energy of the tip). Experimentally, a zero-energy peak appears in dI/dV spectra as a pair of peaks of equal amplitude shifted from zero to the finite voltages eV = ± Δ_ T, while the superconducting edge is observed at ± (Δ_ T + Δ_ S) = ± 2.7 meV (Δ_ S = 1.35 meV is the superconducting gap of the substrate).Using bulk Pb tips at T = 1 K (Fig. <ref>) <cit.>, we confirmed the presence of YSR in-gap states at the TBTAP^∙-locations by dI/dV point-spectra (Fig. <ref>B) acquired along seven TBTAP^∙- molecules of a c row (Fig. <ref>A, fig. S12). Due to the larger thermal broadening of ≈ 90-100 µeV at 1K, the accurate assignment of the ε_α, β, γenergies is less evident than that of the millikelvin measurements since these peaks merge into a single resonance found at eV = ± (Δ_ T +ε _α, β, γ) ≈ ± 2.1 meV. Note also that the YSR states are always accompanied by a broader resonance near zero-energy (i.e. ± Δ_T) which is the fingerprint of the low-energy modes using superconducting tips. The constant-height dI/dV maps of Figs. <ref>D and E compare the spatial distribution near the edge of an island of the hole-like wavefunctions extracted at ε^+ with the LEM wavefunctions at +Δ_T. While the DOS at the YSR energy is homogeneous along the c rows, the LEM lines emerge from the ferromagnetic edge schematized in the model of Fig. <ref>F. They propagate along the direction rotated by 60° with respect to the edge corresponding to a ferromagnetic direction of the spin structure. Figure <ref>G shows a dI/dV(V,X) cross-section acquired along one LEM line marked by a blue dashed line in Fig. <ref>E. All sub-gap excitations now appear at zero energy with equal amplitudes between electron-like and hole-like regions (Fig. <ref>H). This observation, in stark contrast with the strong intrinsic electron-hole asymmetry of the YSR resonances (Fig. <ref>B), underlines the zero-energy character of these edge modes. We next characterize the LEM localization length (Fig. <ref>I) by comparing dI/dV(X) profiles along the LEM line (blue) with that obtained at the YSRenergy (gray) (see also supplementary text, fig. S13). In contrast to the continuous DOS at ε^+, the LEM wavefunction has a maximum amplitude at the border of the island (X = 0)and decays towards the interior but without completely vanishing. In Fig. <ref>I, we estimated the experimental decay of the edge mode by fitting its envelope (dashed line) with a function f(x)composed of two exponents representing the short ξ_1 = 3 nm and the long ξ_2 = 110 nm localization length<cit.>. We explain it by considering the TBTAP^∙- network as a lattice of spin-1/2 impurities with long-range YSR overlap coupled to a superconductor, as reported in References <cit.>.*Theoretical analysis. 
To further rationalize our findings, we used a tight binding model on a rectangular spin lattice similar to one introduced by Soldini et al. <cit.> in order to describe the spatial-symmetry-protected topological order of an antiferromagnet-superconductor hybrid structure <cit.>. As suggested by our tunneling measurements (fig. S10), we assume an antiferromagnetic ordering of the spin (schematized by red and blue arrows in Fig. <ref>A andFig. 1E), along the lattice imposed by the TBTAP molecular assembly (dashed line). Based on our STM observations (Fig. <ref>B and Fig. <ref>C), we construct a prototypical "Shiba" island mimicking the supramolecular network boundaries by considering only ferromagnetic edges along the [110] directions with respect to the molecular lattice (red line in Fig. <ref>A). Figure <ref>B shows the calculated zero-energy LDOS map of the system, which demonstrates the formation of edge modes in agreement with reference <cit.>. Theoretical LDOS spectra are plotted in Figs. <ref>C and D for two edge positions marked by green and red squares in Fig. <ref>B, respectively. These edge modes have two typical spectral signatures consisting of either two peaks of equal amplitude split from zero energy (green in Fig. <ref>C) or three resonances centered to zero energy (red in Fig. <ref>C). They are both framed by a topological gap (±Δ_ Top) extracted at the center of the island (yellow spectra in Figs. <ref>C-D) as well as the superconducting gapat ±Δ_ S (black spectra). Importantly, the 45°-edges of the antiferromagnetic island respect the underlying spatial symmetries, namely mirror symmetries, <cit.>, such that a gapped topological crystalline superconducting phase with topological edge states can form. The experimental spectroscopic signatures of the LEM (Figs. <ref>E and F), acquired near an edge at 50 mK with a metallic tip (see figs. S14 and S15), is in good agreement with the theoretical predictions of these topological edge modes.§ CONCLUSION AND OUTLOOKIn conclusion, we demonstrate the formation of an extended array of electron spins on superconducting Pb(111) through the supramolecular assembly of organic radicals. Occupied by a single electron transferred from the substrate, radical molecules are in a spin-1/2 ground state confirmed by probing one pair of Yu-Shiba-Rusinov in-gap states in differential conductance spectra. In the two-dimensional supramolecular assembly, spectroscopic signatures of low energy modes (LEM) are observed in tunneling spectroscopy near edges of the island with a long decay towards the interior. Using both metallic and superconducting tips, we characterized the near zero-energy character of this resonance, its intrinsic particle-hole symmetry, the localization length as well as site-dependent spectral signatures. Altogether, these key features confirmed by theory are consistent with the emergence of topological non-trivial modes in an antiferromagnet-superconductor hybrid structure that can be assigned to Majorana modes <cit.>. Such a spatial-symmetry-protected topological superconductor has a complex phase diagram which crucially depends on the edge terminations of the system boundaries (fig. S10) as well as the lattice parameter a (i.e. hopping parameters t) <cit.>. In fig. S10 (see supplementary text), we also explored disorders or alternative boundaries that can break the mirror symmetry of the system. 
Since a depends on the molecular spacing, future work could explore the design of the precursor's side groups to access variable sizes and lattice symmetries on alternative superconducting platforms <cit.>. Importantly, our findings demonstrate the reversible control of the charge (spin) in radical molecules by the local electric field of the tip, opening interesting avenues for the fine-tuning of the system with external gate voltages. Creating local charge defects in the molecular assembly using probe chemistry <cit.> might also allow to investigate the effect of disorder on the topological phase as well as the topological edge modes. Overall, our work constitutes key advances in designing gate-tunable organic topological superconductors by the self-assembly of organic metal-free molecules in proximity to a superconducting substrate.§ ACKNOWLEDGMENTSFunding: E.M. and R.P. acknowledge funding from the Swiss Nanoscience Institute (SNI) and the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (ULTRADISS grant agreement No 834402 and supports as a part of NCCR SPIN, a National Centre of Competence (or Excellence) in Research, funded by the SNF (grant number 51NF40-180604). E.M., T.G. and S.-X.L. acknowledge the Sinergia Project funded by the SNF (CRSII5_213533). E.M., T.G. and R.P. acknowledge the SNF grant (200020_188445). T.G. acknowledges the FET-Open program (Q-AFM grant agreement No 828966) of the European Commission. S.-X. L. acknowledges the grant from the SNF (200021_204053). J.-C.L. acknowledges funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement number 847471. U.A. acknowledges funding by the SNF Professorship (Grant No. PP00P2 187185/2). R.H. acknowledges the European Union’s Horizon 2020 research and innovation programme under Grant Agreement No. 862046 and the ERC grant under Grant Agreement No.757725. Calculations were performed on UBELIX (http://www.id.unibe.ch/hpc), the HPC cluster at the University of Bern. C.L. acknowledges the Georg H. Endress Foundation for financial support. W. W. gratefully acknowledge financial support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) through the Collaborative Research Centre "4f for Future" (CRC 1573, project number 471424360) project B2.R.P. acknowledges Ruslan Temirov for the fruitful discussions on the improvement of the spectral resolution of the Basel microscope. Authors contributions: R.P., S.-X.L., S.D and E.M. conceived the experiments. P.Z., Ro.H., S.-X.L. and S.D. synthesized the materials. C.D., C.L., and R.P. performed the STM/AFM measurements at 1 K. J.-C.L., H.C. and W.W. performed the millikelvin measurements. U.A. performed DFT calculations. Ri.H., D.L. and J.K. performed tight-binding calculations. R.P. analyzed the data and wrote the manuscript. All authors discussed on the results and revised the manuscript. Competing interests: The authors declare no competing financial interests. Data and materials availability: Data in formats other than those presented within this paper are available from the corresponding authors upon reasonable request.§ SUPPLEMENTARY MATERIALSMaterials and MethodsSupplementary TextFigs. S1 to S15References (50-69+)Science
http://arxiv.org/abs/2310.18134v2
{ "authors": [ "Rémy Pawlak", "Jung-Ching Liu", "Chao Li", "Richard Hess", "Hongyan Chen", "Carl Drechsel", "Ping Zhou", "Robert Häner", "Ulrich Aschauer", "Thilo Glatzel", "Silvio Decurtins", "Daniel Loss", "Jelena Klinovaja", "Shi-Xia Liu", "Wulf Wulfhekel", "Ernst Meyer" ], "categories": [ "cond-mat.supr-con" ], "primary_category": "cond-mat.supr-con", "published": "20231027132509", "title": "Gate-tunable topological superconductivity in a supramolecular electron spin lattice" }
Practical application of quantum neural network to materials informatics: prediction of the melting points of metal oxides. Hirotoshi Hirai, e-mail: [email protected], Toyota Central R&D Labs., Inc., 41-1, Yokomichi, Nagakute, Aichi 480-1192, Japan ======================================== We study the effects of Prandtl number Pr and Rayleigh number Ra in two-dimensional Rayleigh-Bénard convection without boundaries, i.e. with periodic boundary conditions. In the limits of Pr → 0 and Pr → ∞, we find that the dynamics are dominated by vertically oriented elevator modes that grow without bound, even at high Rayleigh numbers and with large scale dissipation. For finite Prandtl number in the range 10^-3 ≤ Pr ≤ 10^2, the Nusselt number tends to follow the `ultimate' scaling Nu ∝ Pr^1/2 Ra^1/2, and the viscous dissipation scales as ϵ_ν ∝ Pr^1/2 Ra^-1/4. The latter scaling is based on the observation that the enstrophy ⟨ω^2⟩ ∝ Pr^0 Ra^1/4. The inverse cascade of kinetic energy forms the power-law spectrum E_u(k) ∝ k^-2.3, while the direct cascade of potential energy forms the power-law spectrum E_θ(k) ∝ k^-1.2, with the exponents and the turbulent convective dynamics in the inertial range found to be independent of Prandtl number. Finally, the kinetic and potential energy fluxes are not constant in the inertial range, invalidating one of the assumptions underlying Bolgiano-Obukhov phenomenology. § INTRODUCTION The fundamental challenge of understanding thermally driven flow in the strongly nonlinear regime has puzzled the fluid dynamics community for more than a century <cit.>. The typical set-up that is studied theoretically consists of a fluid confined between two horizontal plates that is heated from below and cooled at the top. This is the so-called Rayleigh-Bénard convection (RBC) problem, after Rayleigh's proposed model <cit.> for Bénard's experiment on buoyancy-driven thermal convection <cit.>. The classical problem depends on two dimensionless parameters: the Rayleigh number Ra, which measures the driving effect of buoyancy relative to the stabilising effects of viscosity and thermal diffusivity; and the Prandtl number Pr, which is the ratio of kinematic viscosity to thermal diffusivity. The global flow properties may be characterised by the Nusselt number Nu, a further dimensionless parameter that measures the total heat flux relative to the purely conductive heat flux. For stably stratified turbulence, <cit.> and <cit.> proposed the so-called Bolgiano–Obukhov (BO) phenomenology, in which buoyancy balances inertia in the momentum equation and the potential energy flux is approximately constant for length scales larger than the Bolgiano scale. These assumptions lead to kinetic and potential energy spectra of the form E_u(k) ∝ k^-11/5 and E_θ(k) ∝ k^-7/5, respectively, where k is the wavenumber. Despite originally being proposed for stably stratified flows, BO scaling has been reported for three-dimensional (3D) convection <cit.>. However, its existence is still debatable <cit.>, with some studies reporting <cit.> a power-law scaling ∝ k^-5/3 for both the kinetic and potential energy spectra <cit.>.
Similar debate remains in simulations of two-dimensional (2D) RBC with periodic boundary conditions, with some studies arguing for <cit.> and some against <cit.> the validity of BO scaling. For the strongly nonlinear regime of thermal convection, which is of paramount importance for geophysical and astrophysical applications, there are two competing theories for the behaviour of Nu as Ra tends to infinity for arbitrary Pr. These two proposed asymptotic scaling laws are the `classical' theory Nu ∝ Pr^0 Ra^1/3 by <cit.> and the `ultimate' theory Nu ∝ Pr^1/2 Ra^1/2 by <cit.>. The Rayleigh number at which the transition to the ultimate scaling is presumed to occur is not known, and laboratory experiments and numerical simulations <cit.> have reported different power-laws over a wide range of Rayleigh numbers. Boundary conditions and boundary layers strongly affect the turbulence properties of RBC <cit.>. In the analogy of RBC with geophysical phenomena, the top and bottom boundaries are often absent, particularly when the focus is on understanding the dynamics of the bulk flow. In this study, we choose the most obvious theoretical approach to bypass boundary layer effects by considering a fully periodic domain <cit.> for the Rayleigh-Bénard problem, with an imposed constant, vertical temperature gradient. For this homogeneous RBC set-up it has been claimed that Nu ∝ Ra^1/2 <cit.>, in line with the ultimate theory, and this scaling is also suggested by simulations of axially periodic RBC in a vertical cylinder <cit.>. Homogeneous RBC, however, exhibits exponentially growing solutions in the form of axially uniform vertical jets, called “elevator modes” <cit.>. As soon as these modes grow to a significant amplitude, they experience secondary instabilities, ultimately leading to statistically stationary solutions <cit.>. In recent numerical simulations the elevator modes were suppressed by introducing an artificial horizontal buoyancy field <cit.> or large-scale friction <cit.>. The inverse cascade that is observed in 2D homogeneous RBC <cit.> is another source of energy for the large-scale modes, which grow to extreme values and form a condensate whose amplitude saturates when the viscous dissipation at the largest scale balances the energy injection <cit.>. So, in this study, to avoid the unbounded growth of energy, we include large-scale dissipation to mimic the effect of friction that boundaries would provide and to be able to reach statistically stationary solutions. Most of the attention on two-dimensional (2D) RBC focuses on the Rayleigh number dependence of the dynamics, with only a few studies (e.g., <cit.>) considering the effects of the Prandtl number. In this paper, we extensively study the effects of the Prandtl number and the Rayleigh number using numerical simulations of 2D RBC in a periodic domain driven by a constant temperature gradient, while also considering hyperviscous simulations to permit the large scale separation that is crucial for the analysis of the multi-scale dynamics. Sec. <ref> contains the dynamical equations, numerical methods and the definitions of the spectral and global variables under study. The results of our simulations are presented in Sec. <ref>. After briefly investigating the behaviour of the system in the limits of zero and infinite Prandtl number, we analyse how the global variables scale with the Prandtl and Rayleigh numbers and then discuss the effects of these dimensionless parameters on the spectral dynamics. Finally, we summarise our conclusions in Sec.
<ref>. § PROBLEM DESCRIPTION §.§ Governing equations We consider two-dimensional Rayleigh–Bénard convection of a fluid heated from below in a periodic square cell (x,y)∈[0,L]^2. The temperature T(x,y,t) is decomposed as T=-Δ T y/L+θ, where Δ T/L is the constant imposed temperature gradient and the temperature perturbation θ(x,y,t) satisfies periodic boundary conditions. As usual, for simplicity we employ the Oberbeck–Boussinesq approximation <cit.>, in which the kinematic viscosity ν and the thermal diffusivity κ are taken to be constant, while the temperature dependence of the fluid density ρ is neglected except in the buoyancy term of the momentum equation. The governing equations of the problem in two dimensions can be written in terms of θ(x,y,t) and the streamfunction ψ(x, y, t) as follows: ∂_t ∇^2ψ + {ψ, ∇^2 ψ} = α g ∂_x θ + (-1)^n+1ν∇^2n+2ψ + μψ, ∂_t θ + {ψ, θ} = (Δ T/L) ∂_x ψ + (-1)^n+1κ∇^2nθ, where {A, B} = ∂_x A ∂_y B - ∂_y A ∂_x B is the standard Poisson bracket, α is the thermal expansion coefficient and g is the gravitational acceleration. To prevent the formation of a large scale condensate in the presence of an inverse cascade, to avoid the elevator modes and to reach a turbulent stationary regime, we supplement our system with a large scale dissipative term μψ that is responsible for saturating the inverse cascade. We consider both normal viscosity and hyperviscosity, corresponding to n=1 and n=4, respectively. The hyperviscous case, albeit not physically realisable, gives a wider inertial range, as diffusive and viscous terms kick in abruptly at much smaller scales compared to the normal viscosity case. In the limit of ν→ 0, κ→ 0, and μ→ 0 the quantity that is conserved is E_u - (α g L/Δ T) E_θ, where the kinetic energy E_u and the potential energy E_θ are defined by E_u = 1/2⟨|∇ψ|^2⟩, E_θ = 1/2⟨θ^2⟩, with the angle brackets ⟨·⟩ here denoting the spatiotemporal average. Equations (<ref>) depend on three dimensionless parameters, namely Pr = ν/κ, Ra = α gΔ T L^4n-1/νκ, Re_f = μ(L^5/α g Δ T)^1/2, which are the Prandtl number, Rayleigh number and friction Reynolds number, respectively, in accordance with the scalings x∼ L, t ∼ L^2n/κ, ψ∼κ/L^2n-2, θ∼Δ T. We perform direct numerical simulations (DNS) of Eqs. (<ref>) using the pseudospectral method <cit.>. We decompose the stream function into basis functions with Fourier modes in both the x and y directions, viz. ψ(x,t)= ∑^N/2_k = -N/2ψ_k(t) e^ik·𝐱, where ψ_k is the amplitude of the k = (k_x, k_y) mode of ψ, and N denotes the number of aliased modes in the x- and y-directions. We decompose θ in the same way. A third-order Runge-Kutta scheme is used for time advancement and the aliasing errors are removed with the two-thirds dealiasing rule <cit.>. In both the normal and hyperviscous simulations, we find that Re_f = (2π)^5/2 yields a saturated turbulent state that dissipates enough kinetic energy at large scales such that the kinetic energy spectrum peaks at k = 2 without over-damping the system. So, we fix Re_f = (2π)^5/2 ≃ 100 while varying the Rayleigh and Prandtl numbers in the ranges 6.2 × 10^7 ≤ Ra ≤ 6.2 × 10^11 and 10^-3 ≤ Pr ≤ 10^2. To model large Rayleigh number dynamics, we set Ra = 9.4 × 10^49 in our hyperviscous simulations. Fig.
<ref> shows the parameter values simulated in the (,)-plane as well as the resolution, N, used in each case.Time-averaged quantities are computedover 1000 realisations once the system has reached a statistically stationary regime, sufficiently separated in time (at least 5000 numerical time steps) to ensure statistically independent realisations.§.§ Global and spectral quantities Next we briefly outline the global and spectral flow properties that will be explored in our numerical simulations below. The energy spectra of the velocity field E_u(k,t) and the temperature field E_θ(k,t), referred to as the kinetic energy and potential energy spectra, are defined asE_u(k,t)= 1/2∑_k ≤ | k| < k+Δ k k^2| ψ_ k(t) |^2, E_θ(k,t)= 1/2∑_k ≤ | k| < k+Δ k| θ_ k(t) |^2, where the sum is performed over the Fourier modes with wavenumber amplitude k = | k| = √(k_x^2 + k_y^2) in a shell of width Δ k = 2π/L.Using the Fourier transform, one can derive the evolution equations of kinetic and potential energy spectra from Eqs. (<ref>), namely∂_tE_u(k,t)=- ∂_k Π_u(k,t) - D_ν(k,t) - D_μ(k,t) + α g F_B(k,t),∂_tE_θ(k,t)= -∂_k Π_θ(k,t) - D_κ(k,t) + Δ T/LF_B(k,t).The energy flux Π is a measure of the nonlinear cascades in turbulence <cit.>. The energy flux for a circle of radius k in the 2D wavenumber space is the total energy transferred from the modes within the circle to the modes outside the circle. Consequently, we define the flux of kinetic energy Π_u(k,t) and potential energy Π_θ(k,t) as Π_u(k,t)= ∑_k' ≤ k T_u(k',t),Π_θ(k,t)= ∑_k' ≤ k T_θ(k',t), where T_u(k,t) and T_θ(k,t) are thenon-linear kinetic and potential energy transfer across k: T_u(k,t)= -∑_k ≤ | k| < k+Δ kψ^*_ k(t) {ψ,∇^2 ψ}_ k(t), T_θ(k,t)= ∑_k ≤ | k| < k+Δ kθ^*_ k(t) {ψ,θ}_ k(t). The notation { . }_ k represents the Fourier mode of the Poisson bracket expandedusing Eq. (<ref>), and the asterisk denotes complex conjugation.The spectra of the small-scale viscous dissipation D_ν(k,t), the large-scale friction D_μ(k,t) and the thermal dissipation D_κ(k,t) are defined asD_ν(k,t)= 2 ν k^2n E_u(k,t), D_μ(k,t)= 2 μ k^-2 E_u(k,t), D_κ(k,t)= 2 κ k^2n E_θ(k,t), and the buoyancy term F_B is given byF_B(k,t)= ∑_k ≤ | k| < k+Δ k i k_x ψ^*_ k(t) θ_ k(t). The Nusselt number is a dimensionless measure of the averaged vertical heat flux, defined mathematically byΝ = 1 + _x ψ θ/κΔ T / L^2n-1.Using the above definition, one can derive the following exact relations for the kinetic and potential energy balances in the statistically stationary regime, as in <cit.>: ϵ_u= ϵ_ν + ϵ_μ = ν^3/L^6n-2(Ν - 1)/^2 ϵ_θ = ϵ_κ = κΔ T^2/L^2n (Ν - 1) where ϵ_u = α g ψ _x θ is the injection rate of kinetic energy due to buoyancy, ϵ_ν = ν⟨ψ∇^2(n+1)ψ⟩ is the viscous dissipation rate, ϵ_μ = μ⟨ψ^2⟩ is the large scale dissipation rate, ϵ_θ = Δ T/L_x ψ θ is the injection rate of potential energy due to buoyancy and ϵ_κ = κ⟨θ∇^2nθ⟩ is the thermal dissipation rate. §.§ Elevator modes Upon linearising (<ref>) about the conductive state (ψ=θ=0), we find that infinitesimal solutions with ψ(𝐱,t)=e^i 𝐤·𝐱+σ t are possible provided the normalised linear growth rate σ satisfies the relation(σ+k^2n) (σ+ k^2n+√()/k^2) = (2π k_x)^2/k^2,which has two real roots for σ. 
From these two roots, one is positive if and only if√(/)< (2π k_x)^2/k^2n-k^2n+2.One can show that σ is a monotonic decreasing function of k_y, so the most dangerous modes are independent of y, withψ(𝐱,t)=e^ik_x x+σ t,k_x∈ℤ_>0.Indeed, such a unidirectional mode satisfies the nonlinear governing equations (<ref>) exactly (because the nonlinear Poisson bracket terms are identically zero).Although the maximum growth rate does not necessarily occur at the minimum wavenumber k_x=1, it is straightforward to show that (<ref>) is satisfied for some (k_x,k_y) if and only if it is satisfied at (k_x,k_y)=(1,0). In other words, exact solutions of the problem (<ref>) that grow exponentially without bound exist whenever√(/)< /(2π)^2(n-1)-(2π)^2n+2. § RESULTS §.§ Zero and infinite Prandtl numberBefore considering the →∞ limit, to simplify the analysis let's write the non-dimensional form of Eqs. (<ref>) in accordance with Eqs. (<ref>), which yield_t ∇^2ψ + {ψ, ∇^2 ψ} =_x θ + (-1)^n+1∇^2(n+1)ψ + √(/) ψ,_t θ + {ψ, θ} =_x ψ + (-1)^n+1∇^2nθ. As we taketo infinity, we ensure that √(/) is finite to maintain the effects of the large-scale dissipation. The system (<ref>) thus reduces to_x θ = [(-1)^n ∇^2(n+1)-√(/)]ψ,_t θ + {ψ, θ} =_x ψ + (-1)^n+1∇^2nθ.Upon linearising about the conductive state (ψ = θ = 0), we derive the dispersion relation σ = (2π k_x)^2/k^2(n+1)+√(/) -k^2n.The growth rate, σ, is therefore largest when k_y=0, so a mode with no y dependence, i.e.θ = Be^i k_x x + σ t + c.c.,will grow at the onset of convection. Note that (<ref>) is an exact solution to (<ref>) and unbounded solutions with σ(k_x) > 0 are possible if > (2π)^4n + √(/) (2π)^2(n-1), which ensures that the mode with k_x = 1, k_y = 0 is unstable. However, it is not necessarily the ψ_1,0 mode that is the most unstable. In Figure <ref> we plot the growth rate, σ, versus k_x in the →∞ limit at k_y = 0, = 10^7, n=1, and four different values of √(/). The growth rate peaks at k_x = 1, k_x = 1, k_x = 3, and k_x = 4 as we increase √(/). In general, the maximum growth rate occurs when k_x ≈1/2π[/2(√(8B+1)-2B-1 ) ]^1/4,where B = /√()<1. However, when the ψ_1,0 mode is stable, then all other modes are as well. We note that the mode with wavenumber k_x is suppressed if√(/)> ( - (2π k_x)^4n)/(2π k_x)^2(n-1).In fact, the inequaltiy (<ref>) holds true for all Prandtl numbers, not just in the →∞ limit. In the complementary limit of zero Prandtl number, we have to rescale the variables according to {ψ,θ,t }↦{ ψ, θ,t/}before letting → 0, which removes the time derivative and advective term from the heat transport equation (<ref>). As in the →∞ case, we maintain the effects of the large-scale dissipation by ensuring that √(/) is finite. This process reduces the system (<ref>) to_t ∇^2ψ + {ψ, ∇^2 ψ} =_x θ + (-1)^n+1∇^2(n+1)ψ + √(/)ψ,_x ψ =(-1)^n∇^2nθ.Again we linearise about the conductive state and find exact exponential solutions to (<ref>), of the formψ = Ae^i 2πk· x + σ t + c.c.,with growth rateσ =(2π k_x)^2/k^2(n+1) -k^2n - √(/) k^-2.Again, unbounded solutions with σ(k_x) > 0 are possible. However, in this case one can prove that the ψ_1,0 mode is always the most unstable; see Appendix <ref>. In Figure <ref> we plot time series of the kinetic energy, E_u in both the → 0 and →∞ limits. We use normal viscosity (i.e. n=1), we set =10^7 for four different values of √(/), and the simulations are initialised with random initial data. 
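The growth rates written above are easy to scan directly. In the sketch below the nondimensional buoyancy and friction prefactors, which are fixed combinations of Ra, Pr and Re_f determined by the nondimensionalisation of Eqs. (<ref>), are treated as free inputs A and B (the numerical values used are purely illustrative), and the most unstable k_y = 0 elevator mode is located in each limit.

import numpy as np

def sigma_small_Pr(kx, A, B, n=1):
    # Pr -> 0 limit: sigma = A (2 pi kx)^2 / k^(2(n+1)) - k^(2n) - B / k^2, with k = 2 pi kx (k_y = 0).
    k = 2.0 * np.pi * kx
    return A * k**2 / k**(2 * (n + 1)) - k**(2 * n) - B / k**2

def sigma_large_Pr(kx, A, B, n=1):
    # Pr -> infinity limit: sigma = A (2 pi kx)^2 / (k^(2(n+1)) + B) - k^(2n).
    k = 2.0 * np.pi * kx
    return A * k**2 / (k**(2 * (n + 1)) + B) - k**(2 * n)

A, B = 1.0e7, 9.0e6            # illustrative values of the two prefactors only
kx = np.arange(1, 33, dtype=float)
for label, sig in [("Pr -> 0", sigma_small_Pr(kx, A, B)), ("Pr -> inf", sigma_large_Pr(kx, A, B))]:
    n_unstable = int(np.sum(sig > 0.0))
    print(f"{label}: most unstable k_x = {int(kx[np.argmax(sig)])}, number of growing modes = {n_unstable}")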
The flow converges to a statistically steady state only in the case where →∞ and √(/) = 9 × 10^6, a value only 10% smaller than the maximum value given by (<ref>) at which all convection is suppressed. For all other parameter values attempted, an elevator mode takes over the dynamics and the energy grows without bound. At present, we are not able to reliably obtain turbulent saturated states in the extreme Prandtl number limits, unless the large-scale dissipation is made very strong. Instead, for the remainder of the paper we restrict our attention to finite values of Prandtl number, for which we find that =(2π)^5/2 is sufficient to prevent elevator modes and allow the system to reach turbulent saturated states. §.§ Finite Prandtl number: global variablesIn Fig. <ref> (a)–(c) we show how the global quantities in the kinetic and potential energy balances (<ref>) vary withkeeping = 6.2 × 10^11, while in (d)–(f) we show how the same quantities vary withwhile keeping =1. In both cases we use normal viscosity (n=1) and keep =(2π)^5/2 constant. Firstly, Fig. <ref>(a) and (d) show that the kinetic and potential energies remain virtually constant whileandvary by at least four orders of magnitude. Secondly, Fig. <ref>(b) and (e) showthat ϵ_u ≈ϵ_μ, which indicates that the majority of the kinetic energy, injected by buoyancy in the flow, is dissipated at large scales. This effect is caused by the inverse cascade of kinetic energy, which will be investigated in more detail in Sec. <ref>. Note that ϵ_u and ϵ_μ are almost independent of bothand , while the viscous dissipation scales likeϵ_ν∝^1/2^-1/4.Finally, in Fig. <ref>(c) and (f), we see that ϵ_θ=ϵ_κ, as required by the potential energy balance (<ref>), and both quantities,are also virtually independent of bothand . To explain the scaling of the viscous dissipation rate (<ref>), we now look at how the enstrophy ω^2 varies with the Prandtl and Rayleigh numbers, where ω = ∇^2 ψ is the vorticity of the flow. In Fig. <ref>(a) we plot enstrophyas a function ofwith = 6.2 × 10^11 fixed, and in Fig. <ref>(b) as a function ofwith = 1 fixed. We keep = (2π)^5/2 fixed throughout. From Fig. <ref>(a) we observe that enstrophy can be considered approximately independent of , because it varies by less than a factor of two over the five decades of Prandtl numbers considered, while Fig. <ref>(b) demonstrates that enstrophy scales like ^1/4, i.e.,ω^2∝ ^0 ^1/4.With normal viscosity, by definition we have ϵ_ν = νψ∇^4 ψ = νω^2 and, writing ν ∝ (/)^1/2, we thus obtain the scaling of Eq. (<ref>) that is observed in Fig. <ref>. According to Fig. <ref>, the normalised energy injection rates ϵ_u and ϵ_θ are both approximately constant in the range ofandwe considered. With ≫ 1 and n = 1, the net energy balances (<ref>) thus produce the Nusselt number scalingΝ∝^1/2^1/2.This relation is in agreement with the ultimate scaling. In Fig. <ref>, we plot the Nusselt number compensated by the classical scaling, Ν∝^0^1/3, and by the ultimate scaling, Ν∝^1/2^1/2. The ultimate scaling provides a much more convincing collapse of the data, with the fit becoming increasingly accurate asincreases.Indeed, the ultimate scaling in terms of Rayleigh number dependence was expected to be exhibited by our simulations as we have effectively removed the boundary layers by applying periodic boundary conditions on the computational domain <cit.>. 
In addition, we demonstrate that the Prandtl number dependence follows the ultimate scaling, too.§.§ Finite Prandtl number: spectraIn this section, we examine the time-averaged kinetic and potential energy spectra and spectral fluxes.To eliminate finite Rayleigh number effects and to have large enough scale separation, we focus on the hyperviscous simulations (i.e., n=4 in Eqs. (<ref>)) at = 9.4 × 10^49 and = (2π)^5/2. Results with normal viscosity(i.e. n=1 in Eqs. (<ref>)) at = 6.2 × 10^11 are also presented for comparison. According to the Bolgiano–Obukhov (BO) scaling <cit.>, the ratio of the kinetic to the potential energy spectra scales as E_u(k) /E_θ(k) ∝ k^-4/5, since E_u ∝ k^-11/5 and E_θ∝ k^-7/5.In Fig. <ref> we plot E_u(k) /E_θ(k) compensated by k^4/5 for the hyperviscous simulations and, in the inset, for the runs with normal viscosity. Instead of finding a wavenumber range where this scaling is valid, we observe a k^-0.3 power-law in the inertial range for all Prandtl numbers considered, leading us to the conclusion that BO scaling is not followed in our simulations. For the normal viscosity simulations we find the k^-0.3 power-law again to be followed, but within a narrower wavenumber range (see inset of Fig. <ref>). Fig. <ref> also shows that, for ≪ 1, the kinetic energy is much larger than the potential energy at large wavenumbers. This is expected as the small scales are dominated by thermal diffusivity when ≪ 1 and so the potential energy is dissipated much more effectively than the kinetic energy. The opposite is true for ≫ 1.In Fig. <ref>(a) and (b), we plot the kinetic and potential energy spectra multiplied bypowers of k chosen to best compensate for the power-laws exhibited by the spectra. The hyperviscous runs are shown in the main plots, and normal viscosity runs in the insets.The observed behaviour, with Ê_u(k)∝ k^-2.3 and Ê_θ(k)∝ k^-1.2, is close to, but not fully consistent with, BO phenomenology, according to which the exponents should be -11/5=-2.2 and -7/5=-1.4, respectively. Moreover, the spectra we observe are in contrast to 3D RBC with periodic boundary conditions, where the kinetic and potential energy exhibit k^-5/3 spectra <cit.>, similar to those observed in passive scalar turbulence <cit.>.To understand the turbulent cascades, in Fig. <ref>(c) and (d), we plot the associated kinetic and potential energy fluxes normalised by the time-averaged injection rates of energy due to buoyancy. The positive potential energy flux suggests a strong direct cascade, while there is a weak inverse cascade of kinetic energy, which is typical for 2D turbulence <cit.>. For = 1 these types of cascades are in agreement with <cit.>. The negative kinetic energy flux peaks at low wavenumbers, while the potential energy flux peaks at high wavenumbers.The inverse cascade of kinetic energy is not affected by the Prandtl number; however, the direct cascade of potential energy moves to higher wavenumbers along with the peak of Π_θ(k) asincreases.We emphasise the wavenumber dependence of kinetic and potential energy fluxes, with _k Π_u(k) > 0 and _k Π_θ(k) > 0 in the inertial range of wavenumbers for all of the Prandtl numbers considered.In Fig. <ref>, we present the time-averaged spectra of the magnitudes of the terms in the kinetic and potential energy balances (<ref>) for runs with hyperviscosity and three different values of ∈{10^-2,1,10^2}. The corresponding plots with normal viscosity and =6.2×10^11 are shown in Fig. <ref>. 
In both figures, the red dots indicate where the inertial flux terms ∂_kΠ_u and ∂_kΠ_θ become negative.In the kinetic energy balance, we identify three distinct wavenumber ranges, labeled I to III in the plots.In region I, the kinetic energy injected by buoyancy is dissipated by the large-scale friction.In region II, the inertial term balances buoyancy, which is positive for all wavenumbers in this region, i.e.∂_k Π_u(k) ≈α g F_B(k) > 0.This relation shows how the kinetic energy injected by buoyancy is cascaded to larger scales in the inertial range of wavenumbers and explains the k dependence of the kinetic energy flux we see in Fig. <ref>(c).Note that region II is largest for small Prandtl numbers, especially in the runs with normal viscosity, as evident from the results presented in Fig. <ref>(a)–(c).In region III, the balances between terms depend on the Prandtl number.For = 10^-2 buoyancy decays rapidly and so small-scale viscous dissipation is balanced by the inertial term, which is negative in this range of wavenumbers. Asincreases, buoyancy becomes more significant in the balance of region III between the small-scale viscous dissipation and the inertial term. For ≳10^2, small-scale viscous dissipation seems to be balanced by buoyancy rather than by the inertial term. This effect is shown more clearly in the runs with normal viscosity shown in Fig. <ref>(c),where the small-scale dissipation range is much larger. In the potential energy balance, we identify two distinct wavenumber ranges, labeled A and B in the plots (see (d)–(f) in Plots. <ref> and <ref>).In these plots we observe the inertial term to be balanced by buoyancy in region A and by small- scale thermal dissipation in region B. In other words, the potential energy injected by buoyancy in region A is cascaded to larger wavenumbers, where it is dissipated by thermal diffusivity.We recall that the red dots in Plots. <ref> and <ref> show where the inertial terms of the kinetic and potential energy become negative. This sign change occurs primarily in region III for _k Π_u(k) and region B for _k Π_θ(k), the latter corresponding to the large negative gradient of Π_θ(k) observed in Fig. <ref> at large wavenumbers. Near the boundary between regions A and B in Plots. <ref> and <ref>(d)–(f), we observe that_k Π_θ(k) exhibits fluctuations between positive and negative values over these wavenumbers. However, for the majority of the wavenumbers in region A we find that∂_k Π_θ(k) ≈Δ T/L F_B(k) > 0.This relation explains the k dependence of the potential energy flux we observe in Fig. <ref>(d) and demonstrates that the BO phenomenology, which assumes that the potential energy flux is constant in the inertial range of wavenumbers, does not hold for any of the Prandtl numbers we studied.Note that in the wavenumber range where region II and A overlap, we have ∂_k Π_u(k)/α g ≈ L∂_k Π_θ(k) / Δ T ≈ F_B(k), such that Π_u(k) - α g L/Δ TΠ_θ(k) is approximately constant for all Prandtl numbers considered. This is the inertial range of scales, where the viscous, diffusive and friction effects can be neglected and so the energy flux is constant. This is expected for the quantity E_u - agL/Δ T E_θ, which is constant in the limit of ν→ 0, κ→ 0, and μ→ 0. § CONCLUSIONSIn this paper we study the effects of varying the Prandtl and Rayleigh numbers on the dynamics of two-dimensional Rayleigh-Bénard convection without boundaries, i.e., with periodic boundary conditions.First, we focus on the limits of → 0 and →∞. 
Our findings indicate that, unless large-scale dissipation is made so strong as to almost suppress convection completely, large-scale elevator modes dominate the dynamics. Such elevator modeshave long been known <cit.>.In all parameter values simulated, the inequality (<ref>) is violated, implying that the system admits exact single-mode solutions that grow exponentially without bound. Instead, we turn to finite Prandtl numbers, where we find that non-linear interactions between modes continue to allow the system to find a turbulent stationary state. In general, whether or not the solution blows up must depend on the initial conditions, in a way that is not currently understood.Examining the Prandtl and Rayleigh number dependence of the terms in the kinetic and potential energy balances, we find that the enstrophy scales as ω^2∝^0 ^1/4 and hencethe small-scale viscous dissipationscales as ϵ_ν= νω^2∝^1/2^-1/4.On the other hand, we observe that the injection rate of kinetic energy ϵ_u due to buoyancy is effectively independent of both the Prandtl and the Rayleigh number. Using this observation,we find that Ν∝^1/2^1/2, which agrees with the so-called ultimate scaling.Looking at the kinetic and potential energy spectral fluxes, we find an inverse cascade of kinetic energy and a direct cascade of potential energy in contrast to 3D RBC <cit.>, where both kinetic and potential energies cascade toward small scales. The inverse cascade is independent of the Prandtl number, while the peak of the potential energy flux moves to higher wavenumbers asincreases. The kinetic and potential energy fluxes, Π_u(k) and Π_θ(k), are not constant in the inertial range because both are balanced by the buoyancyterm F_B(k),which is predominantly positive in this range of wavenumbers. These two balances imply a positive slope in k for the fluxes. Although we observe no range of wavenumbers where either Π_u(k) or Π_θ(k) is constant, we find that they are connected by the relation Π_u(k) - α gL/Δ TΠ_θ(k) ≈constant in the overlap between regions II and A shown in Plots. <ref> and <ref>.The kinetic energy spectra scale as E_u(k) ∝ k^-2.3, which is close to k^-11/5 behaviour of the BO phenomenology. However, the potential energy spectra scale as E_θ(k) ∝ k^-1.2, which deviates significantly from the k^-7/5 scaling predicted by the BO arguments. The deviation from the BO phenomenology is clearer when we test the scaling E_u(k) /E_θ(k) k^4/5≈constant, which is clearly not followed by our spectra which follow E_u(k) /E_θ(k) k^4/5∝ k^-0.3. For the hyperviscous simulations,the observed power-laws in the inertial range of the kinetic and potential energy spectra do not show any dependence on the Prandtl number. The only dependence we observe is at the dissipative range of wavenumbers, where the viscosity and the thermal diffusivity dominate the dynamics. In the spectra from the normal viscosity simulations, the effects of the Prandtl number are more significant due to the comparatively low scale separation. Hence, the inertial range over which a power-law behaviour can be observed is truncated. This study clearly demonstrates the necessity for large scale separation to be able to make clearer conclusions on the spectral dynamics and the power-law exponents of two-dimensional Rayleigh-Bénard convection. This requirement makes similar studies in three dimensions more challenging. The development of a phenomenology where buoyancy acts as a broadband spectral forcing is required to interpret the current observations. 
Numerical simulations at Prandtl and Rayleigh numbers outside the ranges we investigated, i.e., Prandtl numbers below 10^-3 or above 10^2 and Rayleigh numbers above 10^12, are challenging but would be of great interest, both to check whether they agree with the hyperviscous simulations we have performed and to provide a more complete picture of the asymptotic regime of buoyancy-driven turbulent convection.
http://arxiv.org/abs/2310.17928v2
{ "authors": [ "Philip Winchester", "Vassilios Dallas", "Peter D. Howell" ], "categories": [ "physics.flu-dyn" ], "primary_category": "physics.flu-dyn", "published": "20231027065853", "title": "Two-dimensional Rayleigh-Bénard convection without boundaries" }
[email protected] NEST, Istituto Nanoscienze-CNR and Scuola Normale Superiore, I-56127 Pisa, Italy Departamento de Fìsica, Laboratório de Spintrônica e Nanomagnetismo, Universidade Federal de Viçosa, Viçosa,36570-900, Minas Gerais, [email protected] Department of Physics and Nanoscience Center, University of Jyväskylä, P.O. Box 35 (YFL), FI-40014 University of Jyväskylä, FinlandNEST, Istituto Nanoscienze-CNR and Scuola Normale Superiore, I-56127 Pisa, ItalyCentro de Física de Materiales (CFM-MPC), Centro Mixto CSIC-UPV/EHU, 20018 Donostia-San Sebastián, Spain Centro de Física de Materiales (CFM-MPC), Centro Mixto CSIC-UPV/EHU, 20018 Donostia-San Sebastián, Spain Centro de Física de Materiales (CFM-MPC), Centro Mixto CSIC-UPV/EHU, 20018 Donostia-San Sebastián, Spain Centro de Física de Materiales (CFM-MPC), Centro Mixto CSIC-UPV/EHU, 20018 Donostia-San Sebastián, SpainDonostia International Physics Center (DIPC), 20018 Donostia-San Sebastián, SpainDepartment of Physics and Nanoscience Center, University of Jyväskylä, P.O. Box 35 (YFL), FI-40014 University of Jyväskylä, [email protected] NEST, Istituto Nanoscienze-CNR and Scuola Normale Superiore, I-56127 Pisa, [email protected] NEST, Istituto Nanoscienze-CNR and Scuola Normale Superiore, I-56127 Pisa, ItalyHeat engines are key devices that convert thermal energy into usable energy.Strong thermoelectricity, at the basis of electrical heat engines, is present in superconducting spin tunnel barriers at cryogenic temperatures where conventional semiconducting or metallic technologies cease to work. Here we realize a superconducting spintronic heat engine consisting of a ferromagnetic insulator/superconductor/insulator/ferromagnet tunnel junction (EuS/Al/AlO_x/Co).The efficiency of the engine is quantified for bath temperatures ranging from 25 mK up to 800 mK, and at different load resistances. Moreover, we show that the sign of the generated thermoelectric voltage can be inverted according to the parallel or anti-parallel orientation of the two ferromagnetic layers, EuS and Co. This realizes a thermoelectric spin valve controlling the sign and strength of the Seebeck coefficient, thereby implementing a thermoelectric memory cell. We propose a theoretical model that allows describing the experimental data and predicts the engine efficiency for different device parameters. Superconducting Spintronic Heat Engine E. Strambini0000-0003-1135-2004 January 14, 2024 ======================================§ INTRODUCTIONThermoelectricity can be observed when an electron-hole-asymmetric conductor is driven by a temperature difference.The resulting thermovoltages and thermocurrents have been widely applied for thermometry and energy harvesting<cit.>. In the current technology, semiconductors have been extensively utilized due to the large Seebeck coefficient achievable in those materials, allowing for amplitudes of up to hundreds of μ V / K <cit.> and efficient energy harvesting <cit.>. However, semiconductor materials are not ideal for thermoelectric-based applications in some important areas of research such as aerospace and cryogenic electronics. At the extremely low temperatures of space or in dilution cryostats, the carriers in semiconductors freeze out<cit.> and the material becomes insulating. Approaches based on quantum dot system have been proposed and realized <cit.> showing sizable thermopower down to 0.5K but with limitations of scalability intrinsic to zero-dimensional systems. 
Moreover semiconductor properties would be drastically changed by the doping due to background cosmic particles<cit.>.A promising alternative based on superconductor materials can represent a step forward in thermoelectric-based technology. Differing from semiconducting materials, metals do not suffer charge freeze out, and semiconductor-like properties are still present in the quasi-particle spectrum characterized by the superconducting gap.Still, electron-hole asymmetry is difficult to achieve in conventional superconductors due to charge neutrality constraints, unless strong non-equilibrium conditions are present <cit.>. Recently, a superconducting spin-caloritronic scheme based on spin-selective tunnel junctions was proposed, enabling breaking the electron-hole symmetry while keeping charge neutrality and resulting in the generation of large thermoelectric effects <cit.>.This prediction was confirmed by thermocurrents measured in superconducting tunnel junctions, with spin-split superconductors obtained via external fields <cit.>, and by exchange interactions <cit.> present in thin superconductor/ferromagnetic-insulator (S/FI) bilayers <cit.>. So far, no thermovoltage or resulting thermopower has been demonstrated despite its key role for energy harvesting at cryogenic temperatures and resulting applications for radiation detection<cit.>Here, we implement a superconducting spin-selective tunnel junction based on a multilayer of EuS/Al/AlO_x/Co. A strong thermovoltage (∼ 10 μV) is generated at sub-Kelvin temperatures (<1 K) with a magnitude close to its upper bound dictated by the Al superconducting gap (Δ≃ 200 μeV). The resulting Seebeck coefficient is of the order of few hundred of μV/K for different temperature and magnetic configurations. A sizable work was extracted by the junction therefore demonstrating a superconducting spintronic heat engine. Yet, the efficiency and functionality of the engine are quantified for different magnetic configurations. Fianlly, the implementation of a two-state thermoelectric memory cell based on the device magnetic hysteresis is discussed.§ SAMPLE DESIGN AND NON-RECIPROCITY The device consists of a superconducting thin film (aluminum-Al) proximitized by a ferromagnetic insulator (europium sulfide-EuS) on one side and separated from a ferromagnet (cobalt-Co) on the other side by an insulating barrier (aluminum oxide-AlO_x).Figure <ref>a presents a micrograph of a typical device. A schematic of the four-wire measurement used for the tunneling spectroscopy is likewise shown. In the cartoon of Fig. <ref>b the side view of the sample is shown with a simplified representation of its density of states (DOS) on the bottom.The tunneling conductance of the device is strongly influenced by the exchange spin-splitting of the superconductor DOS (on the left) facing the spin-split DOS of the Co counter-electrode such that strong spin filtering is expected for proper voltage bias.Such spin filtering is at the origin of the asymmetric tunneling conductance G(V)=dI/dV measured as a function of the bias voltage and magnetic field, and presented in the color plot of Fig. <ref>c.The characteristic asymmetry in G(V) can be seen in Fig. <ref>d for the green line. It is compatible with a parallel (P) alignment of the magnetizations of the EuS and Co layers, visible for most of the magnetic fields explored. Only the tunneling conductance at B≃ 10 mT, obtained during the sequential switching of the two ferromagnets, is characterized by an anti-parallel (AP) alignment (Fig. 
<ref>c and d orange line).By fitting the experimental data with the spin selective tunneling model <cit.> (see Methods section <ref> for model details andcontinuous lines in Fig. <ref>d for the fit) it is possible to extract the spin polarization of the tunnel barrier P≃0.5, the exchange interaction induced in the Al layer h≃50 μeV, the Al superconducting gap Δ≃ 195 μeV, and the inelastic and spin-flip scattering rates ħΓ≃ 32 μeV andħΓ_ sf≃ 29 μeV, for B=-35 mT, respectively.Notably, a large Γ characterizes the superconducting tunnel barrier, as typically observed in junctions with ferromagnetic counterelectrodes <cit.>. Different devices were tested showing similar results with slightly different Γ, Γ_ sf, h and P (see the extended Figure <ref>). From the tunneling spectroscopy it is also possible to extract the field evolution of the tunneling magnetoresistance TMR= R_P-R_AP/R_AP, shown in Fig. <ref>e, and the magnetoconductance, shown in Fig. <ref>f. Values obtained are compatible with previous TMR measured in similar structures <cit.>, showing a maximum at voltage biases compatible with the superconducting gap (e V_MAX≃±Δ). § THERMOELECTRIC RESPONSE To quantify the thermoelectric response of the device, the thermovoltage across the junction was measured in the presence of a thermal gradient imposed across the junction. Such temperature difference is achieved via a Joule-heating current I_H which flows through the Co strip while the Al is thermalized by the substrate at bath temperature (T_ bath), according to the scheme presented in Figure <ref>a.The voltage measured across the junction was then symmetrized with respect to I_H (V_ th= V(+I_H)+V(-I_H)/2) to remove the trivial ohmic contribution originating from the shared electrical paths between the voltage probe and the heating current, similarly to previous experiments on transversal rectification in superconducting tunnel diodes <cit.>.Differing from tunnel diodes, the larger impedance of the device makes thermoelectricity the main contribution of the voltage drop summing to rectification components. A representative example of symmetrization is presented in Figure <ref>b. In the top panel the voltage measured as a function of |I_H| presents the main linear evolution, while only after symmetrization (bottom panel), small deviations are visible and a clear monotonic increase of V_ th(I_H) up to ≃ 10 μV is observed at I_H=50μA. Above 50μA the large power injected in the device is not fully dissipated by the substrate limiting the thermalization of the cold Al lead and resulting in a saturation or decrease of V_ th. The increase of the Al temperature at large I_H was confirmed by the damping of the critical current measured in the Al strip at different I_H as shown in the extended figure <ref>.The evolution of V_ th(B) in the external magnetic field at fixed |I_H| is shown in Fig. <ref>c.Consistently with the non-reciprocal tunneling spectroscopy measurements, V_ th strongly depends on B and on the relative orientation of the two ferromagnetic layers showing sign reversal in the AP phase. Hysteresis in the magnetic field is visible, with a maximum signal at |B|≃ 20 mT vanishing above 120 mT due to the quenching of superconductivity, as observed also in the tunneling spectroscopy measurements reported in fig. <ref>c.In the inset, showing the central measurement range, it is possible to appreciate the sizable signal (>10 μV) present even at zero field as a consequence of the strong ferromagnetism of the device. 
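For reference, the symmetrisation used to extract these V_ th traces keeps only the part of the measured voltage that is even in the heating current, V_ th(|I_H|) = [V(+I_H) + V(-I_H)]/2, which removes the odd (ohmic) contribution of the shared leads. A minimal sketch of this bookkeeping on a synthetic trace (Python; all numbers illustrative):

import numpy as np

I_H = np.linspace(-50.0, 50.0, 201)                      # heating current (uA), symmetric grid
V = 2.0 * I_H + 10.0 * (np.abs(I_H) / 50.0)**2           # uV: odd ohmic part plus even thermoelectric part

V_plus = V[I_H > 0]                                      # V(+I_H), ordered by increasing I_H
V_minus = V[I_H < 0][::-1]                               # V(-I_H), reordered to match +I_H
V_th = 0.5 * (V_plus + V_minus)                          # even component = thermovoltage
print("V_th at |I_H| = 50 uA:", V_th[-1], "uV")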
Moreover, a clear negative thermovoltage is visible between the coercive fields of the two ferromagnetic layers (7 mT≲ |B|≲ 10 mT).Such inversion of V_ th confirms the AP phase achieved between the Co and EuS ferromagnetic layers as deduced from the tunneling spectroscopy in the same field range. The lower amplitude of V_ th in the AP phase with respect to the P case is consistent with a weaker polarization and spin filtering of the device, and it indicates a partially polarized magnetization of the EuS and Co layers in the AP phase during the non simultaneous magnetization switching of the two ferromagnets. At higher temperatures V_ th tends to slowly decrease as shown in the top panel of Fig. <ref>d with a sizable thermovoltage observed up to 800 mK. Such robustness in temperature is a consequence of the large exchange splitting of the device h≃ k_B×600mK extending the operation of the device to higher temperature, see Extended Data Fig. <ref>.To evaluate the Seebeck coefficient from V_ th the temperature gradient across the junction δ T = T_Co - T_Al needs to be estimated. The thermal model for the device (see Methods)indicates that at low heating power T_Al≃ T_bath, as confirmed also by monitoring the critical current of the Al lead at different I_H (see extended figure <ref>).The temperature of the Co electrode is estimated from the broadening of the tunneling spectroscopy as typically done in S/I/N thermometry <cit.> and was measured at different I_H. In this case, the model needs to be extended to account also for the additional lateral voltage drop due to the presence of the heating current I_H0, (see Methods for details). The full model provides the estimate δ T(I_H,T_bath) = (T_bath^5 + b I_H^2)^1/5 - T_bath with b≈5.6·10^-5 K^5/μA^2, shown in the bottom panel of Fig. <ref>d. Notably, a large temperature gradient up to 500 mK was achieved across the junction for low T_bath. Using the relation δ T (I_H,T_bath) it is possible to remap the thermovoltage in the temperature gradient V_ th(δ T) as shown in Fig. <ref>e together with the resulting Seebeck coefficient S=V_ th/δ T. A Seebeck coefficient of up to 30μV/K can be estimated both at 10 mT and at zero magnetic field for a base temperature of 100mK, which is on par with state-of-the-art cryogenic thermoelectric elements <cit.>. It is worthwhile to mention that by increasing the bath temperature above 500mK, S can obtain values as large as a few hundreds of μV/K. Moreover, a smaller but sizable Seebeck coefficient of -5 μV/K is also visible in the AP phase obtained at B=-10mT, thus implementing a thermoelectric spin valve where the n-type and p-type Seebeck effect is controlled by the relative orientation of the device magnetic moments.§ HEAT ENGINE Once V_ th is applied on a load resistor R_L, work can be extracted from the thermoelectric effect for the demonstration of a heat engine, and we can quantify the thermal-to-electrical energy conversion. The circuit used for this purpose is sketched in Fig. <ref>a. The junction is shunted to the ground with R_L and two additional balancing resistors (R_B = 10 kΩ) have been included in the circuit in a symmetric configuration to prevent spurious leaks from the heating-current source to the load. 
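The bookkeeping behind these Seebeck estimates is compact enough to sketch: the junction temperature difference follows the electron-phonon-limited relation δT(I_H, T_ bath) = (T_ bath^5 + b I_H^2)^1/5 - T_ bath with b ≈ 5.6×10^-5 K^5/μA^2, and S = V_ th/δT. In the sketch below (Python) the V_ th trace is synthetic; only b and the functional form are taken from the model above.

import numpy as np

b = 5.6e-5                                    # K^5 / uA^2, value quoted above
T_bath = 0.1                                  # K

def delta_T(I_H_uA):
    return (T_bath**5 + b * I_H_uA**2) ** 0.2 - T_bath

I_H = np.linspace(1.0, 40.0, 40)              # uA
V_th = 10e-6 * (I_H / 40.0)                   # V, synthetic monotonic thermovoltage
S = V_th / delta_T(I_H)                       # Seebeck coefficient (V/K)
print(f"dT(40 uA) = {delta_T(40.0)*1e3:.0f} mK, S(40 uA) = {S[-1]*1e6:.0f} uV/K")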
The voltage drop (V_L) across R_L is probed for different measurement configurations and the resulting dissipated power (P_L= V^2_L/R_L) is used to estimate the power generated by the engine, then neglecting residual thermoelectric power dissipated in the balancing resistors R_B.The evolution of V_L measured as a function of R_L, under constant heating with |I_H|=40 μA is presented in Fig. <ref>b. Three different magnetic configurations are compared, P saturation regime (green, B=10 mT), P remanence (grey, B=0 mT) and AP (orange, B=-10 mT) characterized by a negative V_L as for the thermovoltage shown in Fig. <ref>c. In all presented field regimes V_L tends to increase with an increasing R_L, showing a saturation towards the thermovoltage V_ th when the load resistance is above 500 kΩ, i.e., much larger than the tunnel resistance (R_L ≫ R_T≃ 50 kΩ) as expected for an ideal voltage source V_L= V_ thR_L/R_L+R_T, with R_T as source resistance.The resulting dissipated power shown in Fig. <ref>c has a different non-monotonic behavior with a maximum observed for R_L ∼ R_T. This behavior is consistent with the maximum power transfer theorem (known as Jacobi's law) predicting P_L = I^2 R_L = V_L^2 R_L/(R_L+R_T)^2as reported in the red fit line in Fig. <ref>c.The heat engine efficiency can be quantified from the ratio between the power extracted by the engine (P_L) and the Joule power injected to generate the thermal gradient P_ in = R_ Co× I_H^2, where R_ Co=11 Ωis the resistance of the cobalt strip at the junction.Figures <ref>d and <ref>e show P_L vs P_ in for the three magnetic field configurations.At low power P_L is characterized by an almost linear increase corresponding to an efficiency η = P_L/P_ in≃ 5 × 10^-8 that tends to decrease at high power. We stress that this efficiency is just a lower bound of the intrinsic efficiency of the effect, as most of P_ in may be lost in different heat channels including the heat dispersed from the Co directly to the substrate.The temperature dependence of η displayed in Fig. <ref>f shows an almost constant efficiency below 400 mK, and a quick damping at higher temperatures. The quick damping is consistent with the expected behavior of η(T) obtained from the theoretical model shown in Fig. <ref>f based on the device parameters, as described in detail in the methods section.The model shows that the main limiting factor for η at high temperatures is the electron-phonon coupling: most of the thermal energy from the electrons is transferred to the lattice phonons, instead of being converted into thermoelectric power.In fact, the thermal conductance between electrons and lattice phonons scales as ∝T^4 <cit.>, leading to a decreasing efficiency at increasing temperature. Yet, as observed from the theoretical model, the heat engine efficiency could be strongly enhanced by working below 100 mK.§ HEAT-ENGINE MEMORYIt is worthwhile noting that combining the hysteretic behavior of the thermoelectric effect as shown in Fig. <ref>f with the heat engine, it is possible to envision a thermoelectric memory cell. Our device structure is an original concept for aclassical memory cell similar to a conventional magnetic random access memory stack <cit.>. The latter, is composed by two ferromagnetic layers with different coercivity, and separated by a tunnel junction. 
The first ferromagnetic electrode provides the electronic spin polarization in two Mott channels <cit.>, while the second feromagnetic layer filters the spin polarized currents after coherent tunneling, thereby allowing high conductance in P configuration and lower conductance in the AP state. By contrast, in ourmemory cell the logic states are codified by a thermoelectric voltage self generated in the memory itself, and not requiring a local input current for the state read-out. This advantage can strongly simplify the wiring of the memory with net benefits for scalability.Additionally, it allows for a local direct transduction of the electrical signal in to another physical observable, for instance, a photon in the sketch of Fig. <ref>a. This may inspire novel methods for read-out and packaging in dense arrays.In Fig. <ref>b, we present an example of the heat-engine hysteretic cycle measured in the proposed memory cell. The device is first polarized in the P state at -20 mT, then B is cycled between -20 mT to 10 mT.A clear hysteretic loop is visible with a high contrast of 10 μV between the P and AP configuration also at zero field, an important condition to operate the memory in the absence of external magnetic fields.§ DISCUSSION AND CONCLUSIONS In summary, we have fabricated and characterized a superconductor-ferromagnet tunnel junction structure based on aluminum proximitized by europium sulfide, and separated from a cobalt electrode by an aluminum oxide tunnel barrier.Our device shows a remarkable non-reciprocal charge transport due to electron-hole symmetry breaking induced by the spin selectivity of the junction.As a consequence, a sizable thermoelectric voltage is observed in the presence of a thermal gradient, which is achieved via a Joule-heating current flowing throughthe cobalt strip. The different coercivity of the two ferromagnetic layers exploited in the junction joined to the large ferromagnetic remanence of Co magnetization warrant two important features: (i) thermoelectricity is observed even at zero magnetic field and (ii) a clear inversion of the thermoelectric effect is achieved when the EuS and Co magnetizations are antiparallel. The latter implements the first spin valve for thermoelectric applications, by reversing the Seebeck coefficient from p-type to n-type.We quantified the power generated by the structureover a series of external load resistors thereby demonstrating the implementation of a superconducting spintronic heat engine.From the thermal model of the device we identified the heat losses through the electron-phonon coupling as the main factor limiting the engine efficiency.In light of future technological applications, several strategies could be followed in order to increase the engine efficiency, such as decreasing the junction size, and improvingheat isolation via device suspension or by lowering the operating temperature in order to limit the electron-phonon coupling. Finally, we have also shown the operation of the device as a thermoelectric superconducting memory cell. For that purpose, two main advantages are envisioned: (i) junction durability, if operated in an open circuit configuration with no detrimental currents flowing through the junction; (ii) high scalability, due to a read-out signal self generated by the thermoelectric power. 
We envision the application of such cryogenic thermoelectric element in the implementation of sensitive self-biased detectors of electromagnetic radiation with simplified approaches for multiplexing <cit.>.Additionally, by scaling our heat engine to large areas may find relevant applications for energy harvesting in the deep space where the low temperature makes conventional approaches somewhat ineffective.§ METHODS §.§ Sample preparation and measurement For the sample preparation we have first deposited 12.5 nm EuS thin film by molecular beam epitaxy on top of Si/SiO_x substrates cooled at 150K. The pressure during growth was kept in the range of 10^-9 mbar to avoid any EuS oxidation and achieve a near stoichiometric EuS compound.Without breaking the ultra high vacuum conditions, a ∼250-μm-width Al lead was grown on top by using a metallic shadow mask.The total thickness of the Al layer was 20 nm. To form the insulating AlO_x barrier the sample was exposed 3 × 10^-3 mbar of low-energy oxygen plasma created by inductevely coupled plasma source for 5 hours, resulting in a ∼ 4-nm-thick AlO_x layer and lowering the metallic Al thickness down to 16 nm. The subsequent cross bar geometry was realized with another shadow mask evaporation to grow a Co lead of 14 nm thickness and ∼ 200 μm width.Al and Co layers were grown with e-beam metal evaporators. Finally, before extracting the sample from the chamber, a 7 nm calcium fluoride (CaF) layer was deposited covering the whole system to avoid environmental oxidation.Samples were wire bonded with aluminum wires and mounted in a dilution fridge, where the magneto-electrical measurements were performed through low pass filters. All the signals were amplified via low-noise voltage and current preamplifiers.For the application of the heating currentsand critical current measurements the filters were bypassed in order to decrease the power load that would provide cryostat excessive heating.§.§ Theoretical modelThe current density through the Al/Co tunnel junction can be expressed as <cit.>, ℐ = ∑_σ=↑,↓ G_σ^□∫_-∞^∞ dEN_σ(E)[f(E,T_S) - f(E + V,T_N)], and depends on the voltage V over the junction and temperatures T_N, T_S on the normal (Co) and superconductor (Al) sides. Here f(E,T) is a Fermi function. The current is proportional to spin-dependent conductances per square area G^□_↑/↓=1± P/2G_□, which are due to Co spin polarization and interface properties. Here, -1≤P≤1 is the spin polarization and G_□ the junction conductance per square. The result also depends on the superconductor density of states N_σ(E). The superconducting gap Δ in it is spin-split by an exchange field h induced from EuS, but this is counteracted by spin-flip scattering with rate Γ_ sf and inelastic scattering with rate Γ, which we account for using methods in Ref. <cit.>. We extract the values of Δ, Γ_ sf, Γ, G, P, and h by fitting Eq. (<ref>) to experimental I(V) characteristics at T_S=T_N.As the Al/Co tunnel junction resistance is high compared to the total Co wire resistance, the voltage profile along the Co wire is linear to a good approximation, V(x)=V_0 + x I_H R_x/L_x, where R_x=ρ_Co L_x /(W t_Co) is the lateral resistance of the part of the Co film (cross-section t_Co×W, length L_x, resistivity ρ_Co) on top of the tunnel junction. Due to superflow the voltage in the Al film is spatially constant. 
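As a concrete illustration of Eq. (<ref>), the sketch below (Python) evaluates the current density for a Dynes-broadened, exchange-split BCS density of states using the fit parameters quoted earlier (Δ ≈ 195 μeV, h ≈ 50 μeV, ħΓ ≈ 32 μeV, P ≈ 0.5). Spin-flip scattering is neglected and the conductance is in arbitrary units, so this is a qualitative sketch rather than the fitting code used for the data.

import numpy as np

kB = 86.17                                    # Boltzmann constant (ueV/K)
Delta, h, Gamma = 195.0, 50.0, 32.0           # gap, induced exchange field, Dynes broadening (ueV)
P, G_sq = 0.5, 1.0                            # barrier spin polarisation; conductance (arb. units)

def n_s(E, sign):
    """Dynes-broadened BCS DOS, exchange-shifted by sign*h (energies in ueV)."""
    z = E + sign * h + 1j * Gamma
    return np.abs(np.real(z / np.sqrt(z**2 - Delta**2)))

def fermi(E, T):
    return 0.5 * (1.0 - np.tanh(E / (2.0 * kB * T)))

def current_density(V, T_S, T_N):
    """Spin-summed tunnel current density of Eq. (<ref>), spin-flip scattering neglected."""
    E = np.linspace(-2000.0, 2000.0, 8001)
    J = 0.0
    for sign, G_spin in ((+1, 0.5 * (1 + P) * G_sq), (-1, 0.5 * (1 - P) * G_sq)):
        J += G_spin * np.trapz(n_s(E, sign) * (fermi(E, T_S) - fermi(E + V, T_N)), E)
    return J

print(current_density(V=0.0, T_S=0.1, T_N=0.6))   # zero-bias thermoelectric current, hot Co / cold Al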
The total tunneling current through the Al/Co junction then is I_T = W∫_-L_x/2^L_x/2dx ℐ(V(x),T_Co,T_Al) where the Al/Co overlap has size W×L_x and ℐ is the local S/I/FM junction current density–voltage relation from Eq. (<ref>), which includes the S/FI thermoelectric effects. <cit.> Under these conditions, the Joule heating via current I_H in Co at low temperatures is mainly limited by the electron-phonon coupling. The corresponding heat balance equation is, <cit.> L_xWt_CoΣ (T_Co^5 - T_bath^5)= R_x I_H^2 , where Σ the electron-phonon coupling parameter of Co. We have also modeled the Al side with a similar equation, with tunneling current input power on right-hand side and taking superconductivity into account. <cit.> We find that due to the high tunnel resistance, electronic heat transport across the tunnel junction is suppressed, and we can neglect heating of the Al side. Consequently δ T = T_Co - T_Al = (T_bath^5 + b I_H^2)^1/5 - T_bath, with b=R_x/(W t_CoL_xΣ). To determine the effective values of Σ and R_x, a two-parameter fit of Eqs. (<ref>),(<ref>) is done on the experimental dI/dV curves of different T_bath and I_H. In addition, fits at I_H=0 are used to determine the tunnel junction parameters in Eq. (<ref>).The circuit model of Fig. 3a together with Eqs. (<ref>),(<ref>),(<ref>) from the model of the heat engine, from which efficiencies and relative contributions of the rectification and thermoelectricity can be estimated. For small I_H, we can expand I_T≈ G V_th + Pαδ T/T + G'/24(R_xI_H)^2 where G is the tunnel junction conductance at zero bias, α the thermoelectric coefficient, <cit.> and G' the zero-bias voltage derivative of the conductance, characterizing the rectification.In the tunneling model, G'≈ce^2Pα/(2k_B^2T^2) where c≈1 is a weakly temperature-dependent numerical factor. From Eq. (<ref>) one can then deduce that for high-resistance tunnel junctions, the thermoelectric contribution dominates when T_bath≲[9k_B^2/(e^2ρ_CoΣ L_x^2)]^1/3≈300 mK and I_H≲25[k_B^5/(e^5R_x^4Wt_CoΣ L_x)]^1/3≈20 μA. The opposite limit of low-resistance junctions was discussed in Ref. <cit.>.The thermoelectric coefficient α is related to the Seebeck coefficient by S=V_ th/δT=Pα/(GT), and its temperature and exchange field dependence was discussed in Ref. <cit.>. The large value of Γ in the experiment modifies the temperature and exchange field dependence, as illustrated in the theoretical prediction in Extended Data Figure <ref>.§ DATA AVAILABILITY The data that support the findings of this study are available from corresponding author C.I.L.A., F.G and E.S. upon reasonable request. § CODE AVAILABILITY The codes that support the findings of this study are available from corresponding author P.V. upon reasonable request.§ ACKNOWLEDGMENTS C.I.L.A., P.V., T.H. and F.G. acknowledge funding from theEU's Horizon 2020 Research and Innovation Program under Grant Agreement No. 800923 (SuperTED). M.S. and E.S. acknowledge funding from the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska Curie Action IF Grant No. 101022473 (SuperCONtacts). F.G. and E.S. acknowledge the EU’s Horizon 2020 Research and Innovation Framework Program under Grant Agreement No. 964398 (SUPERGATE), No. 101057977 (SPECTRUM), and the PNRR MUR project PE0000023-NQSTI for partial financial support. C.I.L.A. acknowledge Brazilian agencies FINEP, FAPEMIG APQ-04548-22, CNPq and CAPES (Finance Code 001).§ AUTHOR CONTRIBUTIONSC.I.L.A., M.S. and E.S. 
performed the experiment and analysed the data. P.V. and T.T.H. provided theoretical support. C.G.O., S.K., M.I. and C.R. fabricated the samples. E.S. conceived the experiment together with F.G. and T.T.H. C.I.L.A., M.S., P.V. and E.S. wrote the manuscript with feedback from all authors. § COMPETING INTERESTS The authors declare no competing interests. § EXTENDED FIGURE
http://arxiv.org/abs/2310.18132v1
{ "authors": [ "Clodoaldo I. L. de Araujo", "Pauli Virtanen", "Maria Spies", "Carmen González-Orellana", "Samuel Kerschbaumer", "Maxim Ilyn", "Celia Rogero", "Tero T. Heikkilä", "Francesco Giazotto", "E. Strambini" ], "categories": [ "cond-mat.mes-hall", "cond-mat.supr-con" ], "primary_category": "cond-mat.mes-hall", "published": "20231027132348", "title": "Superconducting Spintronic Heat Engine" }
http://arxiv.org/abs/2310.18398v1
{ "authors": [ "Gerard Higgins", "Saarik Kalia", "Zhen Liu" ], "categories": [ "hep-ph", "hep-ex", "quant-ph" ], "primary_category": "hep-ph", "published": "20231027180003", "title": "Maglev for Dark Matter: Dark-photon and axion dark matter sensing with levitated superconductors" }
[e-mail: ][email protected] B. Verkin Institute for Low Temperature Physics and Engineering, Kharkiv 61103, Ukraine Theoretical Quantum Physics Laboratory, Cluster for Pioneering Research, RIKEN, Wakoshi, Saitama, 351-0198, JapanB. Verkin Institute for Low Temperature Physics and Engineering, Kharkiv 61103, Ukraine Theoretical Quantum Physics Laboratory, Cluster for Pioneering Research, RIKEN, Wakoshi, Saitama, 351-0198, JapanB. Verkin Institute for Low Temperature Physics and Engineering, Kharkiv 61103, Ukraine Quantum Motion, 9 Sterling Way, London N7 9HJ, United KingdomTheoretical Quantum Physics Laboratory, Cluster for Pioneering Research, RIKEN, Wakoshi, Saitama, 351-0198, Japan Quantum Computing Center, RIKEN, Wakoshi, Saitama, 351-0198, Japan Physics Department, The University of Michigan, Ann Arbor, MI 48109-1040, USA A conventional realization of quantum logic gates and control is based on resonant Rabi oscillations of the occupation probability of the system. This approach has certain limitations and complications, like counter-rotating terms. We study an alternative paradigm for implementing quantum logic gates based on Landau-Zener-Stückelberg-Majorana (LZSM) interferometry with non-resonant driving and the alternation of adiabatic evolution and non-adiabatic transitions. Compared to Rabi oscillations, the main differences are a non-resonant driving frequency and a small number of periods in the external driving. We explore the dynamics of a multilevel quantum system under LZSM drives and optimize the parameters for increasing single- and two-qubit gates speed. We define the parameters of the external driving required for implementing some specific gates using the adiabatic-impulse model. The LZSM approach can be applied to a large variety of multi-level quantum systems and external driving, providing a method for implementing quantum logic gates on them. 03.67.Lx, 32.80.Xx, 42.50.Hz, 85.25.Am, 85.25.Cp, 85.25.Hv Alternative fast quantum logic gates using nonadiabatic Landau-Zener-Stückelberg-Majorana transitions Franco Nori January 14, 2024 =====================================================================================================§ INTRODUCTIONThe conventional way of qubit state control is realized with resonant driving, resulting in Rabi oscillations (see, e.g., <cit.>). There, the frequency of operation, the Rabi frequency, is defined by the driving amplitude; and so increasing the speed of operations means increasing the driving amplitude. This presents several challenges <cit.>, including leakage to levels that lie outside the qubit subspace, breakdown of the rotating-wave approximation, and increased environmental noise. Instead of discussing the technological complications of the Rabi approach, let us consider here an alternative approach, based on a different paradigm of driving quantum systems.When a quantum system exhibits an avoided-level crossing and is strongly driven, it can be described by the modeloriginally developped in several publications in 1932 and known as Landau-Zener-Stükelbeg-Majorana (LZSM) transitions (see, e.g., <cit.> and references therein). 
Effectively, the model can be split into two evolution stages: non-adiabatic transitions between the energy levels in the vicinity of the anti-crossing and adiabatic evolution far from the anti-crossing.The energy-level occupation probabilities, as well as the relative phase between them, can be chosen by varying the driving parameters, providing a different paradigm for qubit state control <cit.>.The LZSM transitions provide an alternative to conventionalgates based on resonant Rabi oscillations <cit.>. The energy level avoided crossing of a single qubit or two coupled qubits allows to controllably change states of such systems <cit.> and to realize single- and two-qubit logic operations <cit.>. Recently, it was studied theoretically <cit.> and demonstrated experimentally <cit.> that the LZSM model has several advantages over conventional gates based on Rabi oscillations. These advantages include ultrafast speed of operation <cit.>, robustness <cit.>, using baseband pulses (alleviating the need for pulsed-control signals) <cit.>, and reducing the effect of environmental noise <cit.>.In this work we further develop the paradigm of the LZSM quantum logic gates. We investigate the single- and two-qubit systems' dynamics under an external drive numerically solving the Liouville-von Neumann equation using the QuTiP framework <cit.>. We explore the ways of finding the parameters for any arbitrary quantum logic gate with LZSM transitions and optimize the speed and fidelity of the quantum logic gates. We demonstrate the implementation of single-qubit X, Y, Hadamard gates and two-qubit iSWAP and CNOT gates using the LZSM transitions.This paper is organized as follows. In Sec. <ref> we describe the qubit Hamiltonian and two main bases. In Sec. <ref> we demonstrate X, Y, Hadamard, and phase gates implementations using both Rabi oscillations and LZSM transitions. We compare the speed and fidelities achieved with both paradigms. We explore the way of increasing the gate speed and fidelity of the LZSM gates by using multiple transitions. In Sec. <ref> we generalize the considered paradigm of using the adiabatic-impulse model for realization of quantum logic gates for multi-level quantum systems, and describe the realization of a two-qubit iSWAP gate with two LZSM transitions. The details for implementing other two-qubit gates, in particular a CNOT gate, are provided in Appendices <ref> and  <ref>. Sec. <ref> presents the conclusions. § HAMILTONIAN AND BASESConsider the typical Hamiltonian for a driven quantum two-level system ℋ(t)=Δ/2σ _x+ε (t)/2σ _z= 1/2( [ε(t) Δ; Δ -ε(t) ]) ,where ε(t) is the driving signal and Δ is the minimal energy gap between the two levels. Here we consider the harmonic driving signalε (t)=Asinω t.The wave function is a superposition of two states of a quantum two-level system: |ψ⟩=α(t)|0⟩+β(t)|1⟩= [ α(t); β(t) ].The two main bases are: the diabatic one, with diabatic energy levels {|0⟩, |1⟩}, where the Hamiltonian becomes diagonalized when Δ=0, and the adiabatic basis |E_±⟩, representing the eigenvalues of the total Hamiltonian, see Fig. <ref>. The relation between these bases is given by | E_±(t)⟩ = γ _∓| 0⟩∓γ _±| 1⟩, where γ _± = 1/√(2)√(1±ε (t)/Δ E(t)).Hereinafter, all the matrices of quantum logic gates, rotations R_x,y,z, matrices of adiabatic evolution U, and diabatic transition N should be assumed to be represented in the adiabatic basis, while the Hamiltonians will be represented in the diabatic one. 
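A minimal sketch of the numerical propagation used throughout this work, assuming the Hamiltonian (<ref>) with the harmonic drive (<ref>) and no relaxation (so QuTiP's mesolve reduces to the Liouville-von Neumann equation); the drive parameters here are illustrative values in the LZSM regime, A > Δ and ħω < Δ:

import numpy as np
import qutip as qt

Delta = 1.0                                     # energy unit (hbar = 1)
args = {"A": 5.0 * Delta, "w": 0.25 * Delta}    # illustrative LZSM-regime drive

H = [0.5 * Delta * qt.sigmax(), [0.5 * qt.sigmaz(), "A * sin(w * t)"]]
psi0 = qt.basis(2, 0)                           # diabatic state |0>
tlist = np.linspace(0.0, 4 * 2 * np.pi / args["w"], 2001)

res = qt.mesolve(H, psi0, tlist, c_ops=[], e_ops=[qt.sigmax(), qt.sigmay(), qt.sigmaz()], args=args)
x, y, z = res.expect                            # Bloch-vector trajectory under the drive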
The dynamics of the quantum system with relaxation and dephasing can be described by the Lindblad equation. For simplicity, we consider the dynamics without relaxation and dephasing, described by the Liouville-von Neumann equationdρ/dt = -i/ħ[H(t),ρ],which coincides with the Bloch equations in the case of a two-level system.§ SINGLE-QUBIT GATESWe will describe a basic set of single-qubit gates (Sec. <ref>) and then explain how these can be performed using both the Rabi approach (Sec. <ref>) and the LZSM approach (Sec. <ref>).§.§ Basic set of single-qubit gates We consider different gates <cit.>: X, Y, Z gates, phase gate R_z(ϕ), and the Hadamard gate H; which we write down here: X=σ_x=R_x(π)= [ 0 1; 1 0 ] , Y=σ_y=R_y(π)= [0 -i;i0 ] ⇔ R_z(π)R_x(π) , Z=σ_z=R_z(π)= [10;0 -1 ] , P(ϕ) ≡R_z(ϕ)= [10;0 e^iϕ ] ⇔ [ e^-iϕ/2 0; 0e^iϕ/2 ] , H=R_y(π/2)R_z(π)=√(Y) Z=1/√(2) [11;1 -1 ] , where R_x,y,z describes the rotations around the respective axes:R_x,y,z(ϕ ) = exp(-iσ _x,y,zϕ/2)= = cos( ϕ/2) I+isin( ϕ/2) X,Y,Z.Since the global phase of the density matrix ρ is irrelevant and the dynamics is invariant to the multiplication of the density matrix ρ by any complex number from the unit circle e^iφ, the gate operator e^iφ G is equivalent to the gate operator G, which we denote ase^iφG⇔ G. §.§.§ Phase gate R_z(ϕ) The first gate we consider is the phase gate R_z(ϕ) in Eq. (<ref>), which corresponds to a rotation around the z-axis by an angle ϕ. To perform this gate, there is no need of a drive. Without driving, ε=const, the Bloch vector experiences a free natural rotation around the z-axis (which is the analogue to spin rotating in a magnetic field: the Larmor precession with frequency Ω_L). The frequency of this free rotation depends on the distance between the energy levelsħΩ_L=Δ E=√(ε(t)^2+Δ^2).For a rotation by an angle ϕ, a qubit needs to precess for a timet_R_z(ϕ)=ϕ/Ω_L.Since in the Rabi-based approach the energy detuning of the qubit during the phase gate is at the level anti-crossing ε=0, and in the LZSM-based approach it can be far away from the anti-crossing, the time of the phase gate in the LZSM-based approach can be reduced. This difference in duration of the phase gates is demonstrated in Fig. <ref>(a) and Fig. <ref>(a).§.§ Rabi-based single-qubit operationsTo perform any quantum logic gate with changing level occupation probability, the qubit should be excited by a time-dependent energy detuning ε(t). A conventional way to achieve this is via Rabi oscillations with small amplitude A≪Δ and with the qubit resonant frequency (ħω = Δ), which we will compare with LZSM transitions with large amplitude A>Δ and non-resonant driving frequency ω.Here we describe how the single-qubit operations are implemented with Rabi oscillations and demonstrate the dynamics of the Bloch sphere coordinates for several logic gates in Fig. <ref>. Rabi oscillations occur during the resonant driving at δω=ω-ω_q≪ω (where ω_q = Δ E / ħ≈Δ / ħis the qubit resonant frequency) with small amplitude A≪Δ, and harmonic driving signal, Eq. (<ref>).The Rabi oscillations lead to a periodic change of the level occupation with Rabi frequencyΩ_R=AΔ/2ħΔ E≈A/2ħ.During the oscillations, the z-component of the Bloch vector changes as z(t)=cosΩ_Rt, when the initial state is the ground state |E_-⟩. While the state probability is evolving, a phase change also occurs with frequencyħΩ_L ≈Δ.We define the Rabi oscillations evolution as a combination of two rotationsU_Rabi(t)= R_z(Ω_Lt)R_x(Ω_Rt).Using Eq. 
(<ref>), we can rewrite it asU_Rabi(t)=R_z(Ω_Lt) R_x ( A/2 ħ t ) = R_z(Ω_Lt) R_x ( S/2 ħ),which shows that the angle of rotation around the x-axis is proportional to the area S=At under the envelope of the Rabi pulse.When there is no phase difference between rotations, Ω_L t= Ω_R t + 2 π n, we obtainU_Rabi(t)= R_z(Ω_Lt)R_x(Ω_Rt)=R_y(Ω_Rt),and the Rabi evolution results ina rotation around the y-axis.To perform an X operation, we drive the system by Rabi pulses during a time T_R, so that the area under the envelope isS = A T_R = 2 π .In order for the driving to end at zero amplitude, we take an integer number of periods of the sine.After that, we need to change the phase to obtain an X operation from a Y rotation, so we perform the R_z rotation by idling the drive for a time T_Iwith the conditionΩ_L(T_I+T_R)=2π n,then finally the X gate is realized asR_z(Ω_LT_I) U_Rabi(T_R)= = R_z(Ω_LT_I) R_z(Ω_LT_R)R_x(Ω_RT_R) =R_x(π) = X,see Fig. <ref>(c). To perform the Hadamard gate, we need to apply the Rabi pulse with the duration twice shorter than for the X gate,T_R=π/Ω_R, with the condition on the idling time T_I:Ω_L(T_I+T_R)=π + 2π n.As a result, we obtain the Hadamard gate asR_z(Ω_LT_I) U_Rabi(T_R)= = R_z(Ω_LT_I) R_z(Ω_LT_R)R_x(Ω_RT_R) = = R_z(π) R_x(π/2) =H,which is demonstrated in Fig. <ref>(b). §.§.§ Gaussian envelope optimization for Rabi-based gates Since the Rabi-oscillations model assumes small amplitudes of the driving signal, to increase the gate fidelity, a small amplitude at the start and end points should be used. To achieve a high gate speed, a large driving amplitude A between these points should be used. Hence, to increase the gate speed, now we use a Gaussian-shaped envelope A(t) for the driving signalε (t)=A(t)sinω t.We now consider a Rabi pulse with duration T_R with the envelope in the form A(t)= A_0exp[-(t-τ)^2/2σ_G^2],t<T_R 0,t>T_Rwith the tails of the Gaussian distribution truncated at some distance G from the peak, normalized to the standard deviation σ_G,G=T_R/2 σ_Gand the peak of the distribution at timeτ=T_R/2. The angle of rotation around the x-axis in Eq. (<ref>) is defined by the area under the envelope of the Rabi pulse.For the X operation it is given by Eq. (<ref>). So the area under the truncated Gaussian distribution should be the same as for the original signal with constant amplitude and rectangular shape of the pulse. This condition determines the amplitude of the distribution asA_0=√(2 π/σ_G S_G),where S_G is the normalized area of the truncated Gaussian distributionS_G= 1/√(2 π)∫_-G^Ge^-x^2/2dx. §.§ Single-qubit operations based on LZSM transitionsHere we describe how to implement single-qubit operations based on LZSM transitions using the adiabatic-impulse model (AIM), also known as the transfer-matrix method, and demonstrate the dynamics of the Bloch sphere coordinates for several logic gates in Fig. <ref>, that can be compared with the dynamics of the same gates realized with Rabi oscillations in Fig. <ref>. For the diabatic LZSM transitions, we need the following approximations: A>Δ and 2π/ω < t_trans, where t_trans is the transition time. After that time the result of the adiabatic-impulse model will asymptotically coincide with the exact dynamics <cit.>.§.§.§ Adiabatic-impulse model. Single-passage drive In the adiabatic-impulse model, the time evolution is considered as a combination of adiabatic (non-transition) and diabatic (transition) evolutions. 
The adiabatic evolution is described by the adiabatic time-evolution matrix, U(t_i,t_j)= [ e^-iζ (t_i,t_j) 0; 0e^iζ (t_i,t_j) ] =e^-iζσ _z=R_z(2ζ),where ζ (t_i,t_j) is the phase accumulated during the adiabatic evolution ζ (t_i,t_j)=1/2ħ∫_t_i^t_jΔ E(t)dt = 1/2ħ∫_t_i^t_j√(ε(t)^2+ Δ^2)dt,and Δ E(t) = E_+(t)- E_-(t). Then, the diabatic evolution (transition) is described by the matrix N= [ Re^-i ϕ_S-T; TRe^i ϕ_S ] = =R_z( ϕ_S) R_x( θ) R_z( ϕ_S),whereT=√(𝒫), R=√(1-𝒫)are the transition and reflection coefficients,𝒫=exp(-2πδ)is the LZSM probability of excitation of the qubit with a single transition from the ground state |E_-⟩, δ=Δ^2/4v is the adiabaticity parameter, v=ε^'(0) is the speed of the anti-crossing passage andϕ _S = π/4+δ (lnδ-1)+Arg [Γ (1-iδ ) ]is the Stokes phase <cit.>. The θ angle can be found from the equation sin ^2(θ /2)=𝒫.The inverse transition matrix can be written asN^inv=N^⊤ = [ Re^-i ϕ_S T;-TRe^i ϕ_S ]⇔ ⇔ [ Re^-i (ϕ_S - π)-T; TRe^i (ϕ_S - π) ].The single transition evolution matrix in the general case with adiabatic evolution matrix before the transition U_1 and after the transition U_2 is given by U_LZSM= U_2 N_1 U_1 = [ U^'_11 U^'_12; -U^'*_12U^'*_11 ],whereU^'_11 = R_1 exp [ -i (ϕ_S1 + ζ_1 + ζ_2) ] , U^'_12 = - T_1 exp [ i (ζ_1 - ζ_2) ] , ζ_1 = ζ(0,t_N1), ζ_2 = ζ(t_N1,t_final).Here, t_N1 is the time of the level anti-crossing passage, t_final is the end time of the drive.Then, we consider the same adiabatic evolution before and after the transition ζ=2ζ_1=2ζ_2. In that case, a single LZSM transition gate can be represented <cit.> as a combination of rotationsU_LZSM(𝒫,ϕ_total)=R_z(ϕ_total )R_x(θ)R_z(ϕ_total), U^inv_LZSM (𝒫,ϕ_total)=U_LZSM(𝒫,ϕ_total-π),where ϕ_total=ϕ_S+ζ, and U^inv_LZSM corresponds to the inverse transition. Using this LZSM gate, we can define a basic set of gates.For an X gate, the two-level system needs to perform a transition with probability 𝒫=exp(-2πδ)=1; which means that the adiabaticity parameter δ=Δ^2/4v→ 0, requiring an infinite speed of the anti-crossing passage v=ε^'(0)→∞ or a zero energy splitting Δ. Hence, it is difficultto implement the X operation with high fidelity using only a single passage. Therefore, at least two transitions are needed for implementing the X gate with sufficient fidelity.For the LZSM transition, we need to start andfinish the evolution far from the anti-crossing region. So we now consider the harmonic driving signal ε(t)=-Acos(ω t). This signal is linear in the anti-crossing regiondε/dt|_ε≈ 0≈ A ω =const.We obtain a relation between the amplitude A and the frequency ω for certain LZSM probability 𝒫𝒫 = exp[-2πΔ^2/4Aωħ]→ω=-πΔ^2/2A ħln𝒫.Then, we find an amplitude which satisfies some value of ϕ_total and 𝒫,ϕ_total = π/4+δ (lnδ-1)+Arg [Γ (1-iδ)]+ + 1/2ħ∫_0^π/ω√(ε(t)^2+ Δ^2)dt, δ = -ln𝒫/2π,where we used that the harmonic driving satisfies the initial conditions far from the anti-crossing regionε(t)=-Acosω t.A single LZSM transition is convenient for implementing rotations to any angle θ<π, for example θ=π/2, which is needed for the Hadamard gate. Following Eq. (<ref>), the angle θ=π/2 corresponds to the target probability of a single LZSM transition𝒫=sin^2(π/4)=1/2.This LZSM transition is non-instantaneous: the probability oscillates for some time after the transition, and the value of the upper-level occupation obtained from the formulae cannot be exactly reached until the end of the oscillations <cit.>. The parameters for the Hadamard gate implementation can be found from Eq. 
(<ref>) by equating U_LZSM to the matrix of the gate (<ref>), and solving the system of equations𝒫=1/2→ T=R=1/√(2),ζ_1-ζ_2 = π/2+π n_1, ϕ_S+ζ_1+ζ_2=π/2+π n_2.From this system we obtain the total phaseϕ_total=π n,so the Hadamard gate can be presented asH=R_y(π/2) R_z(π)=U_LZSM(1/2, 2π n) R_z(π) = =R_z(π) U_LZSM(1/2, π+2π n),where n is an integer. The dynamics of the Hadamard gate is shown in Fig. <ref>(b).How to find the driving amplitude A and frequency ω required for certain 𝒫 and ϕ_total is described in Section <ref>.After the transition is completed, to perform some rotation around the z-axis (phase gate), we need to apply a constant signal with the same energy detuning ε as we had after completing the previous operation.Alternatively, LZSM gates can also be realized with the position of the energy detuning before and after the gate at the level anti-crossing ε=0 <cit.>.§.§.§ Double-passage drive Consider now an arbitrary external drive ε(t) with two passages through the energy-level anti-crossing, linear in the anti-crossing region. The adiabatic energy levels as a function of time are illustrated in Fig. <ref>(a).We obtain the double transition evolution matrix in the general case: Ξ= U_3 N_2 U_2 N_1^inv U_1 = U^inv_LZSM(1) U_LZSM(2)= [Ξ_11Ξ_12; -Ξ_12^*Ξ_11^* ]whereΞ_11 = (R_1 R_2 e^-i (ϕ_S1 + ϕ_S2 + 2ζ_2) + T_1 T_2 ) e^i (ζ_2 - ζ_1 - ζ_3),Ξ_12 = (R_1 T_2- T_1 R_2 e^-i (ϕ_S1+ϕ_S2 + 2ζ_2)) e^i (ϕ_S1+ζ_1+ζ_2 - ζ_3), ζ_1 = ζ(0,t_N1), ζ_2 = ζ(t_N1,t_N2), ζ_3 = ζ(t_N2,t_final) Here, t_N1 and t_N2 are the times of the first and second passages of the level anti-crossing respectively, t_final is the end time of the drive, see Fig. <ref>. By equating this evolution matrix to the matrix of the required quantum gate, one can find the parameters of the driving signal that implements this gate. For example, for an X gate the driving signal should satisfy the conditions𝒫_1 + 𝒫_2 = 1 → T_1 = R_2, ϕ_S1 + ϕ_S2 + 2 ζ_2 = π + 2 π n_1, ϕ_S2 - ζ_1 + ζ_2 + ζ_3 = π/2 + 2 π n_2.To simplify the result, we consider a periodic driving with the same slope in the anti-crossing region during each transition ε(0)≈ vt, which means 𝒫_1=𝒫_2=1/2, T_1=T_2=R_1=R_2=1/√(2), and with the same adiabatic evolution between transitions ζ=ζ_1=ζ_2/2=ζ_3.After this simplification, we obtain a matrix of the double transitions with only two parameters <cit.>: adiabatic phase gain ζ and excitation probability 𝒫, withΞ≡√(U_2)N^invU_1N√(U_2)= [Ξ _11Ξ _12; -Ξ^* _12Ξ _11^∗ ],whereΞ _11 = -R^2e^-2iΦ _St-T^2, Ξ _12 = -Ξ _12^* = -2iRTsin(Φ_St), Φ _St ≡ ϕ_S+ζ,and Φ _St is a Stückelberg phase. For the symmetric drive with 𝒫_1=𝒫_2 and ζ=ζ_1=ζ_2/2=ζ_3, Φ _St=ϕ_total. For the X operation, shown in Fig. <ref>(c),we used two LZSM transitions with LZSM probability 𝒫=1/2, and total phase for each transition ϕ_total=π/2+π n, which is the condition for a constructive interference (see, e.g., <cit.>). Indeed, using Eq. (<ref>),U^inv_LZSM(π/2+π n, 1/2)U_LZSM(π/2+π n, 1/2)=R_x(π)=X. 
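The constructive-interference condition can be checked directly from the symmetric double-passage matrix Ξ above. The short sketch below (a minimal numerical illustration, not the full drive-parameter search) constructs Ξ from 𝒫 and Φ_St and confirms that 𝒫 = 1/2 with Φ_St = π/2 yields complete population transfer, reproducing σ_x up to a global phase.

```python
import numpy as np

def double_passage(P, phi_st):
    """Evolution matrix of a symmetric double LZSM passage (adiabatic-impulse model)
    with single-transition probability P and Stuckelberg phase phi_st."""
    T = np.sqrt(P)
    R = np.sqrt(1.0 - P)
    xi11 = -R ** 2 * np.exp(-2j * phi_st) - T ** 2
    xi12 = -2j * R * T * np.sin(phi_st)
    return np.array([[xi11, xi12],
                     [-np.conj(xi12), np.conj(xi11)]])

# Constructive interference: P = 1/2 and Phi_St = pi/2 (+ pi*n) gives full
# population transfer, i.e. an X-type operation up to a global phase.
Xi = double_passage(0.5, np.pi / 2)
print("upper-level occupation |Xi_12|^2 =", abs(Xi[0, 1]) ** 2)   # -> 1.0
print(np.round(Xi, 3))                                            # -> -i * sigma_x
```

Away from this working point the upper-level occupation is |Ξ_12|² = 4𝒫(1-𝒫)sin²(Φ_St), so the same function can be used to scan the interference pattern over 𝒫 and Φ_St.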
In principle, a double LZSM transition drive, in conjunction with a rotation around the z-axis, can implement any single-qubit gate.For the Hadamard gate implemented by two LZSM transitions with the same slope in the anti-crossing region during each transition (𝒫_1=𝒫_2) the driving signal should satisfy the conditions𝒫=2 ±√(2)/4, ϕ_S+ζ_2=π/2+2π n_1,ζ_1+ζ_2+ζ_3=π/2+2π n_2,ζ_1-ζ_3=2π n_3.§.§.§ Optimization to speed up gates: multiple passage driveTo speed up the gate, multiple LZSM transitions can be used.We consider the simplest driveε(t) = - A , 0 < t < t_pre, - A cosω t, t_pre < t < t_pre + 2 π k /ω, - A, t_pre + 2 π k /ω < t < t_pre + 2 π k /ω + t_after,with an even number 2k of successive LZSM transitions with the same probability of LZSM transition 𝒫, and Stückelberg phase Φ_St, and the idling periods with phase accumulation at the start and end of the drive with durations t_pre and t_after, respectively. Here, k=1,2,... is the number of periods of the cosine.For the case of four LZSM transitions, the evolution matrix of the harmonic part of the drive, ε(t)= - A cosω t, can be found as the multiplication of two evolution matrices for double passage  (<ref>): Ξ_Q = Ξ^2 = [ Ξ _Q11 Ξ _Q12; -Ξ^*_Q12Ξ_Q11^∗ ], where Ξ_Q11 = R^4e^-4iΦ_St+T^4++ 2R^2T^2[e^-2iΦ_St-2sin^2(Φ_St)],Ξ_Q12 = 4iRTsin(Φ_St)[R^2cos(2Φ_St)+T^2]. Since T=√(𝒫) and R=√(1-𝒫), the evolution matrix depends on two parameters of the drive: the probability of a single LZSM transitions 𝒫, and the Stückelberg phase Φ_St. The idling periods before and after the main part of the drive result in the phase-shift gates R_z(ϕ_pre) and R_z(ϕ_after), respectively [see Eq. (<ref>)].The parameters of the drive are found by equating the total evolution matrix of the driven qubit to the matrix of the required operation, multiplied by the factor e^iφ with an arbitrary φ, as it does not affect the dynamics of the system [see Eq. (<ref>)]:R_z(ϕ_pre) Ξ_Q R_z(ϕ_after) = e^iφ H. Here we describe a simple algorithm for finding the optimal parameters of the drive with four LZSM transitions 𝒫, Φ_St, A, ω, t_pre, t_after that implements the Hadamard gate H.The final upper energy-level occupation is the occupation probability of the exited state of the qubit |E_+⟩ after applying the drive with four LZSM transitions to the qubit in the ground state |E_-⟩, and is given by𝒫_final = |Ξ_Q12|^2. ∙ First, we find the possible values of a single LZSM transition probability 𝒫 that provide the target final upper energy-level occupation probability after four transitions 𝒫_final = 𝒫_target. These can be found as the crossings of the red curve and orange horizontal line in Fig. <ref>. The red curve represents the maximum possible final upper energy-level occupation after four LZSM transitions 𝒫_final after varying through all possible values of Φ_St∈[0,π] using Eqs. (<ref>) and (<ref>). The orange horizontal line represents the level of 𝒫_final = 𝒫_target. For the H operation, the target final upper energy-level occupation of the qubit 𝒫_target = 1/2.Larger values of 𝒫 provide shorter transition durations and shorter gate durations, so the largest possible value of 𝒫 is selected. For the H operation realized with four LZSM transitions the largest possible value 𝒫^*≈ 0.962. For the X operation with four LZSM transitions, 𝒫^*=(2+√(2))/4 coincides with the LZSM transition probability for the H operation, realized with two LZSM transitions, see Eq. 
(<ref>).∙ At the second step, we find the Stückelberg phase Φ_St that provides the target final transition probability 𝒫_final = 𝒫_target given the obtained value of 𝒫 using the blue curve in Fig. <ref>. For the H operation the solution is given byΦ_St=π/2+π n,where n is an integer. This is also a condition for the constructive interference between the LZSM transitions.∙ After finding 𝒫 and Φ_St, at the third step, we find the amplitude A and frequency ω of the signal  (<ref>). Equation (<ref>) defines the relation between the frequency ω and the amplitude A for a certain 𝒫. Using Eqs.  (<ref>),  (<ref>),  (<ref>),  (<ref>) and  (<ref>), we build Fig. <ref> and find the possible values of the amplitude A of the signal that provides the required value of the Stückelberg phase Φ_St target, found in the previous step for the previously found 𝒫=𝒫^*. ∙ Finally, at the fourth step we determine the required idling durations before and after the main part of the drive with LZSM transitions,t_pre and t_after. Substitution of the obtained evolution matrix of the main part of the drive Ξ_Q to Eq. (<ref>) allows to find the required accumulated phases ϕ_pre and ϕ_after. Then using Eqs. (<ref>) and  (<ref>) we determine the durations t_pre and t_after. In the example considered here, the accumulated phases ϕ_pre = ϕ_after = π/2, and the durations t_pre and t_after depend on the choice of the amplitude A in the previous step of the algorithm. This algorithm of finding the parameters of the drive (<ref>) with an even number of LZSM transitions for the implementation of any single-qubit operation can be summarized as follows: * Find the probability of a single LZSM transition 𝒫 which provides the desired final upper-level occupation probability 𝒫_final = 𝒫_target. See the red curve cross with the orange horizontal line in Fig. <ref>. * Find the required Stückelberg phase Φ_St. See the blue curve in Fig. <ref>. * Find the combination of the amplitude A and the frequency ω that provides the required values of 𝒫 and Φ_St. See Fig. <ref>. * Determine the idling times before and after the main part of the drive with LZSM transitions, t_pre and t_after. This algorithm allows to find the optimal parameters of the drive with an arbitrary even number of LZSM transitions. Here we consider only a two-level quantum system, but real quantum systems are usually multilevel. The LZSM transitions during the passage of nearest level anti-crossings with different energy levels will influence the dynamics, so it is important to limit the amplitude of the drive, so that the next nearest anti-crossings are not reached.§.§ FidelityThe relaxation and dephasing are not considered in this paper. Thus the infidelities of the gates arise because the theories of RWA and AIM that are used to obtain the parameters of the driving signals are approximate. The infidelities due to numerical solution errors are negligible in comparison with infidelities due to approximations in the theories.The fidelities are found using quantum tomography <cit.>, which consists in applying the gate for many different initial states, which span the Hilbert space, and then calculating the average fidelity between the obtained states and the target state using <cit.> F(ρ,ρ_t)=(tr√(√(ρ)ρ_t√(ρ)))^2.Here, ρ is the density matrix obtained by numerical simulation of the qubit dynamics by solving the Liouville-von Neumann equation Eq. (<ref>), andρ_t=Uρ_inU^† is the target state, obtained by applying the gate operator U to the initial density matrix ρ_in. 
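The state fidelity defined above is straightforward to evaluate numerically. The sketch below is a minimal illustration: the slightly over-rotated gate simply stands in for a numerically simulated evolution and is not a result of this work, and a slightly mixed initial state is used so that all matrix square roots are well conditioned.

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(rho, rho_t):
    """F(rho, rho_t) = (tr sqrt( sqrt(rho) rho_t sqrt(rho) ))^2."""
    s = sqrtm(rho)
    return np.real(np.trace(sqrtm(s @ rho_t @ s))) ** 2

# Target state rho_t = U rho_in U^dagger for an ideal X gate.
rho_in = np.array([[0.9, 0.0], [0.0, 0.1]], dtype=complex)   # slightly mixed initial state
X = np.array([[0, 1], [1, 0]], dtype=complex)
rho_t = X @ rho_in @ X.conj().T

# A slightly over-rotated x-rotation stands in for the simulated dynamics.
def Rx(angle):
    return np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * X

U_sim = Rx(1.02 * np.pi)
rho_sim = U_sim @ rho_in @ U_sim.conj().T

print("F =", fidelity(rho_sim, rho_t))   # close to, but below, 1
```

Repeating this calculation over a grid of initial states and averaging gives the tomography-style figure of merit used below.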
Then we calculate the averaged fidelity for different equidistant initial conditions on the Bloch sphere F̅=∑_n=1^N F(ρ_n, ρ_t_n)/N.To better compare the difference between Rabi and LZSM approaches we will use the error rate D=1-F̅.The LZSM probability formula 𝒫=exp(-2πδ) is derived for a linear drive with an infinite time, ε (t) = vt, t ∈ (-∞, ∞), leading to an infinite amplitude of the driving signal. Thus, for the considered non-linear drive ε(t)=-Acos(ω t) with the finite amplitude A, the fidelity of the LZSM gate increases with the amplitude of the drive A. Considering Eq. (<ref>), the amplitude of the drive is proportional to the duration of the gate, A ∼ 1/ω∼ T. So the fidelity of the LZSM gate increases with its duration, the gate error D decreases,and a satisfactory balance between the fidelity and speed of the gate should be found.Figure <ref> illustrates that the gate error rate D using the LZSM implementation decreases much faster with time than using the usual Rabi approach.An alternative method to determine the gates fidelity is the Randomized benchmarking <cit.>, which considers how the fidelity decreases with increasing the number of applied operations. Here we only used the quantum tomography method, as a simpler one for numerical calculations.In experiments, there are methods of improving the gate fidelity based on the back-response loop, also know as quantum control or robust control, for example gradient ascent or Krotov's method <cit.>. § TWO-QUBIT GATES§.§ HamiltonianNow we consider a Hamiltonian of two coupled qubits <cit.>H = -1/2∑_i=1,2 ( Δ_i σ^(i)_x + ε_i (t) σ^(i)_z ) - -g/4 ( σ^(1)_x σ^(2)_x + σ^(1)_y σ^(2)_y )- J/4σ^(1)_z σ^(2)_z,with the external drive of the second qubit which results in the driving ε_2(t). The energy levels of this Hamiltonian normalized to the coupling strength g are shown in Fig. <ref>(a).Although other choices for the interaction part of the Hamiltonian are possible, we will now consider a transverse coupling with XY-interactionH^XY_int = -g/4 ( σ^(1)_x σ^(2)_x + σ^(1)_y σ^(2)_y ),resulting in a splitting between E_1 and E_2 adiabatic energy levelsversus g at the crossing of |01⟩ and |10⟩ diabatic energy levels, and longitudial couplings with ZZ-interaction termH^ZZ_int = -J/4σ^(1)_z σ^(2)_z,resulting in a shift between the(E_0-E_1) and (E_2-E_3) adiabatic energy-level anti-crossings on the value of J [see Fig. <ref>(a)]. The JJ- or Heisenberg interaction is the particular case when both terms are present and J=g.The difficulty of generating a particular operation depends on the available coupling terms. On the other hand, for each type of coupling there are two-qubit gates which can be implemented in a straightforward manner <cit.>. §.§ iSWAP gate One of the simplest natural two-qubit operations when the XY-type of coupling is present, is an iSWAP gateiSWAP =[ 1 0 0 0; 0 0 i 0; 0 i 0 0; 0 0 0 1; ].Its LZSM implementation should involve passages of the anti-crossing between the adiabatic levels E_1 and E_2, located at ε_2 = ε_1. For simplicity, here we demonstrate an LZSM realization of the iSWAP gate for the Hamiltonian (<ref>) with only XY-interaction term, when J=0 and the(E_0-E_1) and (E_2-E_3) anti-crossings are both located at ε_2=0 [see Fig. <ref>(a)].As in the case of the X gate, it is impossible to implement an LZSM transition with an arbitrary 𝒫 with high fidelity by only one passage, so at least two passages are required. Thus, we consider a drive ε_2(t) with the following form [see Figs. 
<ref>(b) and <ref>(b)]:ε_2(t) = ε_1- A cosω t,0 < t < t_1, ε_1 + A,t_1 < t < t_2, ε_1 + A cosω (t-T_c/2 - T_1), t_2 < t < t_3,wheret_1 = T_c/2, t_2 = T_c/2 + T_1, t_3 = T_c + T_1.It consists of two half-periods of cosine with period T_c =2 π/ω and amplitude A, separated in the middle by an idling period with time T_1. For the simpler form of a signal without the idling we would obtain a system of three equations on the signal parameters with only two parameters present, A and T_c. So an additional degree of freedom, like an idling time T_1, would be needed.As in the case of a single qubit, we build the dependence of the adiabatic energy levels on time in Fig. <ref>(a), introduce all values of the transition probabilities 𝒫_i for each diabatic transition N_i, and define the phase gains ζ^(ij)_k between adiabatic levels E_i and E_j for various periods of the adiabatic evolution asζ^(ij)_k = 1/2 ħ∫_t_N(k-1)^t_N(k)[ E_j(t) - E_i(t) ] dt,where t_N0 = 0, and t_N3 = t_3.Generally, for a multi-level quantum system, the matrix of the diabatic (LZSM) transition between the adiabatic energy levels |E_i⟩ and |E_j⟩ (i<j) with the LZSM probability 𝒫 in the adiabatic basis is defined asN = R e^iϕ_S|E_i⟩⟨E_i| + R e^-iϕ_S|E_j⟩⟨E_j| + + α T |E_i⟩⟨E_j| - α T |E_j⟩⟨E_i| + ∑_ki,j|E_k⟩⟨E_k|,where the transition and reflection coefficients, T and R, and the Stokes phase ϕ_S are determinedby the LZSM probability 𝒫, see Eqs. (<ref>) and (<ref>). The coefficient α depends on the direction of the passage of the adiabatic energy levels anti-crossing. Far from the anti-crossing region, the energies of the adiabatic states |E_i⟩ and |E_j⟩ asymptotically approach the energies of some diabatic states |m⟩ and |n⟩, where m<n. Here, we assume that the diabatic basis { ..., |m⟩,..., |n⟩,...} is the one, in which the Hamiltonian is defined. If before the passage of the adiabatic energy levels anti-crossing energy of the lower adiabatic level |E_i⟩ is asymptotically close to the energy of the diabatic level with the lower sequence number |m⟩, then α=1. If before the passage |E_i⟩ is asymptotically close to |n⟩, then α=-1.For the considered Hamiltonian (<ref>), defined in the diabatic basis {|00⟩, |01⟩, |10⟩, |11⟩}, and the drive (<ref>), the matrices of the diabatic transitions are defined byN_k =[1000;0R_k e^iϕ_S(k)α_k T_k0;0- α_k T_k R_k e^-iϕ_S(k)0;0001 ],where k=1,2, α_1 = 1, α_2 = -1.Here, T_i, R_i, ϕ_S(i) are the transition, reflection coefficients, and the Stokes phase for the diabatic transition N_i.The matrix for the adiabatic evolution U_n for the interval of evolution n=1,2,3 is diagonal with componentsU_(n)00 = e^i (ζ^(01)_n + ζ^(12)_n + ζ^(23)_n), U_(n)11 = e^i ( -ζ^(01)_n + ζ^(12)_n + ζ^(23)_n), U_(n)22 = e^i (- ζ^(01)_n - ζ^(12)_n + ζ^(23)_n ), U_(n)33 = e^ -i(ζ^(01)_n + ζ^(12)_n + ζ^(23)_n).In the case of a n-level quantum system, the matrix of the adiabatic evolution is diagonal with componentsU_kk = exp{ i∑_j=0^n-2β_jkζ^(j,j+1)}, β_jk = 2θ(j-k)-1,where k=0,1,...,n-1, and θ is the Heaviside step function. 
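The multi-level transition and adiabatic-evolution matrices just introduced can be assembled numerically as follows. In this sketch the Stokes phase is computed from the single-passage probability, while the probabilities and phase gains are placeholder values chosen purely for illustration, not parameters fitted to a particular gate.

```python
import numpy as np
from scipy.special import loggamma

def stokes_phase(P):
    """Stokes phase phi_S from the single-passage LZSM probability P = exp(-2*pi*delta);
    valid for 0 < P < 1."""
    delta = -np.log(P) / (2.0 * np.pi)
    return (np.pi / 4 + delta * (np.log(delta) - 1.0)
            + np.imag(loggamma(1.0 - 1j * delta)))

def transition_matrix(P, i, j, alpha=1, dim=4):
    """Diabatic (LZSM) transition between adiabatic levels i < j of a dim-level system,
    acting as the identity on all other levels; alpha = +/-1 encodes the passage direction."""
    T, R = np.sqrt(P), np.sqrt(1.0 - P)
    phi_S = stokes_phase(P)
    N = np.eye(dim, dtype=complex)
    N[i, i] = R * np.exp(1j * phi_S)
    N[j, j] = R * np.exp(-1j * phi_S)
    N[i, j] = alpha * T
    N[j, i] = -alpha * T
    return N

def adiabatic_matrix(zetas):
    """Diagonal adiabatic evolution for phase gains zetas = [zeta^(01), zeta^(12), ...],
    using the sign rule beta_jk = 2*theta(j - k) - 1 given above."""
    dim = len(zetas) + 1
    U = np.zeros((dim, dim), dtype=complex)
    for k in range(dim):
        beta = np.array([2 * np.heaviside(j - k, 1) - 1 for j in range(dim - 1)])
        U[k, k] = np.exp(1j * np.dot(beta, np.asarray(zetas)))
    return U

# Example: a single passage of the (E1-E2) anti-crossing sandwiched by adiabatic evolution,
# i.e. U_2 N_1 U_1, with placeholder phase gains.
N1 = transition_matrix(P=0.5, i=1, j=2, alpha=+1)
U1 = adiabatic_matrix([0.3, 1.1, 0.7])
U2 = adiabatic_matrix([0.2, 0.8, 0.5])
print(np.round(U2 @ N1 @ U1, 3))
```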
The rule of the sign β_jk determination can be summarized as the following: if the area of the phase accumulation, corresponding to ζ^(j,j+1) term is below the adiabatic energy level E_k, then β_jk=-1; if above it, then β_jk=1.The evolution matrix for the whole period becomesΞ = U_3 N_2 U_2 N_1 U_1.After simplifying byζ^(01) = ζ^(01)_1 + ζ^(01)_2 + ζ^(01)_3, ζ^(23) = ζ^(23)_1 + ζ^(23)_2 + ζ^(23)_3,taking the common phase e^i ξ out of brackets and neglecting it (as the common phase of the wave function after the logic gate is irrelevant), we obtain the evolution matrixΞ = [1000;0 U_11 U_120;0 U_21 U_220;000 U_33 ],that depends on the values 𝒫_1, 𝒫_2, ζ^(01), ζ^(12)_i, ζ^(23). Equating it to the matrix of a required two-qubit iSWAP gate allows to determine the parameters of the external signal which implements this gate:𝒫_1+𝒫_2= 1, ϕ_S1 + ϕ_S2 + 2ζ^(12)_2= π + 2 π n_1, ϕ_S1 + 2(ζ^(01) + ζ^(12)_1 + ζ^(12)_2 ) = π/2 + 2 π n_2, ϕ_S2 + 2(ζ^(01) + ζ^(12)_2 + ζ^(12)_3 ) = π/2 + 2 π n_3. For the considered signal (<ref>) with 𝒫_1 = 𝒫_2 = 𝒫, resulting in ζ^(12)_1 = ζ^(12)_3, and in the case when only the XY-interaction is present (J=0), resulting in ζ^(01) = ζ^(23), the conditions simplify to𝒫 = 1/2, ϕ_S + ζ^(12)_2= π/2 +π n_1, ϕ_S + 2(ζ^(01) + ζ^(12)_1 + ζ^(12)_2 ) = π/2 + 2 π n_2. The first equation results in a linear dependence between A and T_c:A = g^2/4 ħln2 T_c.Then we numerically solve two other equations on two parameters of the signal T_c and T_1. In Fig. <ref> we demonstrate the dynamics of the iSWAP gate for a particular solution with g T_c/h = 1.517, g T_1/h = 0.125, and A/g=3.438 for the Hamiltonian (<ref>) with the parameters Δ_1/g=0.3,Δ_2/g=1, ε_1/g=16.6, J=0; and compare the approximate solution obtained by the adiabatic-impulse model with the numerical solution of the Schrödinger equation. The same parameters of the Hamiltonian and the drive were used for Fig. <ref>, with the exception of the larger amplitude A/g=9. § CONCLUSIONWe further developed the paradigm of the alternative quantum logic gates, based on the LZSM transitions. We demonstrated how the adiabatic-impulse model can be used for implementing single- and two-qubit gates, demonstrated how to increase the gate speed, and the technique of finding the balance between speed and fidelity of the gates. We also demonstrated the comparison of the theoretical error rate for conventional Rabi gates and alternative LZSM gates for various logic gate durations.The adiabatic-impulse model is applicable for any quantum multi-level systems with two conditions. Firstly, it works well for a large drive amplitude, A > Δ. In terms of the requirements for a quantum system, this means that for the considered level anti-crossing, its minimal energy splitting Δ should be much less than the distance to the nearest level anti-crossings. Secondly, the time between the LZSM transitions should be larger than the time needed for the transition process. This condition limits the maximal frequency of the driving signal in the multi-passage implementation, and the minimal gate duration, respectively.An arbitrary single-qubit quantum logic gate can be performed with only two LZSM transitions. However, the considered option of gate implementation with multiple LZSM transitions provide a better combination of gate duration and fidelity. 
We demonstrated the technique of implementing an arbitrary single-qubit logic gate with any number of the LZSM transitions.For the multi-level quantum systems the considered general method of implementing quantum logic gates with LZSM transitions is the following: choose the shape of the driving signal so that it passes the required level anti-crossings for a given gate. For the considered signal compute the dependence of the adiabatic energy levels of the system on time. Introduce the transition probabilities 𝒫_i for each diabatic transition and phase gains ζ^(ij)_k between all pairs of successive adiabatic levels E_i and E_j for each period of adiabatic evolution. Using them, compose all matrices of the diabatic transition N_i and adiabatic evolution U_i, multiply them, and obtain the total evolution matrix. Equating it to the matrix of the required quantum logic gate multiplied by an arbitrary phase term e^i φ allows to determine the required parameters of the driving signal, that implements this logic gate.Note added. After this work was completed, we became aware of a recent relevant preprint <cit.>. The research of A.I.R, O.V.I., and S.N.S. is sponsored by the Army Research Office under Grant No. W911NF-20-1-0261. A.I.R. and O.V.I. were supported by the RIKEN International Program Associates (IPA). S.N.S. is supported in part by the Office of Naval Research Global, Grant No. N62909-23-1-2088. F.N. is supported in part by: Nippon Telegraph and Telephone Corporation (NTT) Research, the Japan Science and Technology Agency (JST) [via the Quantum Leap Flagship Program (Q-LEAP), and the Moonshot R&D Grant Number JPMJMS2061], Office of Naval Research (ONR), and the Asian Office of Aerospace Research and Development (AOARD) (via Grant No. FA2386-20-1-4069). § ISWAP-LIKE GATES. SWAP, √(SWAP), √(ISWAP)The evolution matrix (<ref>) can also implement the SWAP, √(SWAP), √(iSWAP), and other iSWAP-like gates. The system of equations (<ref>) written for the SWAP gate is not compatible. Thus, the SWAP gate cannot be implemented by only two passages of the (E_1-E_2) adiabatic energy-level anti-crossing for the Hamiltonian (<ref>) with only XY-coupling. It can, however, be implemented when ζ^(01)ζ^(23), in case both XY- and ZZ-interactions are present, as in Fig. <ref>(a). The corresponding conditions are written as𝒫_1+𝒫_2= 1, ϕ_S1 + ϕ_S2 + 2ζ^(12)_2= π + 2 π n_1, ϕ_S1 + 2(ζ^(01) + ζ^(12)_1 + ζ^(12)_2 ) = (1+λ)π/2 + 2 π n_2, ϕ_S2 + 2(ζ^(01) + ζ^(12)_2 + ζ^(12)_3 ) = (1+λ)π/2 + 2 π n_3, ζ^(01) +ζ^(12)_1 + ζ^(12)_2 + ζ^(12)_3 + ζ^(23) = π n_4,where λ=1 for the SWAP gate and λ=0 for the iSWAP gate.The√(SWAP) and√(iSWAP) gates do not provide a full swap of energy-level occupation probabilities between two levels when 𝒫=1; so they could be implemented by a single passage of the (E_1-E_2) level anti-crossing. § CNOT-LIKE GATES. CNOT, CZ, CPHASEZZ-type couplings allow to implement CNOT, CZ, CPHASE gates. Their LZSM implementations should involve passages of the anti-crossing between adiabatic energy levels E_2 and E_3 at ε_2=J/2 [see Fig. <ref>(a)]. Here we demonstrate an LZSM realization of the CNOT gate for the Hamiltonian (<ref>) with both XY- and ZZ-couplings, although only ZZ- is required.As for the X gate, it is impossible to implement an LZSM transition with an arbitrary 𝒫 with high fidelity by only one passage; so at least two passages are required. We now consider a drive ε_2(t) in the following form [see Fig. 
<ref>(b)] ε_2(t) = J/2-A,0<t<t_1, J/2 - A cosω (t-T_1),t_1 < t < t_2, J/2 + A, t_2 < t < t_3, J/2 + A cosω (t-T_1 - T_c/2 - T_2), t_3 < t < t_4, J/2-A, t_4 < t < t_5,wheret_1 = T_1, t_2 = T_1 + T_c/2, t_3 = T_1 + T_c/2 + T_2, t_4 = T_1 + T_c + T_2, t_5 = 2T_1 + T_c + T_2. As in the case of a single-qubit and iSWAP gate, we compute the time dependence of the adiabatic energy levels [see Fig. <ref>(a)], introduce the values of the transition probabilities 𝒫_i for each diabatic transition N_i, and define all phase gains ζ^(ij)_k (<ref>) between adiabatic levels E_i and E_j for the various periods of the adiabatic evolution.The matrices for the diabatic transitions are defined asN_k =[1000;0100;00R_k e^iϕ_Sk- α_k T_k;00α_k T_k R_k e^-iϕ_Sk ],where k=1,2, α_1 = 1, α_2 = -1. The matrix for the adiabatic evolution is given by Eq. (<ref>). The evolution matrix for the whole period can be found as (<ref>). After simplifying byζ^(01) = ζ^(01)_1 + ζ^(01)_2 + ζ^(01)_3, ζ^(12) = ζ^(12)_1 + ζ^(12)_2 + ζ^(12)_3,taking the common phase e^i φ out of brackets and neglecting it (as the common phase of the wave function is irrelevant), we obtain the evolution matrix in the formΞ = [1000;0 U_1100;00 U_22 U_23;00 U_32 U_33 ],which depends on the values 𝒫_1, 𝒫_2, ζ^(01), ζ^(02), ζ^(23)_i. Equating it to the matrix of a required two-qubit CNOT gateCNOT =[ 1 0 0 0; 0 1 0 0; 0 0 0 1; 0 0 1 0; ]allows to determine the parameters of the external signal which implements this gate:𝒫_1+𝒫_2= 1, ζ^(01)=π n_1, ϕ_S1 + ϕ_S2 + 2 ζ^(23)_2= π +2 π n_2, ϕ_S1 + 2 ζ^(12) + 2 ζ^(23)_1 + 2 ζ^(23)_2 = 2 π n_3, ϕ_S2 + 2 ζ^(12) + 2 ζ^(23)_2 + 2 ζ^(23)_3 = 2 π n_4.For the considered signal (<ref>) with 𝒫_1 = 𝒫_2 = 𝒫 and ζ^(23)_1 = ζ^(23)_3 the conditions simplify to𝒫= 1/2, ζ^(01)=π n_1, ϕ_S +ζ^(23)_2 = π/2 + π n_2, ϕ_S + 2 ζ^(12) + 2 ζ^(23)_1 + 2 ζ^(23)_2 = 2 π n_3.In Fig. <ref> we illustrate the dynamics of the CNOT gate for a particular solution with Δ_2 T_c/h=2.0394, Δ_2 T_1/h=0.0109,Δ_2 T_2/h=0.0288, A/ Δ_2 = 4.6217 for the Hamiltonian (<ref>) with the parameters Δ_1/g=0.3,Δ_2/g=1, ε_1/g=16.6, J/g=10, and compare the approximate solution obtained by the adiabatic-impulse model with the numerical solution of the Liouville-von Neumann equation. The same parameters were used for Fig. <ref>.The evolution matrix (<ref>) can also implement the CZ, CPHASE, and other CNOT-like gates. apsrev41Control
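As an end-to-end illustration of the general recipe (compose the diabatic transition matrices N_i and the adiabatic evolution matrices U_i and multiply them), the following sketch builds Ξ = U_3 N_2 U_2 N_1 U_1 for two passages of the (E_2-E_3) anti-crossing with placeholder phase gains and 𝒫 = 1/2. It only demonstrates the block structure of the resulting evolution matrix; it does not solve the CNOT conditions above for the drive parameters.

```python
import numpy as np

# Placeholder parameters: transition probability 1/2 at the (E2-E3) anti-crossing and
# arbitrary phase gains; these are illustrative, not the fitted CNOT drive parameters.
P = 0.5
T = R = np.sqrt(P)
phi_S = 0.35                       # placeholder Stokes phase
z01, z12, z23 = 0.4, 0.9, 1.3      # placeholder phase gains zeta^(01), zeta^(12), zeta^(23)

def N(alpha):
    """Diabatic transition acting only on adiabatic levels E2 and E3."""
    M = np.eye(4, dtype=complex)
    M[2, 2] = R * np.exp(1j * phi_S)
    M[3, 3] = R * np.exp(-1j * phi_S)
    M[2, 3] = -alpha * T
    M[3, 2] = alpha * T
    return M

def U(z01, z12, z23):
    """Diagonal adiabatic evolution with the beta_jk sign rule."""
    return np.diag(np.exp(1j * np.array([ z01 + z12 + z23,
                                          -z01 + z12 + z23,
                                          -z01 - z12 + z23,
                                          -z01 - z12 - z23])))

Xi = U(z01, z12, z23) @ N(-1) @ U(z01, z12, z23) @ N(+1) @ U(z01, z12, z23)
print(np.round(Xi, 3))   # only the lower-right 2x2 block (|10>, |11>) is non-diagonal
```

Equating a matrix of this block form to the target CNOT-like gate, as described above, then yields the conditions on 𝒫 and the phase gains.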
http://arxiv.org/abs/2310.17932v1
{ "authors": [ "A. I. Ryzhov", "O. V. Ivakhnenko", "S. N. Shevchenko", "M. F. Gonzalez-Zalba", "Franco Nori" ], "categories": [ "quant-ph", "cond-mat.mes-hall" ], "primary_category": "quant-ph", "published": "20231027071101", "title": "Alternative fast quantum logic gates using nonadiabatic Landau-Zener-Stückelberg-Majorana transitions" }
inst1]Baskoro Adi Pratomo [email protected][inst1]organization=Informatics Department, Institut Teknologi Sepuluh Nopember,city=Surabaya,state=East Java, country=Indonesiainst2]Toby Jackson inst2]Pete Burnap inst2]Andrew Hood inst2]Eirini Anthi[inst2]organization=School of Computer Science and Informatics, Cardiff University,city=Cardiff,state=Wales, country=United Kingdom Analysing malware is important to understand how malicious software works and to develop appropriate detection and prevention methods. Dynamic analysis can overcome evasion techniques commonly used to bypass static analysis and provide insights into malware runtime activities. Much research on dynamic analysis focused on investigating machine-level information (e.g., CPU, memory, network usage) to identify whether a machine is running malicious activities. A malicious machine does not necessarily mean all running processes on the machine are also malicious. If we can isolate the malicious process instead of isolating the whole machine, we could kill the malicious process, and the machine can keep doing its job. Another challenge dynamic malware detection research faces is that the samples are executed in one machine without any background applications running. It is unrealistic as a computer typically runs many benign (background) applications when a malware incident happens. Our experiment with machine-level data shows that the existence of background applications decreases previous state-of-the-art accuracy by about 20.12% on average. We also proposed a process-level Recurrent Neural Network (RNN)-based detection model. Our proposed model performs better than the machine-level detection model; 0.049 increase in detection rate and a false-positive rate below 0.1. malware analysis dynamic analysis event log sysmon process-level data§ INTRODUCTIONMalware attacks are prevalent nowadays. According to Statista <cit.>, there were 5.4 billion malware attacks in 2021 and 2.8 billion attacks in the first half of 2022. To tackle the problem of malware infection, researchers and security companies have developed various solutions to detect malicious activities. A common approach for detecting malware in a system is by analysing the file's content. For example, by calculating the hash value of a file and comparing it with a database of known malicious hashes; or by examining the codebase. The aforementioned approaches have two problems. Firstly, they assume that all malware can be located by examining a malicious file. There have been some incidents <cit.> that involve file-less malware. This kind of malware implants itself into a specific process in memory; thus, examining hash values and code bases is less effective for identifying known malware signatures. Secondly, both approaches rely on a database of known malicious hashes or malware signatures. They cannot detect malware that has never previously been seen.Machine learning (ML) techniques have been proposed to help identify malware without depending on code or hash file analysis. ML works by analysing various malware and benignware features. For example, previous approaches have analysed machine-level resource usage (e.g., CPU, memory, disk usage) during malware and benignware execution. Then, those features are used to see whether the machine was running malware or not <cit.>, <cit.>. Machine-level data captures the usage of a machine while opening and running applications, files etc. 
Previous work has collected machine data in aggregate form, combining all the various applications and activities on a machine into one observation and determining whether malicious activity is present on a second-by-second basis <cit.>. In reality, malware may be running alongside other benignware on a machine simultaneously. It is unlikely that a machine would have malware running on its own without other benign processes running in parallel. Therefore, the machine-level data does not solely capture malicious behaviour but will also be affected by benign process activities.

In this paper we argue that process-level data captures the behaviour of each running process and, as such, provides a more granular view of which activities are malicious. We can separate the benign and malicious activities of a machine. Process-level data also captures information such as what processes are created, which registry entries are modified, what files are created or modified, what domain names are contacted, and many others. In this research, we conduct a novel investigation into process-level features for malware detection and compare these results with machine-level features. We argue that the existence of background applications during malware execution will affect the performance of the existing model. Therefore, we propose that process-level data is more suitable for detecting malware activities.

Additionally, datasets used in previous research on dynamically detecting malware using ML used only a single virtual machine to generate their malicious and benign behavioural activity, frequently without any other background applications running. In an enterprise environment, this is unrealistic, casting doubt over the applicability of previously tested methods in practice. Typically, we would have multiple computers running various applications, some of which might be malicious. For this reason, in this research, we created a virtualised small-medium enterprise topology to emulate a real-world network, executed both malware and benign samples across multiple machines, alongside typical benign activity, and then captured both machine-level and process-level data from each machine on the network. The generated data enabled us to evaluate the first approach to studying process-level data for malware detection, while doing so in a much more realistic and diverse multi-endpoint virtual network.

In summary, the main contributions of this paper are as follows:
* The first ML-based malware detection model to determine whether a specific process is malicious by using process-level data.
* A malware dataset containing both machine-level and process-level data from benign and malicious samples. The data also include second-by-second information, making it possible to evaluate the behaviour of malware execution over time.

The rest of the paper is structured as follows: Section <ref> discusses related work in malware datasets and detection. We discuss our data generation methodology and detection model development in Section <ref>. The results of our experiments are presented in Section <ref>. We present some issues and the limitations of our approach in Section <ref>. Lastly, the paper concludes in Section <ref>.

§ RELATED WORK In the area of malware datasets, we already have several datasets with millions of samples, such as Ember <cit.>, SoRel-20M <cit.>, and MOTIF <cit.>. The EMBER dataset was generated by extracting features from malicious and benign PE files using the LIEF project <cit.>.

SoRel-20M was produced by Sophos and contained disarmed malicious PE files. Besides the disarmed malicious PE files, SoRel-20M also provides features from the PE files, which were extracted using EMBER's . Similarly, the MOTIF dataset <cit.> contains 3,095 disarmed malicious PE files and is labelled with malware family labels. It also provides EMBER's raw features from its samples. Despite having many samples, the aforementioned datasets only extracted their features from static analysis of the malicious and benign files. Despite its data richness, features obtained from the static analysis may be limited as the malware may hide its characteristics by using static analysis evasion techniques. On the other hand, the dynamic analysis may provide better insight into malware behaviour while running. None of the aforementioned datasets executed their samples and captured the sample behaviour.<cit.> generated a malware and benignware dataset by executing both malicious and benign samples in a virtualised environment using Cuckoo <cit.>. While the sample was executed, they captured machine-level data (i.e., CPU usage, memory usage, network usage, and the number of processes). They also developed a Recurrent Neural Network model that can predict malicious behaviour early within five seconds. Similar to <cit.>, <cit.> generated a malware dataset by executing samples with Cuckoo. This research, however, does not provide data on CPU, memory, and network usage, but the authors captured the API calls conducted by the analysis machine during execution. All malware datasets <cit.> that were generated by dynamic analysis executed their samples in a single virtual machine or sandbox. This is to ensure malware execution is contained and does not spread beyond the analysis environment so that the analysis environment can quickly be reset to its original state without needing to reinstall the entire operating system. However, in reality, malware and benignware may interact with other machines, affecting the captured behaviour in other network parts. Therefore, in this research, we executed malware samples simultaneously as benignware being executed on different machines on the same network - creating a more realistic environment with more interaction between machines and more background noise. Table <ref> shows the difference between our malware dataset and the previous research. None of the previous research added background noise (i.e., benignware running simultaneously on a different machine) and captured process-level data.Dynamic analysis has been one of the two methods to identify malware. Looking at the malware behaviour when running can provide us with more information than merely looking at the static code. Apart from generating a dataset, we also evaluated how machine-level and process-level data for dynamic analysis can benefit malware detection. This is because capturing malicious behaviour from the machine-level information, such as CPU, memory, and network usage - as per <cit.> - may not capture sufficient information to identify the difference between malicious behaviour and benign or background processes. For that reason, our dataset also obtained Sysmon events that were generated by all processes such that we can know exactly what each application on the network was doing during execution.<cit.> and <cit.> each proposed a malware detection method using API calls. They looked at the list of API calls made by the applications and identified malicious behaviour from the sequence of API calls. 
<cit.> approached malware detection using the files created by the malware and developed a graph-based detection model accordingly. These approaches are based on process-level detection, but none look at the events created by the applications.Other research analyses Windows events generated by a process to find malicious software running on the system <cit.>. However, their approach only captured file creation, registry value set, and thread creation. Moreover, they did not analyse the effect of running background applications during sample executions. In comparison, our proposed method analysed the effect of background applications and considered various events; hence ours is more comprehensive. Table <ref> summarises the differences between our proposed method and the previous works.§ METHODOLOGY The section that follows is divided into two main parts. The first part details how we obtained the dataset required for the research. We accomplished this by executing both malware and benign samples in a controlled environment to simulate a real-world scenario. We collected the data generated by these executions, which included Windows Events such as process creation, network connection, file access, and registry access, among others.The second part of this section describes the development of the Recurrent Neural Network (RNN) model (i.e., Long Short-Term Memory <cit.> and Gated Recurrent Unit <cit.>) that we used to predict malicious processes early using their associated Windows Events as input features. We picked RNN-based approaches as they are suitable for handling a sequence, making them particularly useful for processing a sequence of Windows Events. We preprocessed the dataset to remove any noise or irrelevant data and then trained the RNN model using the remaining features. Our model was designed to learn from benign and malicious processes' patterns and classify unknown processes as benign or malicious based on their generated sequence of events. §.§ Data Generation We generated the data by running malicious and benign software in a virtualised environment to compare the machine-level and process-level information. Then, we captured the machine utilisation (i.e., CPU, memory, network usage, and the number of processes) and the Sysmon events whilesamples were running. The data generation experiment was conducted in a simulated network environment to represent a typical small to medium enterprise computer network <cit.>. The network consisted of five Windows 7 operating systems machines, a Windows Server 2016 machine that served as the Sysmon event collector, and a Linux machine that was used to detonate malware samples using Cuckoo Sandbox <cit.>. All these machines are connected through a layer-2 switch, as shown in Figure <ref>.To ensure that our data was accurate and reliable, we carefully configured each machine in the network environment. We recorded the specific operating systems, IP addresses, and installed applications used in each machine in Table <ref>. We also applied all necessary Windows updates to prevent Sysmon from generating duplicate Process GUIDs, which could have made it difficult to identify each process. The updates ensured that each process was uniquely identified by its GUID, and we were able to accurately track the events generated by each process during our experiments.During the data generation phase, we executed both benign and malicious samples on the network environment and collected the Windows Events generated by each execution. 
The events collected included process creation, network connection, file access, and registry access, among others. This process allowed us to obtain a large and diverse dataset, which we used to develop and train our Recurrent Neural Network (RNN) model. The RNN model was specifically designed to analyze the patterns of Windows Events generated by both benign and malicious processes and to accurately classify unknown processes based on their event sequences. Overall, the data generation experiment was carefully designed to ensure that our dataset was representative of real-world scenarios and that our analysis was based on accurate and reliable data.Logging Made Easy (LME) is an open-source initiative that combines various freely available software components to offer foundational security information logging on Windows devices. It simplifies the process of integrating with a Security Information and Event Management (SIEM) system. Thus, it serves as our data collection system for process-level information as it enables efficient logging and monitoring of security-related data which is based on Sysmon <cit.> and Windows Event Forwarder <cit.>.The LME Event Collector server had several responsibilities, including managing the analysis machines through Group Policy and handling DNS requests. We applied three primary policies to all analysis machines to enable effective data collection and malware analysis. First, we elevated a domain user to a local administrator to facilitate certain administrative tasks. Second, we enabled Windows Event Log forwarding to ensure that the generated Windows Events were collected in a centralized location. Finally, we disabled both Windows Firewall and Defender to prevent malware from being blocked or deleted before it could be executed.These policies were critical to the success of our experiment, as they ensured that the Windows Events were accurately collected and stored in one location. Disabling Windows Firewall and Defender enabled the malware to run without interference, allowing us to analyze its behaviour and generate the necessary data for our research. Overall, the LME Event Collector server played a crucial role in managing the analysis machines and facilitating our data collection and analysis processes.LME Event Collector collects Windows events from the analysis machines every fifteen seconds. The period of fifteen seconds was chosen to ensure all events would be collected before the VM reset to a clean state. We collected Sysmon event ID 1 (Process Creation), 2 (File Creation Time Changed), 3 (Network Connection), 5 (Process Termination), 7 (Image Load), 8 (Remote Thread Creation), 11 (File Creation), 12, 13, 14 (Registry-related events), and 22 (DNS queries). The server's ability to collect events was provided by Logging Made Easy (LME) [available at https://github.com/ukncsc/lme], which utilises Windows Event Forwarding.We executed malware by using Cuckoo to send files or binaries to a client machine. All logs from Cuckoo were then sent to the Cuckoo Server to be analysed later. As both the Cuckoo server and the client machines are running on virtual machines, we configured Cuckoo to use physical machine settings. We also modified Cuckoo to shut down VMs after it finished executing applications since the cyber range on which the virtualised network runs resets the VM state only when it is shut down. 
The shutting down process was actioned by calling a stop API request to the cyber range platform and restarted by making an API request to the cyber range platform. This activity would be recorded in the network traffic, but as of now, we did not capture any network traffic.Apart from its default behaviour log. We also set Cuckoo to collect the machine utilisation (i.e., CPU, memory, network usage, and the number of processes) with the script from <cit.> and to run two instances of Hollows Hunter. Hollows Hunter is used to scan for process hollowing; a technique commonly used to hide malicious processes. By default, it scans all active processes in the machine, but as some malware may run for a few milliseconds and thus evade detection, we ran two instances of Hollow Hunter, one for scanning all processes and the other for scanning the injected samples. Hollows Hunter log files were then sent to Cuckoo Server at the end of each sample execution.Cuckoo Sandbox operates by receiving a list of filenames that are to be injected into virtual machines. The Cuckoo daemon schedules the order in which the files are injected into each virtual machine. If multiple files are being injected into different virtual machines, their tasks can be executed simultaneously. In our experiment, we had five virtual machines available, which allowed us to execute up to five samples (both benign and malicious) simultaneously. We referred to each round of execution as an iteration. For instance, if we had a total of twenty samples, we would complete four iterations to execute all the samples.Before picking which malware and how many of them would be running at the same time, for each iteration, we picked a random number between zero to two (approx half of the number of machines). The next iterations started by picking a new random number from zero to two again. In the case of getting zero for three times in a row, the next iteration should pick any number greater than zero and less than two.Each iteration in the experiment involves running five samples with the random number of malware samples executed in that iteration. For this research, only malicious binaries are picked. We also tried to make the malicious samples to be diverse by including various type of malware. However, a malicious sample can belong to more than one category. Therefore, the number of each malware type is not balanced. Each sample was then executed into a randomly chosen VM. In each set of experiments, we initially had 200 malware and 200 benignware. In total, there are 1195 samples, a similar number of samples to the previous works <cit.>. The malicious samples are obtained from VirusShare <cit.>, while the benign samples came from the previous research <cit.>. As the number of malware and benignware was the same and the number of malware executed in each iteration was not always two, some benign samples may be executed multiple times, but malicious samples were executed only once. We also ensured that each experiment had different malicious and benign samples. The exact number of unique malicious and benign samples for each experiment is shown in Table <ref> and the number of each malware variant is listed in Table <ref>.Once each sample finished being analysed, we reset the VM, and Cuckoo would schedule the next sample to analyse immediately. Ideally, all samples finished at the same time, so the next five samples would start at the same time. 
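The per-iteration sampling logic described above can be summarised in a short sketch. The sample names, the helper function, and the exact handling of the zero-malware streak are illustrative assumptions; this is not the scheduling script used in the experiments.

```python
import random

def schedule_iterations(malware, benignware, n_vms=5, max_mal=2):
    """Sketch of the per-iteration scheduling described above: each iteration fills the
    available VMs with a random number of malware samples (0..max_mal, never zero three
    times in a row) and tops up with benign samples on randomly chosen VMs."""
    iterations, zero_streak = [], 0
    pool = list(malware)
    random.shuffle(pool)
    while pool:
        if zero_streak >= 3:
            n_mal = random.randint(1, max_mal)   # force at least one malware sample
        else:
            n_mal = random.randint(0, max_mal)
        zero_streak = zero_streak + 1 if n_mal == 0 else 0
        n_mal = min(n_mal, len(pool))
        chosen_mal = [pool.pop() for _ in range(n_mal)]           # malware runs only once
        chosen_ben = random.choices(benignware, k=n_vms - n_mal)  # benignware may repeat
        vms = random.sample(range(n_vms), n_vms)                  # random VM assignment
        iterations.append(list(zip(vms, chosen_mal + chosen_ben)))
    return iterations

# Hypothetical file names purely for illustration.
print(schedule_iterations([f"mal_{i}.exe" for i in range(4)],
                          [f"ben_{i}.exe" for i in range(6)])[0])
```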
However, there were some issues (explained in Section <ref>) with the Cyber Range which caused some samples to be left behind and executed later. To solve this problem, we set the start_time option in Cuckoo which allowed us to set the time before the analysis machine started executing the sample. Samples from the same iteration would always start at the same time in the record such that the malware samples will always be running at the same time as benignware which we refer to as background noise. As we randomised the VMs where the malware was executed, it will also give the ML model a challenge as the model would not be able to identify malware based on merely the information where the malware was executed.Each client machine was installed with standard office applications, such as browser, Word, Excel, Teams, Outlook, and PDF reader. We executed a sample for 120 seconds. After the time ran out, we sent Hollow Hunter log files and the machine utilisation records to Cuckoo Server before shutting down and resetting the machine. LME logs were collected at the end of each experiment by manually collecting them from the event collector server. Figure <ref> summarises the data generation process. §.§ Malware Detection ModelAfter collecting the data, we developed a detection model and re-evaluated the machine-level RNN model developed by <cit.>. Rhode et al.'s <cit.>'s RNN model reads system utilisation data every second and tries to predict whether a machine is running malware each second. The result shows that the detection gets more accurate over time. Our detection model builds on previous research by focusing on process-level data. Instead of detecting whether a particular machine is running malware in general, the model looks for specific malicious processes based on the sequence of events generated by a process and the data collected using Hollows Hunter <cit.>.As explained in Section <ref>, we did three experiments to generate malware activity data. We refer to the data generated by these experiments as Set-0, Set-1, and Set-2, respectively. Each set contains malicious and benign data, which will be used in subsequent experiments to evaluate the malware detection approaches.§.§ Machine-level detection modelThe malware detection using system utilisation data is heavily based on the previous work by <cit.>. The previous work developed an RNN model that analysed system utilisation every second and gave an accurate early prediction after analysing the data for five seconds. The model observes the machine's CPU (system and user) usage, memory usage, swap usage, the total number of processes, maximum process ID, and the number of bytes and packets transmitted and received. Figure <ref> shows a part of system utilisation data from an execution of a benign sample. The system utilisation data were taken by Cuckoo every second, as was done in <cit.>. Also, note that we developed a single model to be used in all machines. There is no machine-specific model as all machines will share the same model. This is an additional enhancement to previous research. There is no difference in terms of the methodology, but it is worth noting that <cit.> executed malicious applications in their analysis machine without other analysis machines executing benign samples, while our dataset added background noise during the analysis. 
Therefore, we expected to see performance degradation in the result as it should be harder for the model to distinguish between malware and benignware.For this experiment, we combined Set-0, Set-1, and Set-2 and then split the data into training and testing sets. The RNN model is trained with 10-fold cross-validation on the training set. Then we evaluated the model by using the testing set. Hence, no training data is mixed with the testing data. We also followed <cit.>'s approach to measure the model's quality by using accuracy metrics and added precision, recall, and F1-score metrics for better comparison with the process-level data.§.§ Process-level detection mode (LME Events and Hollows Hunter)Machine-level data can only tell us which machine is performing malicious activities. While this information might help identify the infected machine, it would be more beneficial to identify the specific malicious process. Knowing this enables us to shut down the specific process instead of the whole machine. In this research, we experimented with process-level data gathered from LME data and Hollows Hunter logs. The LME data contains important events which are generated by the process, while the Hollows Hunter data have the information on whether a particular process has potentially malicious implants (i.e., replaced/implanted PE files, shellcodes, hooks, or in-memory patches).§.§.§ Data Preprocessing LME data are essentially Windows Events stored in .evtx files. For easier handling, we converted the .evtx files to newline JSON format with evtx_dump tool <cit.>. The resulting JSON file contains a list of unordered events. For this research, we only considered event ID 1 (process creation), 3 (network connection), 5 (process termination), 12, and 13 (both are registry events). We then correlated the events by using ProcessGuid to look for a sequence of events generated by a process. As a result, we ended up with an event tree containing a list of process creation events and what the process did as shown in Figure <ref>.Although not all, some processes were identified containing implants by HollowsHunter, either malicious or not. The data needed to be incorporated with the LME data. As these data come from different sources, we identified the relationship between the HollowsHunter and the LME data by the process id, machine name, and timestamp.Each type of event has a different set of attributes, but some of them are shared. To model these events into a vector with uniform features, we flattened all possible attributes of an event and filtered out unnecessary attributes. If an event does not have the attribute, such as a process creation event that does not have information about the network endpoint it is connected to, we filled the attribute for that particular event as N/A. We removed attributes that have too many distinct values or contain the filename as it might hint too much to the model that the vector is malicious or benign. For non-binary categorical attributes, we transformed the features with one-hot encoding. And the timestamp was transformed to the number of milliseconds after the initial process creation event.Lastly, as a process may only have a set of numbers from Hollows Hunter, while it may generate multiple events, we repeated the Hollows Hunter data across the series of vectors of the particular process. In the end, we have 31 features, including the Hollows Hunter data and the timestamp. Table <ref> shows the list of the features used in our process-level detection model. 
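A sketch of this preprocessing pipeline is given below. The JSON field names (Event, EventData, ProcessGuid, UtcTime) are assumed from typical Sysmon output produced by evtx_dump and may need adjusting, and the Hollows Hunter counts are passed in as a pre-computed list; this illustrates the steps described above rather than the exact code used.

```python
import json
from collections import defaultdict
from datetime import datetime

KEPT_EVENT_IDS = {1, 3, 5, 12, 13}   # process create/terminate, network, registry events

def load_process_sequences(jsonl_path):
    """Group newline-JSON events (from evtx_dump) by ProcessGuid and order them in time."""
    sequences = defaultdict(list)
    with open(jsonl_path) as f:
        for line in f:
            event = json.loads(line)
            event_id = event["Event"]["System"]["EventID"]
            if event_id not in KEPT_EVENT_IDS:
                continue
            data = event["Event"]["EventData"]
            guid = data.get("ProcessGuid")
            ts = datetime.strptime(data["UtcTime"], "%Y-%m-%d %H:%M:%S.%f")
            sequences[guid].append((ts, event_id, data))
    return {g: sorted(evts, key=lambda e: e[0]) for g, evts in sequences.items()}

def to_feature_vectors(events, hollows_hunter_counts):
    """Turn one process's event sequence into per-event vectors: one-hot event type,
    milliseconds since the first event, and the (repeated) Hollows Hunter counts."""
    t0 = events[0][0]
    vectors = []
    for ts, event_id, _ in events:
        one_hot = [1.0 if event_id == eid else 0.0 for eid in sorted(KEPT_EVENT_IDS)]
        rel_ms = (ts - t0).total_seconds() * 1000.0
        vectors.append(one_hot + [rel_ms] + list(hollows_hunter_counts))
    return vectors
```

Categorical attributes from the EventData fields would be one-hot encoded in the same way, with missing attributes filled as N/A before encoding.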
We also categorised our features based on the data source, i.e., LME and Hollows Hunter features.We conducted the process-level detection model experiments using two different sets of features. The first set of features includes a sequence of events generated by a process. The second set of features contains more detailed information, such as the features listed in Table <ref>. For the sake of brevity, we will refer to the first set of features as the Event-only feature set and the second set of features as the Complete feature set.§.§.§ Recurrent Neural Network process-level malware detection To identify malicious processes, we have implemented two types of recurrent neural network (RNN) models - a Long Short-Term Memory (LSTM) based model and a Gated Recurrent Unit (GRU) model. The reason behind using RNN models is that the data we have for our model can be represented as time series data and consists of varying lengths of events. Our primary objective is to create a baseline model that can be used for further research in this field. Using RNN models, we can capture the sequential patterns present in the data and accurately identify the malicious processes. We hope this model will help improve the accuracy and efficiency of identifying malicious processes.Our RNN-based model takes input in the form of a sequence of events, denoted as X = x_i | 0 ≤ i ≤ n. The sequence length is represented by n, and each event x_i is a vector that captures information about the event. In the Event-only features, we used one-hot encoding to represent each event as a vector with five elements, as our research only considered five types of events. On the other hand, in the Complete feature set, each event is represented as a vector with 32 elements, as listed in Table 6. This means that for each process, we have a time series of event vectors with one vector for each second. This allows our RNN-based model to capture the temporal dependencies between events and accurately identify malicious processes.After selecting the appropriate set of features, we proceeded to build a one-layer Recurrent Neural Network-based model for classifying the sequence of events. In this model, each time step of the recurrent layer takes the event features as input. The RNN model captures the temporal dependencies in the sequence of events and outputs a hidden state at the final time step. This hidden state is then passed through a linear transformation layer with a Sigmoid activation function. During the training phase, the model's parameters are adjusted using backpropagation, which optimizes the model's ability to classify the input data accurately. During the identification/testing phase, the model is used to identify whether a given process is malicious or not. If the model's output exceeds 0.5, the process is classified as malicious. § DETECTION MODEL PERFORMANCE This section discusses the result of the machine-level and process-level detection models explained in Section <ref>. We evaluated both approaches on a second-by-second basis; the model performance is measured every second such that we know when the models start making good decisions. All experiments were run on a PC with Core i7 10700 2.9 GHz, 32 GB of RAM, NVIDIA GeForce RTX 2060, NVIDIA CUDA 10.0, and CUDNN 8.During the data generation process, each machine generated a time-series system utilisation data. If the injected application was malware, the generated data were marked as malicious. 
By the end of the data generation process, we have a collection of time-series data for each machine and each application execution. We refer to the time-series data generated by a machine as machine activity and the sequence of events generated by a process as process activity - the latter being the novel element of the experimentation. True Positive (TP) represents the number of correctly classified malicious activities, and True Negative (TN) represents the number of correctly classified benign activities. At the same time, False Positive (FP) and False Negative (FN) represent the number of wrongly classified benign and malicious activities, respectively. We then measure theperformance by using accuracy, precision, recall, and F1-score which are calculated as in Equation <ref>, <ref>, <ref>, and <ref> respectively. Acc= +/+++ * 100Precision= /+Recall= /+F_1= 2*Precision * Recall/Precision+Recall We compared the machine-level detection model performance by running <cit.>'s model with our dataset which contains background noise as one of the most recent and best-performing model to detect such malware is the one developed by Rhode et al. The best practice for evaluating a machine learning approach is to have separate training and testing set. As the name implies, the training set is used to train the model, and the testing set is for evaluating the model's performance. In this experiment, we combined Set-0, Set-1, and Set-2 then split them into the training and testing sets with a ratio of 80:20.Table <ref> shows the result of the detection of both models for the first twenty seconds, with the last column being the result taken from the per-second result in <cit.>. As shown in Table <ref> and Figure <ref>, <cit.>'s result tends to be more accurate over time, particularly during the first five seconds. The results from our new experiment (the remaining columns) show that the model shows similar behaviour to <cit.> during the first five seconds. The accuracy sees an increase and then becomes relatively plateaued. The accuracy result is also confirmed by the other metrics (i.e., precision, recall, and F1-score) showing the same trend. However, our accuracy is always below <cit.>'s result. We argue that this decrease in performance is caused by the existence of benign applications running at the same time as the malware. The inclusion of additional benign samples injected into the virtual environment at the same time malware samples were executed is one key difference between our data set and <cit.>'s. The addition of multiple processes running in parallel, while more representative of real-world systems, clearly impacts the performance of the RNN approach - presenting a new research challenge of distinguishing between malicious and benign activity - where previous research tended to only inject malware for dynamic analysis - with no background noise. After experimenting with the machine-level data, we continue with the process-level data. We developed our detection model with LSTM and GRU. Both models used ADAM as the optimiser with a learning rate of 0.01. We set the loss function to binary cross entropy. The models were developed with Python 3.8.10 and PyTorch 0.2.0 library. Our dataset contains an imbalanced ratio of benign and malicious process events, with a greater number of benign events than malicious ones. To ensure balance between the two classes in our training and testing sets, we performed undersampling on the benign class. 
We randomly split the malicious samples with a ratio of 80:20. Then, we took the same number of benign samples for the training set and used the rest of the benign samples for the testing set. We did that because the proportion of benign and malicious samples is imbalanced. In summary, we have 420 malicious and benign samples for the training set and 105 malicious and benign samples for the testing set.We repeated all experiments ten times with randomly picked samples for the training and testing sets and averaged the results. Unlike the machine activity-level model, we pay more attention to the precision, recall, and F1 score as we would like to get more insight from the result. Precision measures the ratio of the correctly malicious detected samples to the number of samples being classified as malicious, as formulated in equation <ref>. Recall measures the number of malicious samples correctly detected as malicious, as formulated in equation <ref>. Some literature refers to recall as the detection rate. F1-score conveys the balance between precision and recall, as formulated in equation <ref>.As shown in Figure <ref>, the Complete feature set provides stable performance with an F1 score of 0.87, while the Event-only feature set starts with 0.65 and keeps increasing over time. The performance of the Event-only feature set stops increasing after fifteen seconds. However, the Event-only feature set has a lower false positive rate (FPR) (see Figure <ref> than the Complete Feature set. It stays below 0.1, while the FPR of the Complete set is greater than 0.2 despite the value decreasing over time. Figure <ref> also shows that the detection rate of the Complete feature set decreases over time. It is interesting because typically performance will increase when we have more data coming in.Should we compare the performance of the machine-level and the process-level with the Event-only feature set detection model (see Figure <ref>). We can see that the process-level detection model performs better than the machine-level one from the first second (see Figure <ref>). The machine-level performance never surpasses the process-level despite more data coming in.Another point worth noting is that the effect of Hollows Hunter features on the model's performance. In our dataset, only 101 out of 1200 samples have features extracted from Hollows Hunter. The F1-score between the model which considered Hollows Hunter features and the model which did not consider them only differs by 0.01 on average. Therefore, the added Hollows Hunter features do not seem to significantly affect the model's performance.And as can also be seen in Figure <ref>, using either LSTM or GRU does not give a significant impact on the performance. Both RNN-based models always provide similar F1-score. § ISSUES AND LIMITATIONS We faced several issues in our two experiments that need to be taken into account for future works, particularly if our work is going to be reproduced. We executed the data generation process on Cardiff University's Cyber Range which is based on Hynesim <cit.> and Qemu <cit.>. Most of these issues are related to the behaviour of the Qemu.All client machines were configured to be immutable, which means that all changes to the machine will be removed when it is shut down. Restarting the machine keeps the changes. The problem is Cuckoo restarts guest machines after each analysis. Therefore, we modified Cuckoo to shut down guest machines and turn them back on by sending an API request to the Cyber Range platform. 
From our experiments, the second step was not always successful. The shutdown process is a non-blocking process. There was a time when Cuckoo sent the API request before the machine entirely shut down, which caused the request to be ignored. To handle that problem, we then ran another script alongside the Cuckoo daemon to monitor the machine's state. The script sends another API request to turn on inactive machines.However, that script does not fully solve the issue as another issue arose when some virtual machines had been turned off and on many times. The Cyber Range failed to start some machines, and sometime later, all client machines failed to start, including other machines in the Cyber Range. When this happens, the only possible solution is to reset the platform, but it will stop and undefine all virtual machines. This issue might cause disturbance when many people are using the cyber range. Although the Cuckoo daemon can automatically continue from where it left off, as mentioned earlier, some samples were executed later than their counterparts.Another issue we faced during the data collection was duplicate ProcessGuid in the data we obtained from the LME. ProcessGuid is supposed to be a unique value that can be used to correlate events. According to Sysmon documentation <cit.>, the value is generated by combining machine GUID, process ID, and timestamp. However, we found out the root causes of this problem. The problem was caused by missing Windows updates (KB3033929 and KB4457144) which made Sysmon improperly generate zeros in the middle part of the ProcessGuid. § CONCLUSION AND FUTURE WORK This research has generated a malware activity dataset containing machine-level data (i.e., system utilisation) and process-level (i.e., LME and Hollows Hunter data). The data was generated in a small enterprise network-like environment to understand better how malware propagates across networks, as none of the previous research has considered it. We also experimented with detection models that are trained on machine-level and process-level data. The result from the machine-level detection model shows a performance drop (on average 20.12% in accuracy) compared to earlier work <cit.>. It shows that background applications may affect detection performance. Our RNN-based model with the process-level data provides better performance than the machine-level data; 0.049 average increase in detection rate andfalse-positive rate below 0.1. The detection performance keeps increasing significantly until we have seven seconds of process-level activities. The performance grows slower shortly afterwards. However, better feature engineering is still needed for future research in process-level malware detection.We can pursue several other directions as a follow-up of this research. We executed our malware samples by sending a sample to the analysis machine and waiting for 120 seconds. We only assumed the adversary would merely drop the malware into the victim's machine. In reality, the story might be more complex as there are usually several infiltration steps. The adversary may also make a lateral movement after the initial malware infection. This behaviour is not captured in our dataset. To have this kind of behaviour, we suggest using Mitre Caldera <cit.> to emulate adversarial activities.Another thing that could be improved is the way we run background applications. In our setup, we ran the background applications after the user logged in, and then there was no user interaction. 
We let the background applications stay idle. It would be more realistic to emulate user behaviour, e.g., typing in a Word document, browsing the internet, opening a PDF file and interacting with it. elsarticle-num
http://arxiv.org/abs/2310.18165v1
{ "authors": [ "Baskoro Adi Pratomo", "Toby Jackson", "Pete Burnap", "Andrew Hood", "Eirini Anthi" ], "categories": [ "cs.CR", "cs.LG" ], "primary_category": "cs.CR", "published": "20231027141735", "title": "Enhancing Enterprise Network Security: Comparing Machine-Level and Process-Level Analysis for Dynamic Malware Detection" }
Induced subdivisions in K_s,s-free graphs with polynomial average degree [ January 14, 2024 ======================================================================== Generative language models (LMs) are increasingly used for document class-prediction tasks and promise enormous improvements in cost and efficiency. Existing research often examines simple classification tasks, but the capability of LMs to classify on complex or specialized tasks is less well understood. We consider a highly complex task that is challenging even for humans: the classification of legal reasoning according to jurisprudential philosophy. Using a novel dataset of historical United States Supreme Court opinions annotated by a team of domain experts, we systematically test the performance of a variety of LMs. We find that generative models perform poorly when given instructions (i.e. prompts) equal to the instructions presented to human annotators through our codebook. Our strongest results derive from fine-tuning models on the annotated dataset; the best performing model is an in-domain model, LEGAL-BERT. We apply predictions from this fine-tuned model to study historical trends in jurisprudence, an exercise that both aligns with prominent qualitative historical accounts and points to areas of possible refinement in those accounts. Our findings generally sound a note of caution in the use of generative LMs on complex tasks without fine-tuning and point to the continued relevance of human annotation-intensive classification methods. § INTRODUCTION Academia and industry increasingly use generative language models (LMs) for document annotation and class-prediction tasks, which promise enormous improvements in cost and efficiency. However, research tends to focus on relatively simple and generic annotation contexts, such as topic or query-keyword relevance <cit.>. But many potential applications call for annotation or prediction of complex or specialized concepts, such as whether a writer reflects a particular school of thought. These questions may be difficult even to describe to a trained human annotator, much less apply. It is unclear if generative LMs perform well on this type of complex and specialized task.In this study we systematically examine the ability of large LMs to parse a construct that is difficult even for highly trained annotators: modes of legal reasoning. We consider two prominent modes of legal reasoning that judges employ as identified by legal historians, in addition to a null or non-interpretative class. Although the classes of legal reasoning identified by historians reflect relatively well-defined concepts, determining whether a particular document reflects a mode of reasoning can be exceptionally challenging. We suspect this is common to many high-value but specialized tasks, such as classifying complex emotional states or detecting indirect racial or gender bias. These tasks often require both abstract reasoning and specialized knowledge. Legal reasoning is a suitable setting for examining model performance on a highly complex classification task. The foundation of our research is a new dataset of thousands of paragraphs of historical Supreme Court opinions annotated by a team of upper-year students at a highly selective law school. We find that even the largest models perform poorly at the task without fine-tuning, even when using similar instructions as those given to human annotators. 
This finding suggests that LMs, even as augmented through few-shot or chain-of-thought prompting, may not be well-suited to complex or specialized classification tasks without task-specific fine-tuning. For such tasks, substantial annotation by domain experts remains a critical component.To demonstrate this point, we examine the performance of established to cutting-edge LMs when fine-tuned on our annotated data. Our results show strong performance for many of these fine-tuned models. Our analysis explores various approaches to model structure, such as a multi-class task versus serialized binary tasks, but we find that using an in-domain pre-trained model, LEGAL-BERT <cit.>, results in the highest performance for a task that requires specialized domain knowledge. The primary contributions of this paper are as follows: * We develop a new dataset of domain-expert annotations in a complex area.* We find that SOTA in-context generative models perform poorly on this task.* We show that various fine-tuned models have relatively strong performance.* We study the relationship between our best-performing model's predictions and the consensus historical periodization of judicial reasoning, finding both substantial convergence and opportunities for refinement in the historical accounts. In sum, our paper shows that in a complex and specialized domain, without fine-tuning, current generative models exhibit serious limitations; there is a continued need for domain-expert annotation, which can be effectively leveraged to unseen instances through fine-tuned models.[Code is available at: <https://github.com/rosthalken/legal-interpretation>] § RELATED WORK Researchers have developed strategies to guide LMs to perform complex tasks without the time and infrastructure costs of fine-tuning, often by breaking decisions down into multiple steps of reasoning. <cit.> use few-shot chain-of-thought (CoT) prompting to provide a model with examples of intermediary logic before making a decision. An alternative, zero-shot CoT also results in improved performance in certain tasks, as LMs are prompted to break down their reasoning (e.g. “let's think step by step”) <cit.>. Another procedure, Plan-and-Solve (PS) prompting, asks a model to devise and execute a plan for reasoning through a problem <cit.>.At certain tasks and with these prompting strategies, LMs perform annotation or classification tasks at the level of humans. Given the high costs (e.g., time, money, logistics) of collecting high-quality human-annotated data, recent work has suggested that annotation tasks previously performed by students, domain experts, or crowdsourced workers could be replicated with equal performance by LMs.Generative models perform well on query-keyword relevance tasks <cit.>; on topic detection in tweets <cit.>; or on detecting political affiliation in tweets <cit.>.<cit.> suggests that zero-shot and few-shot models are a legitimate alternative for stance detection because of the unreliability of human annotators due to the vast contextual information annotators may or may not draw from. In the legal domain, scholars examine classification performance of generative LMs on the type of case (e.g., contracts, immigration, etc) <cit.>, or on the court's use of a specific canon in statutory construction <cit.>.The range of applications for which generative LMs might adequately perform is an open question. We have found limited work that requires specialized knowledge in addition to the use of abstract reasoning skills. 
In this study, we ask the models to engage in precisely this form of reasoning, which is challenging even for domain-expert humans. What distinguishes this form of reasoning is that it requires the analyst to conceptualize abstract principles and determine whether a specialized, domain-specific example fits one of those concepts. This difficulty contrasts with simpler tasks, which may key off well-established associations in training data between concepts, such as political affiliation and word usage. § LEGAL REASONING Our focus is on legal reasoning involving statutory interpretation.[Elsewhere, some of us examine jurisprudence more broadly <cit.>.] In the United States, Congress writes statutes, but determining how statutes apply in individual cases is often left to the courts. Every year, the Supreme Court decides numerous cases of statutory interpretation, ranging from questions such as whether a tomato is a “vegetable” or a “fruit” for the purposes of import tariffs as in Nix v. Hedden,[149 U.S. 304 (1893).] to whether the Clean Air Act authorizes the Environmental Protection Agency to regulate greenhouse gases as in West Virginia v. EPA.[142 S. Ct. 2587 (2022).]Jurists adopt a wide range of approaches to interpreting statutes and engaging in legal reasoning more generally. A classic distinction is between what 20th century legal scholar Karl Llewellyn referred to as “formal” and “grand” styles of reasoning <cit.>. Grand reasoning refers to a form of legal reasoning that respects precedent but is characterized by “the on-going production and improvement of rules which make sense on their face” <cit.>. On interpretive questions, it therefore privileges work-ability, future orientation, and common-sense understand-ability. By contrast, formalism focuses not on the “policy” considerations of a law's consequences, but instead on its more mechanical application: “the rules of law are to decide the cases; policy is for the legislature, not for the courts... Opinions run in deductive form with an air or expression of single-line inevitability” <cit.>. Llewellyn's modes of legal reasoning apply more broadly than statutory interpretation. With respect to statutory interpretation specifically, under the grand style of reasoning “case-law statutes were construed `freely' to implement their purpose, the court commonly accepting the legislature's choice of policy and setting to work to implement it” <cit.>; showing his sympathies, under the formal style, Llewellyn wrote, “statutes tended to be limited or even eviscerated by wooden and literal reading, in a sort of long-drawn battle between a balky, stiff-necked, wrong-headed court and a legislature which had only words with which to drive that court” <cit.>. Though their terminology does not always follow Llewellyn, other legal scholars identify a similar primary distinction in legal reasoning. Horwitz, for instance, centers discussion on legal “orthodoxy,” which seeks to separate law from consequences and elevate “logical inexorability” <cit.>. Against orthodoxy, Horwitz identified a progressive critique, which “represented a broad attack on claims of Classical Legal Thought to be natural, neutral, and apolitical” <cit.>. Other prominent accounts follow Llewellyn's distinctions more explicitly <cit.>. 
Operative doctrines in important areas of law, moreover, reflect the broad schools of thought: e.g., the “rule of reason” in anti-trust, which involves holistically examining the pros and cons of conduct rather than a rule-like test under the Sherman Act, may be understood to reflect the socially-aware grand school of thought <cit.>.Our contribution focuses on this broad consensus around a key distinction in the modes of legal reasoning. On the one hand, a mode of reasoning that is innovative, open-ended, and oriented to social, political, and economic consequences of law; on the other hand, a mechanical, logic-oriented approach that conceives of the law as a closed and deductive system of reasoning. Though scholars differ on terminology, we follow Llewellyn and refer to these schools as Grand and Formal (Table <ref>).Not only does this basic conceptual consensus exist, but there is also rough consensus on periodization: that is, the periods of history in which each school was dominant. The “conventional” <cit.> view is that in the pre-Civil War period, the grand style dominated; in the period between the Civil War and World War I, the formal style dominated; the Grand school then dominated for much of the twentieth century <cit.>. The standard view is that we currently live in a period of formalism <cit.>. We use this periodization to validate our measure; but also use the measure to provide a nuanced account of historical trends in legal reasoning.§ DATA We use a dataset of 15,860 historical United States Supreme Court opinions likely involving statutory interpretation and issued between 1870 and 2014.[This set of cases may include decisions on the merits and orders. See Appendix <ref> for details on our opinion selection procedure. ] The raw data come from Harvard's Caselaw Access Project.[The raw data can be accessed here: <https://case.law/>. The Caselaw Access Project is not open-access but it grants unrestricted access to researchers.] Opinion text underwent minimal pre-processing, but all case citations were removed to reduce cognitive workload for the annotators.[To do this, we used thePython library to identify the occurrence of case citations and replace them with the token '/[CITE/]' <cit.>.] To create the dataset for annotation, we included only opinions that conduct statutory interpretation and then upsampled paragraphs likely to use formal or grand reasoning. The seed terms and details about this sampling procedure are included in Appendix <ref>. In the final collection, 25% of paragraphs include at least one formal seed, 25% include at least one grand seed, and the remaining 50% are randomly sampled. § HUMAN ANNOTATIONS FOR LEGAL REASONING A team of domain experts, four upper-year law students at a highly selective law school, annotated selections from court opinions as formal, grand, or lacking statutory interpretation. This team collaboratively developed and tested a codebook (included in Appendix <ref>) by iteratively annotating court opinions and calculating inter-rater reliability on a weekly basis over the spring 2023 semester.The annotation task asked each annotator to assign one of three labels, “formal,” “grand,” or “none,” to each paragraph. A fourth label, “low confidence,” could be added in addition to one of the three core labels if the type of reasoning was ambiguous. We calculated inter-rater reliability using Krippendorff's alpha to evaluate agreement between the four labelers and across the three main classes. 
This coefficient was calculated weekly and guided the decision of when to start collecting data for training. Paragraphs with high disagreement were discussed in depth and these discussions led to the revision of our codebook. We note that while this annotation is formally a three-way classification task, the low dimensionality of the output space does not imply that the task is easy. In fact, it took weeks for highly trained upper-year law students to reach a level of expertise at which they were able to reach consistent results.Inter-rater reliability increased after the introduction of a decision chart (Figure <ref>), which broke down decisions about each of the classes into a series of guided questions (Appendix <ref>). For each paragraph assigned the low confidence label, the team deliberated over possible labels until reaching a group decision through a majority vote. Training and evaluation data includes this resulting label for low confidence paragraphs; not the initial label. In total, excluding paragraphs prior to decent inter-rater reliability, 2748 paragraphs were labeled and included in the training and evaluation data. Even with the upsampling of legal interpretation based on seed terms, paragraphs that did not engage in legal interpretation or interpreted something other than a statute, our “none” class, made up 68% of the data (Table <ref>). Grand reasoning was the second most common label, and formal the least common. Only 101 of these paragraphs received the additional low confidence label, and the formal class was the most common class to receive the low confidence label. § AUTOMATED ANNOTATION WITH LMS Though each member of the annotation team was an upper-year law student who had completed highly relevant coursework, the task remained difficult for the human annotators, as reflected in the mid-range inter-rater reliability (0.63 Krippendorff's alpha). The abstract concepts of the modes of legal reasoning were clear, but determining whether specific instances reflected one mode or another required specialized knowledge and an ability to map those abstract concepts to the incomplete evidence in the paragraphs.The complexity of this task makes it challenging for a generative model prompted in-context or with CoT reasoning. As an initial experiment, we begin with a slightly simplified task: identifying whether a passage involves some form of legal reasoning (regardless of class). We then compare a larger variety of models on the primary task of interest: identifying instances of formal and grand legal reasoning. For both tasks, we compare the performance of in-context and fine-tuned models, with the expectation that identifying legal reasoning is more achievable for in-context models than identifying the specific formal or grand classes. Here, we test thresholds of task complexity, to better identify the point at which an annotated dataset for fine-tuning is needed; not just a carefully crafted prompt. §.§ Model Training and EvaluationIn both tasks, we compare the performance of a set of fine-tuned models to a set of prompted models. Models were chosen based on established usage, popularity, and accessibility (i.e. model size), since applied NLP researchers may be less likely to have access to the computing power needed for extremely large models. The fine-tuned models include BERT-base <cit.>, DistilBERT <cit.>, and T5-small and T5-base <cit.>. 
We include one in-domain model, LEGAL-BERT-base, that was pre-trained from scratch on United States and European Union legal corpora, including United States Supreme Court cases <cit.>. Models prompted to identify legal reasoning include GPT-4 <cit.>, FLAN-T5-large <cit.>, and Llama-2-Chat (7B) <cit.>. We created five random splits of the annotated data with 75% of the data in the training set and 25% of the data in the test segment. Models that were fine-tuned were fine-tuned over three epochs, with 50 warm-up steps, a learning rate of 2e-5, with a weight decay of 0.01.§.§ Identifying Legal Reasoning As a slightly simplified initial task, we begin by considering whether a model can detect instances in which some form of legal reasoning occurs (regardless of formal or grand reasoning). This remains a challenging task but is comparatively less complex than identifying the mode of reasoning. We consider any paragraph annotated as either formal or grand as being a paragraph where legal reasoning is present; this is a binary classification problem. We compare two procedures for identifying legal reasoning in text: * In-context generative identification based on a description of legal reasoning (prompt included in Appendix <ref>).* Fine-tuned binary classification based on hand-labeled annotations.All fine-tuned models perform relatively well on distinguishing paragraphs with legal reasoning from paragraphs without legal reasoning (Table <ref>). In comparison to these models, the zero-shot models prompted with a description of legal reasoning perform worse, and either over- or under-identify legal reasoning (e.g. high recall for the reasoning class but low precision). However, these models perform surprisingly well given the comparative workload behind each method: our fine-tuned models are built upon weeks of extensive labeling and discussion; the in-context models, only a prompt.§.§ Identifying Types of Legal ReasoningThe primary task requires additional specialized knowledge in the identification of specific classes of reasoning, formal and grand. This task also requires the identification of imbalanced classes, as formal reasoning was only identified in 11% of all annotated paragraphs.We test various assemblies of models and compare fine-tuning with prompting for identifying legal reasoning in text. Our approaches to prompting include the following:* In Context, Descriptions: An in-context prompt that provides the model with descriptions of the legal reasoning classes before asking for inference on new paragraphs (Figure <ref>). The descriptions used in this prompt are the same presented to the annotation team in the codebook. * In Context, Examples: An in-context prompt that provides the model with examples of the legal reasoning classes before asking for inference on new paragraphs (Appendix <ref>). The examples used in this prompt are the same presented to the annotation team in the codebook. * Chain-of-Thought: A CoT prompt that provides steps of reasoning to follow prior to determining the class of legal reasoning (Appendix <ref>). The steps used in this prompt derive from the decision chart provided to annotators.Each prompting strategy is derived from our codebook (see Appendix <ref>), which guided human annotators through data annotation. We do not exhaustively explore prompts beyond our codebook. Instead, we consider whether a reasonable prompt that is successful for humans works well for a model. 
While it is possible that another, as-yet-unknown, prompt could have provided better results, we know that the language in our codebook is sufficient to describe the task and the desired results. We contrast the results of the prompted generative models with the results from fine-tuned models. These models were fine-tuned with a variety of approaches, including: * Multi-Class: A fine-tuned multi-class classification based on hand-labeled annotations.* Nested: An assembly of models that breaks the classification task into nested binary stages. One model is fine-tuned to identify interpretation and another model to distinguish between grand and formal classes. The results from the first model are used by the second.§ MODEL PERFORMANCE We test the performance of all models on the same five test splits of data and find that the fine-tuned models consistently outperform the in-context models (Table <ref>). Our results suggest that even state-of-the-art LMs may not be a suitable replacement for human annotation on highly complex and specialized classification tasks.[Our reported results employ a user-role and the default temperature on GPT-4. We experimented with zero-ed out temperature setting and with adding a system prompt, but the results did not improve substantially. Additionally, for the Llama-2-Chat models we used the same prompts as the other models but added the Llama-2-Chat-specific formatting that is necessary for instructing this model.]Of all training or prompting procedures, models fine-tuned to perform multi-class classification tend to result in the highest performance. Out of all models, the best performing model is LEGAL-BERT, the one in-domain model included in this analysis. GPT-4 performs worse than all fine-tuned models, but has much better performance than Llama-2-Chat or FLAN-T5. Llama-2-Chat and FLAN-T5 greatly over-predict one of the three classes, and rarely predict the other two classes, making the recall for one class artificially high.[We inspect the generated text from these models and find that FLAN-T5 and GPT-4 often over-predict certain classes, but these models rarely hallucinate or return text beyond the requested class (e.g. “grand”). Unsurprisingly, Llama-2-Chat often returns additional text beyond the class label; we extract the class label (if it occurs) from the text and use that as the label for evaluation.] Also notable, the performance of the generative models on this more complex task is low compared to the simpler task of identifying whether some type of relevant legal reasoning occurs (Table <ref>). This is true both in absolute terms and relative to the in-domain, fine-tuned models. For instance, the macro F1 for GPT-4 on the simpler task is 0.46, 0.36 lower than the corresponding F1 for the in-domain, fine-tuned model. On this more complex task, the macro F1 for GPT-4 with descriptions is 0.22, 0.48 lower than the F1 for the in-domain, fine-tuned model.§ APPLICATION: PERIODS OF LEGAL REASONING The conventional wisdom among legal observers is that we currently live in a period in which the formal style of reasoning predominates <cit.>. Yet it has not always been this way: in other historical periods, the grand style of reasoning prevailed. Indeed, there is a rough consensus in the legal literature regarding historical periodization <cit.>. Writing in the mid-twentieth century, Llewellyn identified three periods of legal reasoning. 
Prior to the Civil War, the grand style of reasoning predominated; from the Civil War to World War I, the formal style of reasoning prevailed; and from World War I onward, courts again operated under the grand style of reasoning. More recently, scholars identify the 1980s as a critical point of transition towards formalism <cit.>. Other scholars identify fundamentally similar periodizations <cit.>, and though differences exist, it is possible to speak of a “conventional” view <cit.>. These historical characterizations arise from leading scholars reading judicial opinions and forming judgments through the use of their full faculties about the prevailing style of reasoning.Our data starts at Reconstruction (the period following the US Civil War) and allows us to examine the convergence between the scholarly consensus historical periodization and the historical periodization implied by our LM-derived results. We can also use our predictions to offer more granular assessments of the periods and potentially to adjudicate differences among the views of prominent scholars. This latter analysis is preliminary, in part, because earlier scholars examined judicial reasoning broadly, whereas our current analysis considers only Supreme Court opinions involving statutory interpretation.[We screen opinions for these predictions using the statutory interpretation filter identified in Appendix <ref>. <cit.> provide a more comprehensive LM analysis of historical trends in jurisprudence.]For this exercise, we study historical trends in the predictions from the highest performing model, multi-class, fine-tuned LEGAL-BERT. We examine yearly averages at both the paragraph level and the opinion level.[An opinion-level prediction represents the average of paragraphs in that opinion. If the number of paragraphs in opinions is not time invariant, historical trends in opinions may not be the same as trends in paragraphs.] We focus only on paragraphs that involve interpretation, and code paragraphs classified as “formal” with a 1 and paragraphs classified as “grand” with a 0. These yearly averages, therefore, reveal the proportion of interpretive paragraphs that classify as formal as opposed to grand. Figure <ref> plots the yearly averages over our series: the left panel (panel a.) shows the yearly average at the paragraph level, and the right panel (panel b.) aggregates paragraphs within documents to show the yearly average at the opinion level. Broadly understood, the historical trends in our predictions converge with the views of Llewellyn and other legal observers. That is, the period after the Civil War and before World War I was characterized by formal judicial reasoning; the mid-century was characterized by grand legal reasoning; and we now live in a period of formalist resurgence. The story is essentially the same at the paragraph or the document level.But our predictions also allow for a more granular assessment of the historical periods. To illustrate this, we use dashed vertical lines in Figure <ref> to denote important historical events: in 1905, the Supreme Court decided Lochner v. New York,[198 U.S. 45 (1905).] 
which some observers note as a highwater point for formalism; in 1937, a year of “judicial revolution,” in which the Supreme Court is widely viewed to have shifted its jurisprudence from opposition to acceptance of federal and state regulations;[This revolution is also known as the “switch in time that saved nine,” referring to the changed voting behavior of Justice Owen Roberts in response to the running threat by President Roosevelt to pack the Court.] and in 1981, as President Reagan's judicial appointments started to take office and a possible marker for the formalist revival.[Justice Scalia, for instance, was appointed by President Reagan to the Supreme Court in 1986, and is often viewed as the single most influential person in the rise of new formalism. For an account of that rise, see <cit.>.]To a striking degree, these historical markers correspond with changes in our metric of jurisprudence. Consistent with those who see Lochner as a high-water point for formalism, the prevalence of formalist reasoning declines after 1905. Likewise, we see a remarkable increase in the prevalence of grand reasoning in 1937. This pattern is consistent with a “judicial revolution” in jurisprudence to accommodate regulatory programs, the type of which had been earlier invalidated under formalist regimes. Finally, our measures recover a sharp increase in formalism in the 1980s, again consistent with the views of legal observers.These results represent some of the first long-run quantitative characterization of trends in jurisprudential philosophies. They both broadly support the qualitative characterizations of legal scholars and provide opportunities for refinement of legal theory and historical accounts.§ CONCLUSIONWe found that for a task involving abstract reasoning in addition to specialized domain-specific knowledge, it remains essential to have an annotated dataset created by domain experts. Although other work has shown that generative models are able to replicate annotation for complex tasks using carefully crafted prompts, we demonstrate that models fine-tuned on a sizable dataset of expert annotations perform better than models instructed to perform the task through in-context and CoT prompts. We recommend that researchers use caution when employing non-fine-tuned generative models to replicate complex tasks otherwise completed by humans or with human supervision. Best practices would call for human validation of generative model results and an assessment of cost-performance tradeoffs with respect to in-domain models. § LIMITATIONS A limitation of this study is the relatively low inter-rater reliability between annotators even after extensive training and conversation. This relatively low reliability results from the difficulty of the task and the inevitable ambiguity of some passages, especially when read out of case context. Another limitation relates to our prompting strategy: to make the in-context prompting more comparable to working with the team of annotators, we use the codebook descriptions and examples in the in-context prompts. Likely, these descriptions and examples could have been optimized for better model performance through additional prompt strategies, and our results for these models may depict lower performance than is possible. § ACKNOWLEDGMENTSWe thank our annotation team, including Houston Brown, Michael Demers, Sarah Engell, and Josiah Rutledge, for their extensive work creating this dataset. 
We also thank Zach Clopton, Vijay Karunamurthy, Gregory Yauney, Andrea Wang, Federica Bologna, Anna Choi, Rebecca Hicke, Kiara Liu, and participants at workshops at Cornell Law School and Cornell Computer Science Department for helpful comments. This work was supported by NSF #FMiTF-2019313 and NSF #1652536.acl_natbib § SELECTION OF PARAGRAPHS FOR LABELING We use the following seed terms to up-sample paragraphs that more likely reflect the interpretive approaches of interest.[These terms largely draw from earlier efforts by legal scholars to develop search terms that recover cases involving methods of statutory interpretation related to the formal and grand styles of jurisprudence. <cit.> helpfully collects many of these terms; see also related efforts <cit.>.]* Grand seeds: conference report, committee report, senate report, house report, assembly report, senate hearing, house hearing, assembly hearing, committee hearing, conference hearing, floor debate, legislative history, history of the legislation, conference committee, joint committee, senate committee, house committee, assembly committee, legislative purpose, congressional purpose, purpose of congress, purpose of the legislature, social, society* Formal seeds: dictionary, dictionarium, liguae britannicae, world book, funk & wagnalls, expressio, expresio, inclusio, noscitur a sociis, noscitur a socis, ejusdem generis, last antecedent, plain language, whole act, whole-act, whole code, whole-code, in pari materia, meaningful variation, consistent usage, surplusage, superfluit, plain meaning, ordinary meaning, word The selection of paragraphs to annotate occurred through a series of steps:* We include only opinions that perform statutory interpretation. We identify these opinions by finding opinions that include any of the tokens `statute', `legislation', or `act', within 200 characters of the tokens `mean', `constru' (i.e. construct), `interpret', `reading', or `understand'. * Opinions that pass the statutory interpretation filter were split into paragraphs. In each paragraph, we looked for the occurrence of different seed terms corresponding to either formal or grand reasoning. * Of the total number of paragraphs used for labeling, 25% included one or more formal seeds, 25% included one or more grand seeds, and 50% included none of the seed terms. This proportion remained the same until the last two rounds of labeling when more examples of formal or grand seeds were included. During those two rounds, the proportion of formal and grand seeds was increased to 40% for both classes.§ DECISION CHART A decision chart was created and provided to annotators between the fourth and fifth weeks of annotations (Figure <ref>). Following the incorporation of this decision chart, we saw boosted inter-rater reliability and more consistent agreement between annotators.§ PROMPTS We designed three prompting strategies to instruct LMs to identify legal interpretation and classes of legal interpretation in text. These prompts are included in Figures  <ref> and  <ref>. All prompts were modeled after our annotation codebook.§ CODEBOOK The codebook was iteratively created throughout the process of annotation to guide annotators. Table <ref> includes the final definitions of each class alongside core examples of each class.
http://arxiv.org/abs/2310.18440v1
{ "authors": [ "Rosamond Thalken", "Edward H. Stiglitz", "David Mimno", "Matthew Wilkens" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20231027192759", "title": "Modeling Legal Reasoning: LM Annotation at the Edge of Human Agreement" }
We use instanton gauge theory to prove that if Y is a closed, orientable 3-manifold such that H_1(Y;ℤ) is nontrivial and either 2-torsion or 3-torsion, and if Y is neither #^r ℝP^3 for some r ≥ 1 nor ± L(3,1), then there is an irreducible representation π_1(Y) → SL(2,ℂ). We apply this to show that the Kauffman bracket skein module of a non-prime 3-manifold has nontrivial torsion whenever two of the prime summands are different from ℝP^3, answering a conjecture of Przytycki (Kirby problem 1.92(F)) unless every summand but one is ℝP^3. As part of the proof in the 2-torsion case, we also show that if M is a compact, orientable 3-manifold with torus boundary whose rational longitude has order 2 in H_1(M), then M admits a degree-1 map onto the twisted I-bundle over the Klein bottle. § INTRODUCTION Given a 3-manifold Y, the SL(2,ℂ) character variety of the fundamental group π_1(Y) contains a lot of information about the geometry and topology of Y. For example, one can use the characters of irreducible representations to understand something about the hyperbolic structure on Y, if it exists, or to find incompressible surfaces in Y. Before doing so, however, it is natural to ask whether there are any irreducible representations in the first place. The third author recently used instanton gauge theory to say that in many cases, the answer is yes. Let Y be an integer homology 3-sphere. If Y is not homeomorphic to S^3, then there is an irreducible representation π_1(Y) → SL(2,ℂ). The main results of this paper extend Theorem <ref> to all manifolds Y for which H_1(Y;ℤ) is either 2-torsion or 3-torsion. Let Y be a closed, orientable 3-manifold with H_1(Y;ℤ) ≅ (ℤ/2)^r for some integer r ≥ 1. If Y is not homeomorphic to #^r ℝP^3, then there is an irreducible representation π_1(Y) → SL(2,ℂ). Let Y be a closed 3-manifold such that H_1(Y;ℤ) ≅ (ℤ/3)^r for some r ≥ 1. If Y is not homeomorphic to ± L(3,1), then there is an irreducible representation π_1(Y) → SL(2,ℂ). Although we will mostly describe applications of Theorem <ref>, it turns out that Theorem <ref> is slightly easier to prove. In fact, the analogous result when H_1(Y;ℤ) is p-torsion for an odd prime p follows from the case where H_1(Y;ℤ) is cyclic of order p; this is detailed in Theorems <ref> and <ref>. However, it is not always true that if H_1(Y;ℤ) ≅ ℤ/p, then either Y is a lens space or there must be a representation π_1(Y) → SL(2,ℂ) with non-abelian image; a construction due to Motegi <cit.> (see Remark <ref>) provides counterexamples for many primes, starting with p=37. In the following subsections we will provide some applications of Theorem <ref> to character varieties and skein modules of reducible 3-manifolds, and then we will give an outline of its proof. §.§ SL(2,ℂ) character varieties Given a 3-manifold Y, we can define its SL(2,ℂ) representation variety to be ℛ(Y) = Hom(π_1(Y), SL(2,ℂ)). We will say that Y is SL(2,ℂ)-reducible if every ρ ∈ ℛ(Y) is reducible, or SL(2,ℂ)-abelian if every ρ ∈ ℛ(Y) has abelian image. If Y is SL(2,ℂ)-abelian then it is SL(2,ℂ)-reducible, though the converse need not be true. The representation variety ℛ(Y) carries an action of SL(2,ℂ) by conjugation, and the SL(2,ℂ) character variety of Y is the GIT quotient 𝒳(Y) = ℛ(Y) // SL(2,ℂ).
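We pause to record the elementary observation behind the last two definitions; the homomorphism a: π_1(Y) → ℂ^* below is introduced only for this remark. A reducible representation ρ: π_1(Y) → SL(2,ℂ) preserves a line in ℂ^2, so after conjugating we may assume that every matrix ρ(γ) is upper triangular with diagonal entries a(γ) and a(γ)^-1, and hence
tr ρ(γ) = a(γ) + a(γ)^-1 = tr diag(a(γ), a(γ)^-1) for all γ ∈ π_1(Y).
Thus a reducible representation has the same character as the diagonal representation γ ↦ diag(a(γ), a(γ)^-1), whose image is abelian. Conversely, a representation with abelian image is always reducible, because the elements of an abelian subgroup of SL(2,ℂ) have a common eigenvector, whereas a reducible representation may still have non-abelian (upper triangular) image; this is why the converse mentioned above can fail.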
Culler and Shalen <cit.> showed that one can use ideal points of curves in 𝒳(Y) to find incompressible surfaces in Y. In the opposite direction, one can ask whether the existence of incompressible surfaces in Y forces dim_ℂ 𝒳(Y) to be positive, and Motegi <cit.> showed that this is not always true, but for essential spheres we have the following. Suppose that for i=1,2 there are representations ρ_i: π_1(Y_i) → SL(2,ℂ) whose images are not central (i.e., not contained in {±1}). Then dim_ℂ 𝒳(Y_1#Y_2) is positive. We write π_1(Y_1#Y_2) ≅ π_1(Y_1) ∗ π_1(Y_2) and consider the map SL(2,ℂ) → ℛ(Y_1#Y_2) given by A ↦ ρ_1 ∗ (Aρ_2A^-1). This has positive-dimensional image, even in the quotient 𝒳(Y_1#Y_2). Combining this observation with Theorem <ref>, we readily deduce the following. If Y_1 and Y_2 are closed, oriented 3-manifolds, and neither Y_1 nor Y_2 is homeomorphic to #^r ℝP^3 for any r ≥ 0, then dim_ℂ 𝒳(Y_1#Y_2) is positive. If Y_i ≇ #^r ℝP^3 for any r, then we can always find a representation ρ_i: π_1(Y_i) → SL(2,ℂ) with non-central image. Indeed, if H_1(Y_i;ℤ) is 2-torsion then Theorem <ref> applies; if it is not 2-torsion, then we can take ρ_i to factor through H_1(Y_i;ℤ) and send a summand of the form ℤ or ℤ/n (n ≥ 3) to a non-central subgroup of SL(2,ℂ). Now we apply Proposition <ref>. We remark that the condition Y_i ≇ #^r ℝP^3 is necessary in Theorem <ref>, because we have 𝒳(Y#ℝP^3) ≅ 𝒳(Y) × {±1} and so taking connected sums with ℝP^3 cannot change the dimension of 𝒳(Y). §.§ Skein modules The Kauffman bracket skein module, defined by Przytycki <cit.> and Turaev <cit.>, is a ℤ[A^±1]-module 𝒮(Y) associated to any oriented 3-manifold Y. Relatively little is known about the structure of this invariant in general; it was only recently proved by Gunningham, Jordan, and Safronov <cit.> that if Y is a closed, oriented 3-manifold then 𝒮(Y) is finite-dimensional over ℚ(A). Przytycki conjectured the following. If Y ≅ Y_1 # Y_2, where neither of the Y_i is homeomorphic to S^3 with some number of disjoint balls removed, then 𝒮(Y) has non-trivial torsion. By contrast, we know that 𝒮(S^3) ≅ ℤ[A^±1] is freely generated by the empty link <cit.>, while for lens spaces Hoste and Przytycki <cit.> showed that 𝒮(L(p,q)) is a free module on ⌊ p/2 ⌋+1 generators. (On the other hand, a non-separating S^2 always leads to torsion in 𝒮(Y), by a version of Dirac's belt trick <cit.>.) We remark that removing a ball from Y, or conversely filling in an S^2 component of ∂ Y with a ball, does not change 𝒮(Y) up to isomorphism <cit.>. We note the relevance of Theorem <ref> to Conjecture <ref> via work of Bullock <cit.>, who showed that if dim_ℂ 𝒳(Y) ≥ 1 then 𝒮(Y) is infinitely generated. Indeed, Przytycki <cit.> proved that Conjecture <ref> holds for a connected sum Y=Y_1#Y_2 if for each i, there is a representation ρ_i: π_1(Y_i) → SL(2,ℂ) with non-central image. (See also <cit.>.) We have therefore shown the following, exactly as in Theorem <ref>. Let Y be an oriented 3-manifold, and suppose that we can write Y ≅ Y_1 # Y_2 where neither Y_1 nor Y_2 is homeomorphic to some #^r ℝP^3 (r ≥ 0) minus a disjoint union of balls. Then 𝒮(Y) has non-trivial torsion. In particular, the following conjecture would now imply Conjecture <ref>. Suppose that Y is a closed oriented 3-manifold that is not homeomorphic to S^3. Then 𝒮(Y#ℝP^3) has non-trivial torsion. We note that at least the case Y = ℝP^3 of Conjecture <ref> is known: the Kauffman bracket skein module of ℝP^3#ℝP^3 was completely determined by Mroczkowski <cit.>, who showed in <cit.> that 𝒮(ℝP^3#ℝP^3) does contain torsion.
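Although we will not need the precise definition of 𝒮(Y) elsewhere in this introduction, we recall it here for the reader's convenience, following the references above; the symbols L, L_0, L_∞, and U are used only in this paragraph. The module 𝒮(Y) is the quotient of the free ℤ[A^±1]-module spanned by isotopy classes of framed, unoriented links in Y (including the empty link) by the submodule generated by the Kauffman bracket relations
L = A L_0 + A^-1 L_∞ and L ⊔ U = (-A^2 - A^-2) L,
where L, L_0, and L_∞ are links that agree outside a small ball, inside of which they form a crossing and its two smoothings, and U is a trivially framed unknot bounding a disk disjoint from L.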
§.§ Outline of the proof of Theorem <ref> Just as for SL(2,ℂ), we will say that Y is SU(2)-abelian if every ρ: π_1(Y) → SU(2) has abelian image; in contrast with the SL(2,ℂ) case, this is the same as being SU(2)-reducible. We will use gauge theory to show that many of the 3-manifolds under consideration are not SU(2)-abelian, which means that they are not SU(2)-reducible and hence not SL(2,ℂ)-reducible either. With this in mind, we let Y be an SL(2,ℂ)-reducible 3-manifold, and we suppose that H_1(Y;ℤ) ≅ (ℤ/2)^r for some r ≥ 0. We can assume without loss of generality that Y is prime, since otherwise each of its summands is also SL(2,ℂ)-reducible with 2-torsion homology. Theorem <ref> follows quickly for several large classes of 3-manifolds: Thurston proved that closed hyperbolic 3-manifolds are never SL(2,ℂ)-reducible <cit.>, and among the prime Seifert fibered 3-manifolds with 2-torsion homology, work of the second and third author <cit.> implies that only ℝP^3 is SU(2)-abelian. Thus if Y ≇ ℝP^3 is SL(2,ℂ)-reducible, then we use the geometrization theorem to conclude that Y contains an incompressible torus, and this torus must be separating since b_1(Y) = 0. We now decompose Y along this torus T, writing Y = M_1 ∪_T M_2, where each M_j is compact and irreducible with incompressible torus boundary. Then we can write the SU(2) representation variety of Y as a fiber product R(Y) = R(M_1) ×_R(T) R(M_2), so it suffices to find representations ρ_j: π_1(M_j) → SU(2) for j=1,2 whose restrictions to π_1(T) coincide. In fact, we need only find these up to conjugation, so we end up studying the images of the restriction maps i_j^*: X(M_j) → X(T) from the respective SU(2) character varieties to the SU(2) character variety of the torus, known as the pillowcase orbifold. We will generally aim to show that these images intersect, since the points of intersection correspond to representations π_1(Y) → SU(2); if at such a point one of the corresponding representations of π_1(M_j) is irreducible, then the resulting representation of π_1(Y) will also be irreducible, as desired. Each M_j comes equipped with a distinguished peripheral curve up to orientation, namely the rational longitude λ_j: this generates the kernel of the inclusion map H_1(∂ M_j;ℚ) → H_1(M_j;ℚ), but may be either zero or torsion in H_1(M_j;ℤ). If λ_2 is nullhomologous in M_2 then there is a standard degree-1 map that pinches M_2 onto a solid torus (see Proposition <ref>), and hence there is a degree-1 map Y → M_1(λ_2) onto the Dehn filling of M_1 along the slope λ_2 ⊂ T. This induces a surjection π_1(Y) → π_1(M_1(λ_2)), so if Y is SL(2,ℂ)-reducible then M_1(λ_2) must be as well. Similarly, if [λ_1] = 0 in H_1(M_1;ℤ) then we deduce that M_2(λ_1) is also SL(2,ℂ)-reducible. By choosing an appropriate Dehn filling of M_j we may write it as the complement of a nullhomologous knot K_j in a closed 3-manifold Y_j, with meridian μ_j ⊂ ∂ M_j, such that each H_1(Y_j;ℤ) is 2-torsion and one of the following applies:
* both of the K_j are nullhomologous, with longitudes λ_j, and we glue ∂ M_1 to ∂ M_2 so that
  * μ_1 ∼ λ_2 and λ_1 ∼ μ_2, or
  * μ_1 ∼ μ_2^-1 and λ_1 ∼ μ_2^2 λ_2;
* or without loss of generality [λ_1] is 2-torsion in H_1(M_1;ℤ), and then [λ_2] = 0 in H_1(M_2;ℤ). In this case we glue ∂ M_1 to ∂ M_2 so that μ_1 ∼ λ_2 and λ_1 ∼ μ_2.
(This list is shown to be exhaustive in <ref> and <ref>.) Cases <ref> and <ref> are handled similarly, so we will summarize the arguments in cases <ref>, <ref>, and <ref> in that order below.
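Before turning to the individual cases, we recall the standard description of the pillowcase; the generators x, y and the angles α, β below are used only in this remark. Since commuting elements of SU(2) lie in a common maximal torus, every representation π_1(T) ≅ ℤ^2 → SU(2) is conjugate to one sending fixed generators x, y of π_1(T) to diag(e^iα, e^-iα) and diag(e^iβ, e^-iβ), and its character determines and is determined by the pair (α, β) up to simultaneous sign change. Thus
X(T) ≅ (ℝ^2 / 2πℤ^2) / ((α, β) ∼ (-α, -β)),
which is a sphere with four orbifold points of order 2; this quotient is the pillowcase.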
Case <ref> (Theorem <ref>): The above discussion says that each ofY_1= M_1(μ_1) = M_1(λ_2), Y_2= M_2(μ_2) = M_2(λ_1)is (2,)-reducible.In particular we can follow work of Lidman, Pinzón-Caicedo, and the third author <cit.> essentially verbatim to construct an irreducible representation ρ: π_1(Y) →(2), giving a contradiction.The rough idea is that by using work of <cit.>, we know that each of the images i_j^*(X(M_j)) must contain a closed essential curve in the twice-punctured pillowcase, and the gluing map guarantees that these two curves will intersect.The only change from <cit.> is that we replace Floer's instanton homology for homology 3-spheres with the irreducible instanton homology of each Y_j.This invariant is generated as a complex by gauge equivalence classes of irreducible flat connections on the trivial (2)-bundle P → Y_j, and the theory works in exactly the same way when H_1(Y_j;) is 2-torsion, because the reducible flat connections on P all have central holonomy.See <ref> for further discussion. Case <ref> (Theorem <ref>): This is similar to case <ref>, but a priori we do not know that Y_2 is (2,)-abelian: we cannot pinch M_1 onto a solid torus, because the class [λ_1] ∈ H_1(M_1;) is 2-torsion rather than zero.In <ref>, we construct a replacement that should be of independent interest. Let M be a compact, oriented 3-manifold with torus boundary, and suppose that the rational longitude λ_M ⊂∂ M has order 2 in H_1(M).Then there is a degree-1 mapf: M → N,where N is the twisted I-bundle over the Klein bottle, such that f restricts to a homeomorphism ∂ M →∂ N sending λ_M to a rational longitude λ_N ⊂∂ N.Using Proposition <ref>, we see that N ∪_T M_2 is (2)-abelian if Y is; this is enough to deduce that Y_2 is (2)-abelian and understand the image i_2^*(X(M_2)) ⊂ X(T) exactly as in case <ref>.This leads us to an irreducible (2) representation of π_1(N ∪_T M_2), and hence of π_1(Y). 
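To spell out the very last step (this elaboration is ours and is included only to make the outline self-contained; the map q below is our name for the homomorphism induced by the degree-1 pinch): a degree-one map of closed, oriented 3-manifolds induces a surjection on fundamental groups, and composing a representation with a surjection does not change its image. Thus
\[
\pi_1(Y) \overset{q}{\twoheadrightarrow} \pi_1(N \cup_T M_2) \overset{\rho}{\longrightarrow} \mathrm{SU}(2),
\qquad \operatorname{im}(\rho \circ q) = \operatorname{im}(\rho),
\]
so any irreducible ρ of π_1(N ∪_T M_2) pulls back to an irreducible representation of π_1(Y).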
Case <ref> (Theorem <ref>): Here the λ_j are both nullhomologous again, but the analogous degree-1 maps from Y have targets (Y_j)_2(K_j) rather than Y_j.This means that the 2-surgeries on K_j are (2,)-reducible, and if one of them is toroidal then we may replace Y with it and repeat.We apply the following theorem of Rong <cit.> to say that this process must terminate after finitely many iterations: Suppose we have an infinite sequence of closed, oriented 3-manifolds and degree-1 maps between them, of the formM_1M_2M_3 ⋯.Then the map f_i is a homotopy equivalence for all sufficiently large i.(We omit Rong's hypothesis that the M_i belong to a set 𝒢_c of 3-manifolds satisfying the geometrization conjecture, as this is now a theorem.)Thus we can freely assume that the (2,)-reducible 2-surgeries are atoroidal.Now we must have (Y_j)_2(K_j) ≅#^n_j^3 for some n_j.With this simplification at hand, we prove in Theorem <ref> that such Y cannot be (2)-abelian.The key idea is to examine the subsetR'_j = {ρ: π_1(Y_j ∖ N(K_j)) →(2) |ρ(μ_j^2λ_j) = -1 }of each representation variety R(M_j).These ρ do not descend to representations ofπ_1( (Y_j)_2(K_j) ) ≅π_1( #^n_j^3 ) ≅ (/2)^∗ n_j,but their adjoint representations do, and we can understand the representation variety ( (/2)^∗ n_j, (3) ) explicitly enough to see that each path component of R'_j contains an abelian representation.This tells us in Proposition <ref> that for each j, the image i_j^*(X(M_j)) ⊂ X(T) meets the line corresponding to the condition ρ(μ_j^2λ_j) = -1 in a connected arc.It also contains an essential closed curve as in the previous cases, as well as the image of this curve under an involution of X(T).All of this ensures that the images i_j^*(X(M_j)) are too large to avoid each other, and where they intersect we get an irreducible representation after all, completing the proof. 
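For the reader's convenience, here is the slope bookkeeping behind the claim that the degree-1 maps in this case land in the 2-surgeries; this unwinding of the gluing data is ours, with μ_j and λ_j as above. From μ_1 ∼ μ_2^{-1} and λ_1 ∼ μ_2^2λ_2 we get
\[
\lambda_2 \;\sim\; \mu_2^{-2}\lambda_1 \;\sim\; \mu_1^{2}\lambda_1,
\qquad
\lambda_1 \;\sim\; \mu_2^{2}\lambda_2,
\]
so pinching M_2 onto a solid torus collapses Y onto the Dehn filling M_1(μ_1^2λ_1) = (Y_1)_2(K_1), and pinching M_1 instead collapses Y onto M_2(μ_2^2λ_2) = (Y_2)_2(K_2).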
If Y is toroidal and H_1(Y;) is 2-torsion then one might expect there to be an irreducible representation π_1(Y) →(2), as shown for homology spheres in <cit.>, but we do not prove this here. The issue is that in case <ref>, we only get an (2) representation once we have reduced to the case where π_1( (Y_j)_2(K_j) ) is generated by elements of order 2. The reduction process gets stuck if (Y_j)_2(K_j) is hyperbolic: in this case the degree-1 map Y → (Y_j)_2(K_j) tells us that Y is not (2,)-reducible and we stop there, but we cannot conclude that Y is not (2)-abelian because we do not know whether (Y_j)_2(K_j) is.

The proof of Theorem <ref>, carried out in Sections <ref> and <ref>, is similar enough to that of Theorem <ref> that we will not outline it here. We note only that it can be reduced to an analogue of Case <ref>, namely Theorem <ref>, whose analysis is simpler because (unlike #^r ^3) a nontrivial connected sum of order-3 lens spaces is not (2,)-reducible.

§.§ Organization

In Section <ref> we discuss the needed background from instanton Floer homology, including some non-vanishing results; this includes a generalization of the usual surgery exact triangle, Theorem <ref>, the details of which we postpone to Appendix <ref>. Then in Section <ref> we use this to investigate the (2) character varieties of knot complements and their images in the pillowcase, the (2) character variety of T^2. In Section <ref> we study the twisted I-bundle over the Klein bottle in depth, and construct “pinch” maps onto it from knot manifolds with rational longitudes of order 2 (Proposition <ref>). Building on this, Sections <ref> and <ref> prove the existence of irreducible (2)-representations for various toroidal 3-manifolds built by gluing together the complements of knots in 3-manifolds whose homology is 2-torsion; these are Theorems <ref>, <ref>, and <ref>, respectively. Notably, in Subsection <ref> we apply the pinch maps of Proposition <ref> to study the case where one of the knots has a homologically essential rational longitude. Finally, in Section <ref> we prove that if Y is a toroidal 3-manifold whose first homology is p-torsion for some prime p, then Y can be decomposed into a union of knot complements in one of a few standard ways; when p=2 these are precisely the forms studied in Sections <ref> and <ref>. This allows us to complete the proof of Theorem <ref>, which we do in Section <ref>. In Section <ref> we study the case where H_1(Y) is instead p-torsion for some odd prime p, and we conclude by applying this in Section <ref> to prove Theorem <ref>.
§.§ Acknowledgments We thank Ali Daemi, Tye Lidman, and Mike Miller Eismeier for helpful conversations about the irreducible instanton homology of 3-manifolds Y such that H_1(Y) is 2-torsion.We also thank Rhea Palak Bakshi and Renaud Detcherry for discussions about the relation between character varieties and torsion in skein modules.We are grateful to the Max Planck Institute for Mathematics for hosting all three of us for the bulk of this work.§ INSTANTON FLOER HOMOLOGYLet I_*(Y) denote Floer's instanton homology <cit.>, associated to any integer homology 3-sphere, and I^w_*(Y) the variant assigned to a Hermitian line bundle w → Y such that c_1(w) has odd evaluation on some homology class.(This means that we fix a U(2)-bundle E → Y and an isomorphism ∧^2E ≅ w, and let I^w_*(Y) be the (3) instanton homology of the admissible bundle (E) → Y, following <cit.>.)Kronheimer and Mrowka proved the following important result en route to their proof of the property P conjecture.Let K ⊂ S^3 be a nontrivial knot.Then I^w_*(S^3_0(K)) ≠ 0. Here, we cap off a Seifert surface Σ for K to get a closed surface Σ̂ generating H_1(Y_0(K)) ≅, and then we let w → Y_0(K) be the unique non-trivial line bundle with ⟨ c_1(w), [Σ̂]⟩ = 1.(In general we will write w for both the line bundle and its first Chern class, and this should not cause any confusion.)In fact, we have the following more general nonvanishing result.Let Y be an irreducible 3-manifold, and suppose that there is a line bundle w → Y and an embedded surface R ⊂ Y such that w· R = 1.Then I^w_*(Y) ≠ 0. Theorem <ref> was crucial in the proof of Theorem <ref> in <cit.>.Lidman, Pinzón-Caicedo, and Zentner <cit.> used Theorem <ref> to generalize this to knots K ⊂ Y with irreducible, boundary-incompressible exterior, proving for such K that if Y is an integer homology sphere with I_*(Y) = 0 then I^w_*(Y_0(K)) ≠ 0, and they used this to produce (2) representations for toroidal homology spheres.We would like to further generalize it to other Y in order to prove Theorem <ref>, but the problem is that I_*(Y) does not make sense unless Y is a homology sphere.In this section we will discuss a version of Floer's instanton homology for 3-manifolds whose first homology is 2-torsion.This does not seem to have appeared explicitly in the literature in this form, but we make no claim of originality here; these ideas appear in recent work of Daemi, Lidman, and Miller Eismeier <cit.>, and have been elaborated on in much greater detail by Daemi and Miller Eismeier <cit.> under the name of irreducible instanton homology.By way of motivation, Floer originally defined I_*(Y) for a homology sphere Y <cit.> in terms of a chain complex generated by gauge equivalence classes of irreducible flat connections on the trivial (2)-bundle P → Y.This construction ignores the trivial connection θ completely, except as a way of lifting the relative /8 grading to an absolute one.The reason that we can safely omit θ is that it has central holonomy.As an example of why this matters, we recall that d^2 = 0 because the matrix coefficients ⟨ d^2 a, b⟩ count pairs of rigid flowlines, meaning ASD connections on × P belonging to 0-dimensional moduli spaces, of the form(A_1,A_2) ∈_0(a,c) ×_0(c,b)as c ranges over generators of the chain complex.This count is meant to equal the number of points in the boundary of the compactification of a 1-dimensional moduli space _1(a,b), which is zero, and indeed this is the case as long as no sequence in _1(a,b) limits to a broken flowline a →θ→ b that breaks at 
the omitted connection θ.We rule out this problematic case by observing that rigid flowlines a →θ and θ→ b can only be glued into a moduli space (a,b) of dimension at least 4, because the gluing map in this case involves an extra (3) factor coming from the isotropy group of θ.The proof that I_*(Y) is an invariant similarly only relies on the fact that θ is central.Having said this, we can define I_*(Y) for manifolds with H_1(Y;) ≅ (/2)^r for some r ≥ 0, by repeating the material in <cit.> essentially verbatim.We need only observe that all of the reducible flat connections on P have central holonomy, due to the fact that every homomorphism (/2)^r →(2) has image in the center {±1}.Let Y be a closed, oriented rational homology 3-sphere, and suppose that H_1(Y;) is 2-torsion.Then there is an irreducible instanton homology groupI_*(Y)defined exactly as in <cit.>.It is the homology of a chain complex whose generators are gauge equivalence classes of irreducible flat connections on the trivial (2)-bundle P→ Y, and its differential counts anti-self-dual connections on the product × P →× Y.For other rational homology spheres the story is much more complicated, and we will say nothing more about it here.See <cit.> for details. The key property we will need from irreducible instanton homology is a surgery exact triangle, which goes back to Floer <cit.> for the case of knots in homology spheres.Let Y be a closed, oriented 3-manifold such that H_1(Y;) is 2-torsion, and let K ⊂ Y be a nullhomologous knot.Then there is an exact triangle⋯→ I_*(Y) → I^w_*(Y_0(K)) → I_*(Y_1(K)) →⋯,where the Hermitian line bundle w → Y_0(K) has c_1(w) Poincaré dual to a meridian of K. We note in Theorem <ref> that H_1(Y) ≅ H_1(Y_1(K)) is 2-torsion, so Theorem <ref> says that all of the groups in the exact triangle are well-defined.The proof of Theorem <ref> follows an argument given by Scaduto in <cit.>, after one checks that the relevant compactifications of moduli spaces do not include broken flowlines with reducible connections in the middle.We discuss the details in Appendix <ref>.Let Y be a closed, oriented 3-manifold such that H_1(Y;) is 2-torsion, and let K ⊂ Y be a nullhomologous knot.For any n∈, there is an exact triangle⋯ I_*(Y_1/n(K)) → I^w_*(Y_0(K)) → I_*(Y_1/(n+1)(K)) →⋯,where the Hermitian line bundle w → Y_0(K) has c_1(w) Poincaré dual to a meridian of K.We let Y' = Y_1/n(K), with K' ⊂ Y' the core of this surgery.Then H_1(Y') ≅ H_1(Y) is 2-torsion, and 1-surgery on K' is the same as Dehn filling the exterior of K' along the curve μ_K'λ_K' = (μ_Kλ_K^n)(λ_K) = μ_Kλ_K^n+1, which produces Y_1/(n+1)(K).The desired triangle is thus the result of applying Theorem <ref> to the pair (Y',K'). 
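As a quick sanity check on how these triangles get used (a routine consequence of exactness, recorded here for convenience rather than as an additional result): whenever the two outer groups vanish, the triangle collapses to an isomorphism. For example, if I_*(Y) = 0 — as happens for Y = S^3, where π_1 is trivial and there are no irreducible flat connections at all — then exactness of
\[
0 = I_*(Y) \longrightarrow I^{w}_*(Y_0(K)) \longrightarrow I_*(Y_1(K)) \longrightarrow I_*(Y) = 0
\]
forces I^w_*(Y_0(K)) ≅ I_*(Y_1(K)) as ungraded modules. The arguments below exploit exactly this pattern, propagating vanishing results around the triangle.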
With all of this at hand, we can now provide the desired generalization of <cit.>.Let Y be a closed, orientable, (2)-abelian 3-manifold, and suppose that H_1(Y;) is 2-torsion.Let K ⊂ Y be a nullhomologous knot with irreducible, boundary-incompressible exterior.Then I^w_*(Y_0(K)) ≠ 0, where w is Poincaré dual to a meridian of K.We repeat the proof of <cit.> verbatim, including the details here for convenience.Since Y is (2)-abelian, there are no irreducible flat connections on the product (2)-bundle over Y, so I_*(Y) = 0.Supposing that I^w_*(Y_0(K)) = 0 as well, we apply Theorem <ref> to get I_*(Y_1(K)) = 0, and then Theorem <ref> with n=1,2,3 in succession to getI_*(Y_1/2(K)) = I_*(Y_1/3(K)) = I_*(Y_1/4(K)) = 0.Then Gordon <cit.> showed that Y_1/4(K) ≅ Y_1(K_2,1), where K_2,1 denotes the (2,1)-cable of K, so we apply Theorem <ref> to getI^w_*(Y_0(K_2,1)) = 0.But Y_0(K_2,1) is irreducible, because it can be built by gluing two irreducible 3-manifolds – the exterior of K and the 0-surgery on the (2,1)-cable knot in S^1× D^2 – along their incompressible boundaries.Thus I^w_*(Y_0(K_2,1)) is nonzero, by Theorem <ref>, and we have a contradiction.We conclude that I^w_*(Y_0(K)) ≠ 0 after all. § CLOSED CURVES IN THE PILLOWCASE §.§ The pillowcase Here we review basic facts about the pillowcase, following <cit.>.Given a manifold Y, we define its (2)-representation variety R(Y) = (π_1(Y), (2)),and let R^(Y) denote the subspace consisting of irreducible representations.(We recall that an (2) representation is irreducible if and only if its image is non-abelian.)These both carry an action of (2) by conjugation, and we define the character varietiesX(Y)= R(Y) / (2), X^(Y)= R^(Y) / (2)as the quotients by this action. Note that we use a plain font (R, X) for the (2) representation and character varieties, in contrast to the calligraphicandfor their (2,) counterparts.If K is a nullhomologous knot in a 3-manifold Y, with exterior E_K = Y ∖ N(K), then the inclusion i: ∂ E(K) ↪ E(K) induces a mapi^*: X(E_K) → X(∂ E_K) ≅ X(T^2).Letting μ,λ be a meridian–longitude basis of π_1(∂ E_K), every representation ρ of either π_1(E_K) or π_1(T^2) is conjugate to one in whichρ(μ)= [e^iα 0; 0 e^-iα ],ρ(λ)= [e^iβ 0; 0 e^-iβ ],for some α, β∈/2π, and these coordinates are almost unique: the only ambiguity is that the representations corresponding to (α,β) and (-α,-β) are conjugate to each other.Thus the pair μ,λ leads to an identificationX(T^2) = (/2π) × (/2π)/(α,β) ∼ (-α,-β),and this quotient orbifold is called the pillowcase.See Figure <ref> for an example.The following is one of the key technical results of <cit.>, though it is only applied there to non-trivial knots in S^3.Let K be a nullhomologous knot in a 3-manifold Y, and let w ∈ H^2(Y_0(K);) be Poincaré dual to a meridian of K.Suppose that I^w_*(Y_0(K)) ≠ 0, and that the pillowcase image i^*(X(E_K)) does not contain the pointsP = (0,π),Q = (π,π) ∈ X(T^2).Then there is a topologically embedded curve C ⊂ i^*(X(E_K)) that is homologically essential inX(T^2) ∖{P,Q}≅ (0,1) × S^1. 
This is proved in <cit.>; we sketch the argument here.We first observe that I^w_*(Y_0(K)) is generated as a chain complex by gauge equivalence classes of flat connections on the associated (3) bundle over Y_0(K) that do lift to (2) connections over E_K, but that do not lift over all of Y_0(K) because the lifted connections over E_K have holonomy -1 along λ.Equivalently, these are conjugacy classes of representationsρ: π_1(E_K) →(2)such that ρ(λ) = -1.Thus the complex used to define I^w_*(Y_0(K)) is generated by the points of X(E_K) whose images lie on the line segment L_π = {β≡π2π} in the pillowcase.The next step is to show as in <cit.> that ifγ: [0,1] → X(T^2)is a topologically embedded path from γ(0)=P to γ(1)=Q that avoids the line L_0 = {β≡ 02π}, then γ intersects the image i^*(X(E_K)). Now the chain complex for I^w_*(Y_0(K)) is generated by the intersection of i^*(X(E_K)) with one such path, namely the line L_π.Supposing we have another such path γ that avoids i^*(X(E_K)) completely, then since i^*(X(E_K)) is compact it must actually be disjoint from an open neighborhood U of this path.Now I^w_*(Y_0(K)) is defined using a certain Chern–Simons functional, and <cit.> says that we can modify it using holonomy perturbations so that I^w_*(Y_0(K)) is instead defined by the intersection of i^*(X(E_K)) with a path that is arbitrarily C^0-close to γ.We take this path to lie in U, and then the intersection is empty, so I^w_*(Y_0(K)) is the homology of the zero complex and this is a contradiction.So every such γ must intersect i^*(X(E_K)).Now just as in the proof of <cit.> we know that Γ = i^*(X(E_K)) is an embedded finite graph in the pillowcase X(T^2) ≅ S^2.The graph Γ contains the entire line L_0 = {β≡ 02π}, as the image of the reducible characters of π_1(E_K), but by assumption it contains neither P nor Q, so the above argument says that P and Q lie in different components of the complement S^2 ∖Γ.We use <cit.> to conclude that Γ contains a topologically embedded, homologically essential curve in S^2 ∖{P,Q}. 
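For reference, we record the standard verification (included here for convenience; the matrix w below is just the Weyl element, in our notation) that the identification (α,β) ∼ (-α,-β) in the pillowcase coordinates above is induced by conjugation:
\[
w = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix} \in \mathrm{SU}(2),
\qquad
w \begin{pmatrix} e^{i\alpha} & 0 \\ 0 & e^{-i\alpha} \end{pmatrix} w^{-1}
= \begin{pmatrix} e^{-i\alpha} & 0 \\ 0 & e^{i\alpha} \end{pmatrix},
\]
and the same conjugation simultaneously negates β. Conjugation by diagonal matrices fixes both coordinates, so this is the only residual ambiguity in (α,β).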
The following lemma will be useful in conjunction with Proposition <ref>, in order to understand when the essential curve C can pass through the corners of the pillowcase.Suppose that H_1(Y;) is 2-torsion, and let K ⊂ Y be a nullhomologous knot.If either (0,0) or (π,0) is a limit point of the image i^*(X^(E_K)), then there must be a representation ρ: π_1(E_K) →(2) with non-abelian image such that ρ(μ) = ρ(λ) = 1.In particular, neither Y nor any Dehn surgery Y_p/q(K) is (2)-abelian.Suppose we have a sequence of irreducible representations ρ_n: π_1(E_K) →(2) such that the imagesi^*([ρ_n]) = (α_n,β_n) ∈ X(T^2)converge to either (0,0) or (π,0).If their limit is (π,0), then since H_1(E_K) ≅ H_1(Y) ⊕ with thesummand generated by μ, we can define a characterχ: π_1(E_K) ↠ H_1(E_K) →{± 1}which sends H_1(Y) to +1 and μ to -1, and thus has central image.In particular eachρ_n' = χ·ρ_n: π_1(E_K) →(2)is an irreducible representation as well, and since ρ'_n(μ) = χ(μ)ρ(μ) = -ρ_n(μ) but ρ'_n(λ) = ρ(λ), we havei^*([ρ'_n]) = (α_n-π, β_n) → (0,0).Thus we may as well assume that (α_n,β_n) → (0,0).Moreover, since the (2) representation variety R(E_K) is compact, we can pass to a subsequence to assume that the ρ_n converge in R(E_K); their limit is a representationρ: π_1(E_K) →(2)with i^*([ρ]) = (0,0) and thus ρ(μ) = ρ(λ) = 1.The limiting representation ρ factors as a compositionπ_1(E_K) ↠π_1(E_K)/μ≅π_1(Y) (2),in which the last map ρ_Y has the same image as ρ itself.If this image is abelian then ρ_Y further factors through H_1(Y); the latter is 2-torsion, and -1 is the only order-2 element of (2), so then the image of ρ lies in the center {±1} of (2).We will show that this is impossible, arguing along the same lines as in <cit.>, and this will imply that ρ must not have abelian image after all.To prove that ρ cannot have central image, we think of it as a point of the (2,) representation variety (E_K), which is an affine variety over , and then since ρ is trivial we can identifyT_ρ(E_K) ≅ H^1(E_K; 𝔰𝔩(2,)_ρ) ≅ H^1(E_K; ^3) ≅^3.We have a finite-to-one (in fact, injective) morphism f: (2,) →(E_K), defined by sending A ∈(2,) to the unique representationρ_A: π_1(E_K) ↠ H_1(E_K;) ≅ H_1(Y) ⊕(2,)such that ϕ_A|_H_1(Y) = ρ|_H_1(Y) and ϕ_A(μ) = A.Then f(1) = ρ_1 = ρ and _ ((2,)) = 3 = _ T_ρ(E_K),where on the left side we view (2,) as a complex variety and compute its dimension at the identity 1∈ f^-1(ρ).Thus <cit.> says that (f) contains a neighborhood of ρ in (E_K), all of whose points are non-singular.But then ρ has a neighborhood in R(E_K) ⊂(E_K) consisting only of points in (f), all of which have abelian image, and this contradicts the assumption that ρ is a limit of irreducible representations.In summary, we have shown that the representation ρ must have non-abelian image, with ρ(μ) = ρ(λ) = 1.Now for any slope p/q, including p/q = 1/0, we have ρ(μ^pλ^q) = 1 and so ρ factors as a compositionπ_1(E_K) ↠π_1(E_K)/μ^pλ^q≅π_1(Y_p/q(K)) (2).The map ρ_p/q has the same image as ρ itself, so its image is non-abelian and thus Y_p/q(K) is not (2)-abelian.§.§ The cut-open pillowcaseIn some cases we can say more about the pillowcase image of X(E_K) and can use this to simplify the statement of Proposition <ref>.For example:Let K be a nullhomologous knot in an (2)-abelian 3-manifold Y, and fix a representation ρ: π_1(E_K) →(2).Suppose that i^*([ρ]) has coordinates (α,β) in the pillowcase, where α∈π.Then ρ has abelian image and β≡ 02π.The claim that β≡ 0 will follow from knowing that (ρ) is abelian: then ρ factors through H_1(E_K;), 
and the homology class [λ] is zero, so we must have ρ(λ) = 1.Thus we focus on the claim that ρ has abelian image.Suppose first that α≡ 0 2π.Then ρ(μ) = 1, and so ρ descends to a representationρ_Y: π_1(Y) ≅π_1(E_K)/μ→(2),which must then have abelian image.But ρ has the same image as ρ_Y, so (ρ) is abelian as well.In the remaining case, we have α≡π2π, so ρ(μ) = -1.Then we can multiply by a central character χ: π_1(E_K) →{±1} with χ(μ) = -1, just as in the proof of Lemma <ref>, to replace ρ with ρ' such that ρ'(μ) = 1.By the previous case we know that ρ' has abelian image, hence so does ρ. Lemma <ref> sometimes allows us to replace the pillowcase with the cut-open pillowcase= [0,π] × (/2π).In particular, the natural quotient map → X(T^2) glues each point (0,β) to (0,2π-β), and (π,β) to (π,2π-β), so it is one-to-one except at points of the form (α,β) with α∈π but β∉π.Lemma <ref> says that if Y is (2)-abelian then i^*(X(E_K)) avoids the images of such points, so it lifts uniquely to .Thus for (2)-abelian Y we have a well-defined mapj: X(E_K) →. The following is now a quick application of Proposition <ref>, generalizing <cit.>.Let K ⊂ Y be a nullhomologous knot in an (2)-abelian 3-manifold, and suppose that I^w_*(Y_0(K)) ≠ 0, where w is Poincaré dual to a meridian of K in Y_0(K).Then the image j(X(E_K)) ⊂ must contain a topologically embedded curve that is homologically essential in H_1(;) ≅.The pillowcase image i^*(X(E_K)) ⊂ X(T^2) does not contain the points P=(0,π) or Q=(π,π), by Lemma <ref>.Thus we can apply Proposition <ref> to find an embedded curveC ⊂ i^*(X(E_K))that is homologically essential in X(T^2) ∖{P,Q}.Lemma <ref> says that C actually lies inX(T^2) ∖( {0,π}× (0,2π) ),where it is still homologically essential, and the inclusion of the latter intois a homotopy equivalence taking C to its image j(C), so j(C) is a homologically essential curve in . 
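One elementary property of such essential curves is worth recording here, since it is what makes them useful for producing one-parameter families of representations below (this remark is ours; we write P̃ for the cut-open pillowcase only within this remark): the cut-open pillowcase is an annulus, and an embedded closed curve that is homologically essential in an annulus must meet every spanning arc, because cutting along such an arc leaves a disk, in which every closed curve is nullhomotopic. Concretely,
\[
\widetilde{P} := [0,\pi]\times(\mathbb{R}/2\pi\mathbb{Z}),
\qquad
H_1(\widetilde{P};\mathbb{Z}) \cong \mathbb{Z},
\]
\[
C \subset \widetilde{P} \ \text{embedded and essential}
\;\Longrightarrow\;
C \cap \big([0,\pi]\times\{\beta_0\}\big) \neq \varnothing
\quad \text{for every } \beta_0 .
\]
In particular such a curve meets the arc {β = 0}, and cutting it there produces paths whose β-coordinate runs from 0 to 2π, as in the arguments that follow.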
We can now deduce the following generalization of the main result of <cit.>.Let Y be an (2)-abelian 3-manifold such that H_1(Y) is 2-torsion, and let K ⊂ Y be a nullhomologous knot with irreducible, boundary-incompressible complement.Then for any r∈ with 0 < |r| ≤ 2, there is a representationρ: π_1(Y_r(K)) →(2)with non-abelian image.Proposition <ref> tells us that I^w_*(Y_0(K)) ≠ 0, where w is Poincaré dual to a meridian of K.By Theorem <ref>, we can thus find a continuous pathγ: [0,1] → [0,π] × [0,2π]such that if we write γ(t) = (α_t, β_t), then * β_0 = 0, β_1 = 2π, and 0 < β_t < 2π for 0 < t < 1;* for each t, there is a representation ρ_t: π_1(E_K) →(2) withρ_t(μ)= [e^iα_t 0; 0 e^-iα_t ],ρ_t(λ)= [e^iβ_t 0; 0 e^-iβ_t ]; * and ρ_t is irreducible for 0 < t < 1, since 0 < β_t < 2π implies that ρ_t(λ) ≠ 1.Since (α_t,β_t) → (α_0,0) as t ↘ 0, and since R(E_K) is compact, some subsequence of the irreducibles{ρ_t | 0 < t < 1 }⊂ X^(E_K)converges to a representation ρ̅_0 ∈ R(E_K) with j([ρ̅_0]) = (α_0,0).Since H_1(Y) is 2-torsion, we can apply Lemma <ref> to say that α_0 is neither 0 nor π.The same argument says that 0 < α_1 < π as well.Now suppose without loss of generality that 0 < r ≤ 2, and write r = p/q in lowest terms, so that 0 < p ≤ 2q.We note for each t ∈ [0,1] thatρ_t(μ^pλ^q) = [e^i(pα_t+qβ_t) 0; 0 e^-i(pα_t+qβ_t) ],and that α_0 < π and p ≤ 2q imply thatpα_0 + qβ_0 = pα_0 < pπ≤ 2qπ,while α_1 > 0 tells us thatpα_1 + qβ_1 = pα_1 + 2qπ > 2qπ.Thus by continuity there is some t∈(0,1) such that pα_t+qβ_t = 2qπ, and then ρ_t is an irreducible representation satisfying ρ_t(μ^pλ^q) = 1, so it descends to the desired representation of π_1(Y_r(K)).It is not clear to us whether the hypotheses of Theorem <ref> should imply the existence of a non-abelian representation π_1(Y_0(K)) →(2), even when Y=S^3.This is equivalent to there being an irreducible ρ: π_1(E_K) →(2) with pillowcase image i^*([ρ]) = (α,0) for some α.If no such ρ exists, then the representation ρ̅_0 ∈ R(E_K) constructed in the proof of Theorem <ref> is a reducible limit of irreducible representations, and this implies that the Alexander polynomial satisfies Δ_K(e^2iα_0) = 0, cf. <cit.> in the case Y=S^3 or <cit.> more generally.On the other hand, these hypotheses do imply that I^w_*(Y_0(K)) ≠ 0, and hence there is an irreducible representation π_1(Y_0(K)) →(3) that does not lift to an (2) representation. § PINCHING AND THE TWISTED I-BUNDLE OVER THE KLEIN BOTTLEIn this section we will construct and study some degree-1 maps between compact 3-manifolds with torus boundary.As a warm-up exercise, we recall the well-known construction of “pinching” maps onto solid tori here; after doing so, we will study the twisted I-bundle over the Klein bottle in some detail, culminating in the construction of pinching maps onto it in Proposition <ref>.We will repeatedly make use of the following claim.Let X be a compact n-manifold, and suppose we have a continuous map f: ∂ X → S^n-1.Then f can be extended to a continuous map f̃: X → D^n, with f̃^-1(∂ D^n) = ∂ X.We identify a collar neighborhood [0,1] ×∂ X of the boundary {1}×∂ X, and then setf̃(t, x) = t· f(x)for all (t,x) ∈ [0,1] ×∂ X.Then f̃({0}×∂ X) = {0}, so we extend f̃ to the rest of X by setting f̃(y) = 0 for all y ∉[0,1]×∂ X. 
Lemma <ref> allows us to construct pinching maps onto solid tori as follows. Let M be a compact, oriented 3-manifold with torus boundary, and let λ⊂∂ M be an essential curve that bounds a properly embedded, orientable surface F ⊂ M. Then there is a degree-1 map f: M → S^1× D^2 that restricts to a homeomorphism ∂ M → S^1 ×∂ D^2 and sends λ to {}×∂ D^2. We first define f on ∂ M by choosing a homeomorphism ∂ M ≅ S^1 ×∂ D^2 that sends λ to {}×∂ D^2, and then extend it to a homeomorphism between collar neighborhoods of both boundaries. Now f is defined on a collar neighborhood of ∂ F in F, and it sends the boundary of this collar to a circle in {}× D^2 that bounds a disk, so Lemma <ref> lets us extend f across all of F. Again we extend this to a collar neighborhood of F, so now f is defined on N(∂ M ∪ F). The boundary of this domain is sent to a 2-sphere that bounds a 3-ball (namely, the boundary component of N((S^1 ×∂ D^2) ∪ ({}× D^2)) that lies on the interior of S^1× D^2), so we use Lemma <ref> to extend f to the rest of M and we are done.

In the rest of this section we will work with rational longitudes, so in order to define them we must first recall a standard fact about 3-manifolds that we will use frequently in <ref>. If M is a compact orientable 3-manifold with boundary, the “half lives half dies” principle (see for example <cit.>) says that over any field , the map i_*: H_1(∂ M;) → H_1(M;) has rank equal to half the dimension of H_1(∂ M;), which is 1 if ∂ M is a torus. (The orientability is needed to ensure that M satisfies Poincaré–Lefschetz duality over .) Applying this over the rationals, we deduce that there is a primitive integral class λ∈ H_1(∂ M;) that generates the kernel of i_* over the rationals, and it is unique up to sign. We call this the rational longitude of M. While λ need not be nullhomologous in M, the integral class i_*(λ) is always torsion.
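As a simple illustration of the definition (our example, not taken from the references): if K is a knot in an integer homology sphere and E_K is its exterior, then the meridian generates the rational homology of the exterior, so
\[
H_1(E_K;\mathbb{Q}) \cong \mathbb{Q}\langle \mu \rangle,
\qquad
\ker\big( H_1(\partial E_K;\mathbb{Q}) \to H_1(E_K;\mathbb{Q}) \big) = \mathbb{Q}\langle \lambda \rangle,
\]
and the rational longitude is the usual Seifert longitude λ, which is in fact nullhomologous. For knot exteriors in general rational homology spheres the class i_*(λ) need only be torsion, and the case where it has order exactly 2 is the one addressed by the pinching construction below.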
§.§ The twisted I-bundle over the Klein bottle We define an annulusA = [-1,1] × (/2π)and a pair of orientation-preserving homeomorphisms A→ A by the formulasϕ(r,θ)= (-r,-θ),τ(r,θ)= (r, θ - π(r+1)).We note that the homeomorphism τ is a Dehn twist about the core c = {0}× (/2π).Define a diffeomorphism ψ_n: A → A for each n∈ by ψ_n = τ^n ∘ϕ.Then the mapping torus M_ψ_n is homeomorphic to the twisted I-bundle over the Klein bottle for all n, and if λ⊂∂ M_ψ_n is the rational longitude then each annulus fiber generates H_2(M_ψ_n,∂ M_ψ_n) ≅ and has boundary homologous in ∂ M_ψ_n to 2λ.We first observe that ψ_n fixes the curve c = {0}×(/2π) setwise, since both ϕ and τ do, but that it reverses the orientation of c: we haveψ_n(r,θ) = τ^n(ϕ(r,θ)) = τ^n(-r,-θ) = (-r, -θ - nπ(1-r))for all r and θ, and this sends the circle {r=0} to itself.Thus the mapping torusM_ψ_n|_c = c × [0,1]/(x,1) ∼ (ψ_n(x), 0)of ψ_n|_c is homeomorphic to a Klein bottle.Next, the mapping torus of ψ_n on all of A is by definitionM_ψ_n = A ×[0,1]/(x,1) ∼ (ψ_n(x),0),and the Klein bottle B = M_ψ_n|_c is a submanifold of M_ψ_n, identified as the image of {r=0}× [0,1] inside A× [0,1].We can check that the projection mapπ: A × [0,1]→ c × [0,1]((r,θ), t)↦((0, θ + (1-t)nπ r), t)fixes all points of c×[0,1], i.e., where r=0, and that the fiber over each point is an interval:π^-1( (0,θ), t ) = {((s,θ-(1-t)nπ s),t) | s ∈ [-1,1]}.Moreover, the monodromy ψ_n identifies the fibers at t=1 with fibers at t=0: we have((r,θ),1) ∼ (ψ_n(r,θ), 0) = ((-r, -θ - nπ(1-r)), 0) = ((-r, (-θ - nπ) - nπ (-r)), 0 ) ∈π^-1( (0,-θ-nπ), 0 ).Thus π descends to a fibration π: M_ψ_n→ B with interval fibers.The total space is orientable whereas B is not, so it must be the twisted I-bundle over B, as claimed.Finally, consider the fiber A_1 = A ×{1} of M_ψ_n.This fiber is primitive as an element ofH_2(M_ψ_n,∂ M_ψ_n) ≅ H^1(M_ψ_n) ≅ H^1(B) ≅,and hence generates it, because it has a single transverse point of intersection with the closed curve(0, -nπ2) × [0,1] ⊂A×[0,1]/(x,1)∼(ψ_n(x),0) = M_ψ_n.(We note that this curve is closed because ψ_n(0,-nπ/2) = (0,-nπ/2).)We orient the components of∂ A_1 = ({± 1}× (/2π)) ×{1}as the boundary of A_1.Then these components are isotopic to each other as oriented curves in the torus ∂ M_ψ_n, because λ = ( {+1}× (/2π) ) ×{1} is identified with ({-1}× (/2π) ) ×{0} in an orientation-reversing way.In particular λ is a rational longitude for M_ψ_n, and ∂ A_1 is homologous in ∂ M_ψ_n to 2λ as claimed. 
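As a consistency check (this computation is ours, using only the monodromy description above): the first homology of the mapping torus confirms that the rational longitude is 2-torsion. Since ψ_n reverses the orientation of the core circle c, it acts on H_1(A;\mathbb{Z}) \cong \mathbb{Z}\langle [c]\rangle as multiplication by -1, so the Wang sequence of the fibration A → M_{ψ_n} → S^1 reads
\[
H_1(A) \xrightarrow{\ \psi_* - 1 \,=\, -2\ } H_1(A) \longrightarrow H_1(M_{\psi_n}) \longrightarrow H_0(A) \xrightarrow{\ 0\ } H_0(A),
\]
giving H_1(M_{ψ_n}) ≅ \mathbb{Z} \oplus \mathbb{Z}/2, with the \mathbb{Z}/2 generated by [c] = [λ]. In particular the rational longitude λ has order exactly 2 in H_1, consistent with the role of the twisted I-bundle over the Klein bottle as the target of the pinching maps constructed later in this section.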
The twisted I-bundle over the Klein bottle is depicted as a mapping torus in Figure <ref>.With this construction at hand, we now study its (2) character variety.Let N be the twisted I-bundle over the Klein bottle, with rational longitude λ_0.Then there is a unique peripheral curve μ_0 ⊂∂ N with Dehn filling N(μ_0) ≅^3#^3, and μ_0 is dual to λ_0.Every other (2)-abelian Dehn filling of N has cyclic fundamental group, namelyor /4k for some integer k ≥ 1.Viewing (2) as the unit quaternions, every representationρ: π_1(N) →(2)is conjugate to one with (ρ(μ_0),ρ(λ_0)) equal to either (1,±1) or (-1, e^it) for some t ∈/2π, and every such value is realized by some ρ.The image of ρ is non-abelian if and only if ρ(λ_0) ≠±1.Taking n=0 in Lemma <ref>, the mapping torus M_ψ_0 = M_ϕ≅ N has fundamental groupπ_1(M_ϕ) ≅π_1(B) ≅⟨ a,b | aba^-1 = b^-1⟩,where a is identified with the section(0,0) × [0,1] ⊂A× [0,1]/(x,1)∼(ϕ(x),0)of M_ϕ→ S^1, and b is identified with a core circle c ×{1/2} of one of the annulus fibers.Then b is isotopic in M_ϕ to the rational longitude λ_0 ⊂∂ M_ϕ, while a^2 is isotopic to a dual peripheral curveμ_0 = {(1,0),(-1,0)}× [0,1] ⊂ M_ϕ.Thus π_1(M_ϕ) has peripheral subgroup ⟨μ_0,λ_0 ⟩ = ⟨ a^2, b⟩.We first claim that the Dehn filling N(μ_0) is ^3#^3.Indeed, by viewing N as the mapping torus of ϕ: A → A, one can see that the annuli( [-1,1] ×{θ}) × [0,1] ⊂ A × [0,1], θ=0 or πgive rise to a pair of Möbius bands in N, with tubular neighborhoods([-1,1] × I) × [0,1]/(x,1) ∼ (ϕ(x),0),I = (-π2,π2)or(π2,3π2),whose boundaries are parallel copies of μ_0.The meridional disks in the Dehn filling solid torus complete each of these Möbius bands to real projective planes in N(μ_0), and their tubular neighborhoods to punctured copies of ^3, producing the desired identification N(μ_0) ≅^3 #^3.Next, we will show that μ_0 is unique.Given any other slope α = μ_0^pλ_0^q = a^2pb^q in ∂ N, we must have q ≠ 0 and (p,q)=1; we take q ≥ 1 without loss of generality.If q=1 then we compute thatπ_1(N(α)) ≅⟨ a,b | aba^-1=b^-1, b=a^-2p⟩≅/4|p|,which is cyclic of order 4|p| if p ≠ 0, and isotherwise (corresponding to N(λ_0) ≅ S^1× S^2).If q ≥ 2 and p is odd then we can define a non-abelian representationπ_1(N(α)) ≅⟨ a,b | aba^-1 = b^-1, a^2pb^q=1 ⟩→(2)by sending a↦ j and b ↦ e^iπ/q, so N(α) is not (2)-abelian.Similarly if q ≥ 3 and p is even then we can send a ↦ j and b ↦ e^i· 2π/q, and this is also non-abelian; the case where q=2 and p is even does not occur because p and q are coprime.Thus every (2)-abelian Dehn filling of N other than N(μ_0) ≅^3#^3 has fundamental groupor /4|p| for some p.Finally, we consider an arbitrary representation ρ: π_1(M_ϕ) →(2), which must satisfy ρ(aba^-1) = ρ(b^-1).If ρ(a) = ±1 then ρ is reducible and ρ(b) = ρ(b^-1) implies that ρ(λ_0) = ρ(b) = ±1, while ρ(μ_0) = ρ(a^2) = 1.Otherwise up to conjugation we have ρ(a) = e^js for some s ∉π; the relation ρ(aba^-1) = ρ(b^-1) implies that ρ(a) = ± j and that ρ(b) has zero j-component, so up to another conjugation we can further arrange that ρ(a) = j and ρ(b) = e^it for some t, and any value of t works.In this case we have ρ(μ_0) = ρ(a^2) = -1 and ρ(λ_0) = ρ(b) = e^it, and ρ is irreducible unless ρ(b) = e^it commutes with ρ(a)=j, i.e., unless ρ(b)=±1.§.§ Pinching maps for rational longitudes of order 2 In this subsection we will construct degree-1 maps from compact manifolds M with torus boundary onto the twisted I-bundle over the Klein bottle.In contrast to Proposition <ref>, which only works when the rational longitude of M is nullhomologous, here we 
require the rational longitude to have order 2.Let M be a compact, oriented 3-manifold with torus boundary, and suppose that the rational longitude λ_M ⊂∂ M has order 2 in H_1(M).Then there is a degree-1 mapf: M → N,where N is the twisted I-bundle over the Klein bottle, such that f restricts to a homeomorphism ∂ M →∂ N sending λ_M to a rational longitude λ_N ⊂∂ N.Using Lemma <ref>, we realize N as the mapping torus of some self-diffeomorphismψ_n: A → Aof the annulus; we will choose the integer n∈ later.We will also letF ⊂ Mbe a connected, properly embedded rational Seifert surface, whose boundary is two disjoint copies λ^0_M, λ^1_M ⊂∂ M of the rational longitude for M.We will construct the map f in stages: first we define it on ∂ M, then we extend it to a rational Seifert surface F' constructed by stabilizing F, and then we extend it across the remainder of M.This last step requires substantially more care than did the pinching maps onto solid tori in Proposition <ref>: letting M_0 denote the remaining portion of M, we will want to send M_0 into N minus a neighborhood of an annulus fiber, i.e., a solid torus.In order to extend our initial map ∂ M_0 → S^1 × S^1 to M_0 → S^1× D^2, we must arrange for some curve γ⊂∂ M_0 that is nullhomologous in M_0 to be sent to {}× S^1, so that we can collapse a surface in M_0 with boundary γ to {}× D^2.By contrast, the target in the analogous step of Proposition <ref> was a solid torus minus a disk fiber, which is a ball, and we could just apply Lemma <ref> to extend ∂ M_0 → S^2 to M_0 → D^3 without any extra hypotheses.We fix points p_0 ∈λ^0_M and p_1 ∈λ^1_M, and a properly embedded, oriented arc α⊂ F from p_0 to p_1.Identifying a closed tubular neighborhood of F as F× [-1,1] ⊂ M, with F = F ×{0}, and letting E_F = M ∖(F ×(-1,1))be the exterior of F, we build a closed curve c ⊂∂ E_F as the union of the oriented arcsα_± = α×{±1}⊂ F ×{±1}with a pair of arcs in ∂ M ∩ E_F from p_1 ×{1} to p_0 ×{-1}, and from p_1×{-1} to p_0 ×{1}.See the top row of Figure <ref>. 
Next, we take a collar neighborhood ∂ E_F × [-1,0] ⊂ E_F of the boundary of E_F, which we identify in these coordinates as ∂ E_F ×{0}.Then c = c×{0} and c' = c×{-1} cobound an annulus in E_F, namely the product c×[-1,0].We take an arc β connecting c' to F in the interior of M, chosen so that β intersects the annulus c × [-1,0] in a separating arc.Then we stabilize F to get a new rational Seifert surface F', with g(F') = g(F)+1, by attaching the boundary of a small tubular neighborhood of c' ∪β, as shown in the bottom row of Figure <ref>; we also perturb the arc α_- ⊂ F×{-1} slightly so that it avoids this neighborhood.The end result is that we have a properly embedded disk D in the exterior E_F'≅ M ∖(F'×(-1,1)) of F', consisting of the annulus c×[-1,0] ⊂ E_F minus a neighborhood of the arc β.The intersection∂ D ∩∂ M = c ∩∂ Mconsists of the two chosen arcs from p_1 ×{±1} to p_0 ×{∓1}, and the rest of ∂ D consists of a pair of properly embedded arcsα'_+ ×{+1} ⊂ F' ×{+1},α'_- ×{-1} ⊂ F' ×{-1}from p_1 ×{±1}∈λ^1_M ×{±1} to p_0 ×{±1}∈λ^0_M ×{±1}.We are now ready to construct the desired map f: M → N, where N is the mapping torusM_ψ_n = A × [0,1]/(x,1) ∼ (ψ_n(x),0)as described in Lemma <ref>.We start by choosing a mapg: (F',∂ F') → (A, ∂ A)as follows: we choose an identification of ∂ F' with the two components of ∂ A, and then extend this by sending the arc α'_- ⊂ F' homeomorphically onto some properly embedded arc γ connecting the components of ∂ A.We extend this to collar neighborhoods of each, getting a partially defined homeomorphismg: N(∂ F' ∪α'_-)N(∂ A ∪γ)as shown in Figure <ref>.This sends the circle∂( N(∂ F' ∪α'_-) ) ∖∂ F'homeomorphically to a circle in A that bounds a disk, and this circle bounds the portion of F' on which g has not yet been defined, so we now use Lemma <ref> to extend g to the rest of F'. 
We now define f: M → N on the union of ∂ M and the rational Seifert surface F' as follows.We first choose a homeomorphismf|_∂ M: ∂ M →∂ Nthat takes the two rational longitudes λ^i_M to the components of∂ A ×{1}⊂A × [0,1]/(x,1) ∼ (ψ_n(x),0)≅ N.Having done so, we use the above map g: (F',∂ F') → (A,∂ A) to setf(x) = (g(x),1) ∈A × [0,1]/(x,1) ∼ (ψ_n(x),0)for all x ∈ F'.We can extend f to a collar neighborhood of ∂ M, and then to a neighborhood F' × [-1,1] of F' such thatf(F' ×{1})⊂ A ×{ϵ}, f(F' ×{-1})⊂ A ×{1-ϵ}where ϵ > 0 is small (say ϵ=1/10 for concreteness).This is illustrated in Figure <ref>.We note that so far the image of f, which has been defined on a neighborhood of ∂ M ∪ F', is the union ofA ×([0,ϵ] ∪ [1-ϵ,1])and a neighborhood of ∂ N.The complement of that image is a solid torus V ⊂ N, and the properly embedded disk D ⊂ E_F' has its boundary ∂ D sent to an essential curve in ∂ V, consisting of * one arc in each component of ∂ A × [ϵ,1-ϵ];* the image g(α'_-)×{1-ϵ} = γ×{1-ϵ} of the arc α'_-×{-1}⊂ F'×{-1};* the image ψ_n(g(α'_+)) ×{ϵ} of the arc α'_+ ×{+1}⊂ F'×{+1}.The curve f(∂ D) may not bound a disk in V, but if we change the parameter n in the monodromy ψ_n, then its intersection with A ×{ϵ} changes by the corresponding number of Dehn twists along the core of that annulus.Thus by a suitable choice of n we can arrange for f(∂ D) to be nullhomologous in V, hence null-homotopic in V; we apply a further homotopy, supported away from ∂ M, so that f|_∂ D is a homeomorphism sending ∂ D to the boundary of a properly embedded disk in V.We extend f across D by sending it homeomorphically to that disk, and then further extend f to a collar neighborhood of D.At this point we have defined f on a neighborhood of ∂ M ∪ F' ∪ D, and the boundary of the subdomain where f remains undefined is sent to a 2-sphere in V ⊂ N that bounds a ball.We thus apply Lemma <ref> again to extend f to the rest of M, and this completes the proof. § SPLICING KNOTS IN MANIFOLDS WITH 2-TORSION HOMOLOGYIn this section we study images of character varieties in the pillowcase to understand what happens when we splice the complements of knots in 3-manifolds whose homology is 2-torsion. 
§.§ The nullhomologous caseOur main result here is a generalization of <cit.>, which describes the case where Y_1 ≅ Y_2 ≅ S^3, using the methods of <cit.>.Let Y_1 and Y_2 be closed, orientable 3-manifolds such that H_1(Y_1;) and H_1(Y_2;) are both 2-torsion, and let K_1 ⊂ Y_1 and K_2 ⊂ Y_2 be non-trivial nullhomologous knots with irreducible complements.We splice their exteriors E_K_1 = Y_1 ∖ N(K_1) and E_K_2 = Y_2 ∖ N(K_2) to form a closed 3-manifoldY = E_K_1∪_∂ E_K_2,gluing the meridian and longitude μ_1 and λ_1 in ∂ E_K_1 to the longitude and meridian λ_2 and μ_2 in ∂ E_K_2, respectively.Then there is a representationρ: π_1(Y) →(2)with non-abelian image.Proposition <ref> gives us degree-1 mapsY → E_K_1(λ_2) ≅ Y_1 and Y → E_K_2(λ_1) ≅ Y_2,which induce surjections π_1(Y) →π_1(Y_i) for i=1,2.If there is some non-abelian representation π_1(Y_i) →(2) then we can compose it with the surjection from π_1(Y) to get the desired ρ: π_1(Y) →(2).Thus we may assume from now on that both Y_1 and Y_2 are (2)-abelian.Since neither K_1 nor K_2 is unknotted, we apply Theorem <ref> to see that their zero-surgeries have non-trivial instanton homology: if w_ℓ∈ H^2( (Y_ℓ)_0(K_ℓ); ) is Poincaré dual to a meridian of K_ℓ, thenI^w_ℓ_*((Y_ℓ)_0(K_ℓ)) ≠ 0for ℓ=1,2.Noting that each Y_ℓ is (2)-abelian, Theorem <ref> says that the pillowcase imagesj(X(E_K_ℓ)) ⊂contain homologically essential loops.This means that for ℓ=1,2 we can find continuous pathsγ^ℓ_t = (α^ℓ_t,β^ℓ_t): [0,1] → [0,π] × [0,2π]such that * β^ℓ_0 = 0, β^ℓ_1 = 2π, and 0 < β^ℓ_t < 2π for 0 < t < 1;* 0 < α^ℓ_t < π for 0 < t < 1;* for each t∈[0,1], there is a representation ρ^ℓ_t: π_1(E_K_ℓ) →(2) such thatρ^ℓ_t(μ_ℓ)= [e^iα^ℓ_t 0; 0 e^-iα^ℓ_t ],ρ^ℓ_t(λ_ℓ)= [e^iβ^ℓ_t 0; 0 e^-iβ^ℓ_t ]; * for 0 < t < 1, each ρ^ℓ_t is irreducible, since ρ^ℓ_t(λ_ℓ) ≠ 1.Since H_1(Y_1;) is 2-torsion, Lemma <ref> also tells us that * 0 < α^1_t < π for all t ∈ [0,1],since (α^1_0,β^1_0) = (α^1_0,0) is a limit point of (α^1_t,β^1_t) ∈ j(X^(E_K_1)) as t approaches 0 from above, and likewise for (α^1_1,β^1_1).Now if we let τ = inf{ t ∈[0,1] |β^2_t = π}, so 0 < τ < 1, then the transposed pathγ̃^2_t = (β^2_t,α^2_t): [0,τ] → [0,π] × [0,2π]starts on the line {0}× [0,2π] and ends on the line {π}× [0,2π], with second coordinate α^2_t ∈ (0,π) for all t ∈ (0,τ].Then γ̃^2_t separates the rectangle [0,π]×[0,2π], with the subsets(0,π) ×{0}and (0,π) ×{2π}in different path components of the complement of its image.These subsets contain γ^1_0 = (α^1_0,β^1_0) and γ^1_1 = (α^1_1,β^1_1) respectively, so γ̃^2 must intersect the path γ^1 at some pointγ^1_t = (α^1_t,β^1_t),0 < t < 1.Taking t̃∈ [0,τ] so that γ^1_t = γ̃^2_t̃, it follows thatρ^1_t(μ_1) = ρ^2_t̃(λ_2)= [e^iα^1_t 0; 0 e^-iα^1_t ],ρ^1_t(λ_1) = ρ^2_t̃(μ_2)= [e^iβ^1_t 0; 0 e^-iβ^1_t ].Thus ρ^1_t and ρ^2_t̃ agree on the torus ∂ E_K_1 = ∂ E_K_2 inside Y, and so they glue together to give a representationρ: π_1(Y) →(2).This representation restricts to π_1(E_K_1) as the irreducible ρ^1_t, where 0 < t < 1, and its restriction to π_1(E_K_2) is likewise irreducible since ρ(λ_2) = ρ(μ_1) = ρ^1_t(μ_1) is not the identity.Thus ρ is irreducible as well.§.§ The homologically essential caseIn this subsection, we consider what happens if we splice two knot complements where one of the knots is homologically essential.We will ultimately prove the following.Let K_1 ⊂ Y_1 and K_2 ⊂ Y_2 be knots in rational homology spheres such that H_1(Y_1;) and H_1(Y_2;) are both 2-torsion.Suppose that the exteriors E_K_1 and E_K_2 are irreducible, and that * The rational 
longitude λ_1 ⊂∂ E_K_1 has order 2 in H_1(E_K_1;);* K_2 is nullhomologous, with irreducible, boundary-incompressible complement.We form a closed 3-manifoldY = E_K_1∪_∂ E_K_2by splicing the exteriors along their boundaries so that μ_1 ∼λ_2 and λ_1 ∼μ_2.Then there is a representationρ: π_1(Y) →(2)with non-abelian image. Unlike in Theorem <ref>, one key obstacle is that we cannot make use of a degree-1 pinching mapY → E_K_2(λ_1) ≅ Y_2,because we may not be able to collapse E_K_1 onto a solid torus.However, since the rational longitude λ_K_1 has order 2 in homology, we can use Proposition <ref> to pinch it to the next best thing, the twisted I-bundle over the Klein bottle. Suppose to the contrary that Y is (2)-abelian.We first note that by the Mayer–Vietoris sequence, the homology H_1(Y) is isomorphic toH_1(E_K_1) ⊕ H_1(E_K_2)/μ_1∼λ_2, λ_1∼μ_2≅H_1(Y_1) ⊕ H_1(E_K_2)/λ_1 ∼μ_2≅ H_1(Y_1) ⊕ H_1(Y_2),where we first use the fact that [λ_2] = 0 in H_1(E_K_2), and then that H_1(E_K_2) ≅ H_1(Y_2) ⊕ with thesummand generated by [μ_2].In particular H_1(Y;) is 2-torsion.Now we apply Proposition <ref> to construct a degree-1 mapY → N ∪_∂ E_K_2,where N is the twisted I-bundle over the Klein bottle.The rational longitude λ_0 of N is still glued to μ_2, since this map preserves rational longitudes, but a priori we only know that some curve μ⊂∂ N that is dual to λ_0 has been glued to λ_2.However, we can now pinch E_K_2 to a solid torus as in Proposition <ref>, so we have a composition of degree-1 mapsY → N ∪_∂ E_K_2→ N(λ_2) = N(μ).This induces a surjection on π_1 and hence on H_1, so we conclude that H_1(N(μ)) is 2-torsion since H_1(Y) is; and that both Y' = N ∪_∂ E_K_2and N(μ) are (2)-abelian, since Y is.Since N(μ) is (2)-abelian and its first homology is 2-torsion, Proposition <ref> says that μ is the unique slope μ_0 such that N(μ_0) ≅^3#^3.We now claim that Y_2 must be (2)-abelian.We know that Y' is (2)-abelian, and since the slopes μ_0 and λ_0 are glued to λ_2 and μ_2 respectively, we haveπ_1(Y') = ⟨ a,b | aba^-1=b^-1⟩∗π_1(E_K_2)/a^2=μ_0∼λ_2, b=λ_0∼μ_2.Suppose that there is a non-abelian representation π_1(Y_2) →(2), or equivalently some non-abelian ρ: π_1(E_K_2) →(2) with ρ(μ_2) = 1.Then since every element of (2) has a square root, we can extend ρ to π_1(Y') by setting ρ(b) = 1 and letting ρ(a) be some square root of ρ(λ_2).This contradicts the fact that Y' is (2)-abelian, so Y_2 is (2)-abelian after all.Now we recall from Proposition <ref> that for every t ∈/2π, there is a representation ρ_t: π_1(N) →(2) withρ_t(μ_0)= -1,ρ_t(λ_0)= [e^it 0; 0 e^-it ],and that this has non-abelian image if t ∉π.If for some t we can find a representation ρ_K_2: π_1(E_K_2) →(2) withρ_K_2(μ_2)= [e^it 0; 0 e^-it ],ρ_K_2(λ_2)= -1,then we could glue this to the corresponding ρ_t to get a representation ρ': π_1(Y') →(2).But then ρ_K_2 has non-abelian image, since otherwise we would have ρ_K_2(λ_2) = 1, and so ρ' must be non-abelian as well.This would also contradict the fact that Y' is (2)-abelian.In conclusion, we have shown that if Y is (2)-abelian then there cannot be any representations ρ: π_1(E_K_2) →(2) with ρ(λ_2) = -1.But such representations up to conjugacy generate I^w_*( (Y_2)_0(K_2) ), where w is dual to a meridian of K_2, so the latter invariant must be zero.On the other hand, we know that Y_2 is (2)-abelian, and that K_2 ⊂ Y_2 is nullhomologous with irreducible, boundary-incompressible complement, so Theorem <ref> says that I^w_*((Y_2)_0(K_2)) ≠ 0.This is a contradiction, so we conclude that the spliced manifold Y 
could not have been (2)-abelian after all. § GLUING COMPLEMENTS OF KNOTS IN SUMS OF ^3Our goal in this somewhat lengthy section is to prove the following theorem.Let K_1 ⊂ Y_1 and K_2 ⊂ Y_2 be nullhomologous knots in rational homology spheres whose 2-surgeries satisfy(Y_1)_2(K_1)≅#^k ^3, (Y_2)_2(K_2)≅#^ℓ^3for some integers k, ℓ≥ 1, and suppose that their exteriors E_K_1 and E_K_2 are irreducible and not solid tori.We form a closed 3-manifoldY = E_K_1∪_∂ E_K_2by gluing the exteriors along their boundaries so that μ_1 ∼μ_2^-1 and λ_1 ∼μ_2^2 λ_2.Then there is a representationπ_1(Y) →(2)whose restrictions to each of π_1(E_K_1) and π_1(E_K_2) have non-abelian image. The gluing map used to construct Y in Theorem <ref> is not arbitrary: it produces a toroidal 3-manifold whose homology is 2-torsion, and we will eventually see in <ref> that for such manifolds, this is essentially the only gluing map we need to consider that is not of the form (μ_1,λ_1) ∼ (λ_2,μ_2).The proof of Theorem <ref> will occupy the next several subsections. §.§ Knots with #^n ^3 surgeries Suppose that Y is a rational homology 3-sphere, and K ⊂ Y is a nullhomologous knot such that Y_2(K) ≅#^n ^3 for some n≥ 1.If Y_0(K) is irreducible then we have I^w_*(Y_0(K)) ≠ 0, which we can use to understand something about the (2) character variety of the complement of K.We wish to understand exactly when this happens, so that we can almost always guarantee that I^w_*(Y_0(K)) will be nonzero.Let Y be a rational homology 3-sphere, and suppose for some nullhomologous knot K ⊂ Y with irreducible exterior E_K = Y ∖ N(K) that either * Y_2(K) ≅#^n ^3, where n ≥ 1; or* Y_p(K) is a lens space of order p for some prime p.Then Y_0(K) is irreducible unless (Y,K) ≅ (S^3,U).We first consider the case where Y_2(K) ≅#^n ^3 for some n ≥ 2.In this case, the Dehn filling E_K(μ^2λ) produces the connected sum #^n ^3, which is reducible.Since E_K is irreducible, any pair of slopes producing reducible fillings must have distance 1 <cit.>; but then λ has distance 2 from μ^2λ, so E_K(λ) ≅ Y_0(K) must be irreducible as well.From now on we suppose that Y_p(K) ≅ L(p,q) for some prime p, which may be either 2 or odd, so that Y is an integral homology sphere.We will suppose that Y_0(K) is reducible.Then the exterior E_K has a reducible Dehn filling (of slope 0) and a Dehn filling with finite fundamental group (of slope p), and these filling slopes have distance p ≥ 2.A theorem of Boyer and Zhang <cit.> thus asserts that one of the following must hold: either * E_K is a simple (i.e., irreducible and atoroidal) Seifert fibered manifold, or* E_K is a cable on the twisted I-bundle over the Klein bottle.In the latter case any Dehn filling of E_K must contain a Klein bottle, but we know that there are no embedded Klein bottles in ^3 <cit.>.There are also no Klein bottles in a lens space L(p,q) where p is odd: any Klein bottle B would be non-separating, hence [B] would be a nonzero class in H_2(L(p,q);/2) ≅ 0.Thus E_K must be Seifert fibered instead.We refer to <cit.> for the facts about Seifert fibered 3-manifolds that we will use below.If we fix a Seifert fibration on E_K, then it extends over any Dehn filling of ∂ E_K as long as the filling in question is not along the fiber slope.In particular, the Seifert fibration extends over either Y_p(K) ≅ L(p,q) or Y_0(K).If it extends over Y_0(K) then we know that the only non-prime Seifert fibered space is ^3 #^3, so since Y_0(K) is not a rational homology sphere it must be prime; then Y_0(K) is reducible by assumption and 
also prime, so it must be S^1× S^2.In either case, every Seifert fibration on L(p,q) and on S^1× S^2 has base orbifold homeomorphic to S^2, so the fibration on E_K has base orbifold homeomorphic to a disk.Next, we claim that Y_0(K) ≅ S^1× S^2.We have already argued that this is the case if the Seifert fibration on E_K extends over Y_0(K).If it does not, then the longitude of K must have been the fiber slope, and since the base orbifold of E_K is orientable, it follows that Y_0(K) is a connected sum of lens spaces and copies of S^1× S^2 <cit.>.Then from H_1(Y_0(K);)≅ we must have Y_0(K) ≅ S^1× S^2 as claimed.We have shown that the core of 0-surgery on K ⊂ Y is a knot K' ⊂ S^1× S^2 that admits an L(p,q) surgery, and whose exterior is Seifert fibered.Baker, Buck, and Lecuona <cit.> showed that the only such knots are (a,b)-torus knots in S^1× S^2, and that if a≥ 2 then the corresponding lens spaces are L(na^2,nab+1) for n∈; but these do not have homology of prime order, hence cannot be L(p,q).Thus K' must be isotopic to S^1 ×{}⊂ S^1× S^2, and it follows that (Y,K) ≅ (S^3,U). Let K ⊂ Y be a nullhomologous knot with irreducible complement in a rational homology sphere, and suppose that Y_2(K) ≅#^n^3 for some n ≥ 1 but that (Y,K) ≇(S^3,U).Then Y is not (2)-abelian.Proposition <ref> says that Y_0(K) is irreducible, so if w ∈ H^2(Y_0(K);) is Poincaré dual to a meridian of K, then I^w_*(Y_0(K)) ≠ 0 by Theorem <ref>.The homology of Y is 2-torsion sinceH_1(Y;) ⊕ (/2) ≅ H_1(Y_2(K);) ≅ (/2)^nis 2-torsion, so if Y were (2)-abelian then Theorem <ref> would give us a non-abelian representation π_1(Y_2(K)) →(2).But this is impossible, since every representation ofπ_1(Y_2(K)) ≅π_1(#^n^3) ≅ (/2) ∗…∗ (/2)into (2) must send each /2 factor into {±1} and thus have central image, so Y must not be (2)-abelian after all.§.§ The pillowcase image of a knot with a #^n^3 surgery Suppose that Y is a rational homology sphere and that K ⊂ Y is a nullhomologous knot with irreducible complement such that Y_2(K) ≅#^n^3, and that (Y,K) ≇(S^3,U).Since Corollary <ref> tells us that Y is not (2)-abelian, we cannot argue as in Lemma <ref> that the pillowcase imagei^*(X(E_K)) ⊂ X(T^2)avoids the lines {α=0} and {α=π}: there may be a non-abelian representation π_1(Y)→(2) that sends the homotopy class of the longitude λ to something non-trivial.In particular, it no longer makes sense to talk about the image j(X(E_K)) in the cut-open pillowcase.Thus in what follows we will stick to the pillowcase, identified asX(T^2) ≅(/2π) × (/2π)/(α,β) ∼ (-α,-β).We will also describe it in terms of a fundamental domain for the above quotient, namely asX(T^2) ≅[0,π] × [0,2π]/{[ (0,β) ∼ (0,2π-β),; (π,β) ∼ (π,2π-β),;(α,0) ∼ (α,2π) ]},which equips it with a quotient map [0,π] × [0,2π] → X(T^2).Let K ⊂ Y be a nullhomologous knot in a rational homology sphere with Y_2(K) ≅#^n^3 for some n ≥ 0.Then the pillowcase imagei^*(X(E_K)) ⊂ X(T^2)does not contain any points (α,β) with 2α+β∈ 2π, except for (0,0) and (π,0).Moreover, its intersection with the line{ 2α + β≡π2π}⊂ X(T^2)is connected and contains the point (π/2,0).(See Figure <ref>.)Let ρ: π_1(E_K) →(2) be a representation with i^*([ρ]) = (α,β) and 2α+β∈π.Then up to conjugacy we haveρ(μ^2λ) = [e^i(2α+β)0;0 e^-i(2α+β) ] = ±1.We will consider each value separately below.If ρ(μ^2λ) = 1, then 2α+β≡ 0 2π, and ρ factors throughπ_1(E_K)/μ^2λ≅π_1(Y_2(K)) ≅π_1(#^n^3) ≅ (/2)^∗ n.Every homomorphism (/2)^∗ n→(2) has central image, because each /2 factor must be sent to {±1}, so in particular this means that ρ must 
have central image. But then ρ factors through H_1(E_K;), and thus it sends the nullhomologous λ to 1. This is equivalent to β≡ 02π, and then α∈π as claimed.

We assume from now on that ρ(μ^2λ) = -1, so 2α+β≡π2π. If the only such representations satisfy (α,β) = (π/2,0) then there is nothing to show, so we will assume that (α,β) is different from (π/2,0). In particular, since β≢0 2π we know that ρ(λ) ≠ 1, and thus ρ must have non-abelian image. In this case, while ρ itself no longer factors through π_1(Y_2(K)), the adjoint representation ρ̅: π_1(E_K) →(3) does send μ^2λ to the identity. This means that ρ̅ factors as a composition
π_1(E_K) ↠π_1(E_K)/μ^2λ≅ (/2)^∗ n→(3),
and we write ϕ for the last map (/2)^∗ n→(3). It must also have non-trivial image, since otherwise the image of ρ would have been abelian. Now we let x_1,…,x_n be generators of the /2 factors of (/2)^∗ n. The map ϕ sends each x_i to an element ϕ(x_i) ∈(3) of order at most 2, hence either to the identity or to a 180-degree rotation about some axis L_i; and it sends at least one x_i to such a rotation, since ρ̅ is non-trivial. The space of such rotations is connected and homeomorphic to ^2, since each rotation is uniquely determined by its axis and vice versa. Thus we can define a family of homomorphisms
ϕ_t: (/2)^∗ n→(3),
with ϕ_0 = ϕ and ϕ_1 having abelian image, as follows:
* If ϕ(x_i) = 1 then we let ϕ_t(x_i) = 1 for all t ∈ [0,1].
* If ϕ(x_i) is a 180-degree rotation around an axis L_i, then we choose a path γ_i: [0,1] →^2 from [L_i] to [1:0:0] and let ϕ_t(x_i) be the 180-degree rotation about γ_i(t).
We see that ϕ_1 has abelian image of order 2, since it sends each x_i to either 1 or the 180-degree rotation about the x-axis, and at least one of the ϕ_1(x_i) is a rotation. The corresponding continuous family of homomorphisms
ρ̅_t: π_1(E_K) ↠π_1(E_K)/μ^2λ≅ (/2)^∗ n→(3),
obtained by composing with ϕ_t, satisfies ρ̅_t(μ^2λ) = 1 for all t ∈ [0,1] by construction. Moreover, we know that ρ̅_0 = ρ̅ lifts to a representation π_1(E_K) →(2), namely ρ itself, so the obstruction w_2(ρ̅_0) to lifting must be zero, and then since w_2(ρ̅_t) = w_2(ρ̅_0) by continuity, it follows that all of the ρ̅_t lift to a continuous family of representations
ρ_t: π_1(E_K) →(2).
We now have ρ̅_t(μ^2λ) = 1, so ρ_t(μ^2λ) must be either 1 or -1, and then
ρ_t(μ^2λ) = ρ_0(μ^2λ) = -1
for all t. This says that the pillowcase images i^*([ρ_t]) = (α_t,β_t) all lie on the line 2α+β≡π2π. Moreover, since the image in (3) of ρ̅_1, the adjoint of ρ_1, has order 2, it follows that the lift ρ_1 has cyclic image of order 4 in (2). But then ρ_1 has abelian image, so we must have (α_1,β_1) = (π/2,0), and then the points (α_t,β_t) trace a continuous path in i^*(X(E_K)) from our original (α,β) = (α_0,β_0) to (α_1,β_1) = (π/2,0). We conclude that the intersection
i^*(X(E_K)) ∩{ 2α + β≡π2π}⊂ X(T^2)
is connected, since it contains a continuous path from every one of its points to (π/2,0). It must also contain the point (π/2,0), as claimed, as the image of an abelian representation
π_1(E_K) ↠ H_1(E_K;) ≅ H_1(Y) ⊕→(2)
which is trivial on H_1(Y) and sends the meridian generating the summand to [ i 0; 0 -i ].

Let K ⊂ Y be a nullhomologous knot in a rational homology sphere with Y_2(K) ≅#^n^3 for some n ≥ 1, and suppose that the exterior E_K is irreducible and that (Y,K) ≇(S^3,U). Then the pillowcase image
i^*(X(E_K)) ⊂ X(T^2)
satisfies exactly one of the following:
* The image i^*(X(E_K)) contains the entire line {2α+β≡π2π}.
* The image i^*(X(E_K)) contains neither P=(0,π) nor Q=(π,π), and then it contains a homologically essential simple closed curve
C ⊂ i^*(X(E_K)) ⊂ X(T^2) ∖{P,Q}
that is disjoint from the line {2α+β∈ 2π}.
Proposition <ref> tells us that i^*(X(E_K)) contains all of {2α+β≡π2π} if and only if it contains both of the endpoints P=(0,π) and Q=(π,π), since its intersection with this line is connected.We further observe that it contains P if and only if it contains Q, since we can multiply a representationρ: π_1(E_K) →(2)with ρ(μ) = ±1 and ρ(λ) = -1 by a central characterχ: π_1(E_K) ↠ H_1(E_K) ≅ H_1(Y) ⊕→{±1}sending the meridian (as a generator of thesummand) to -1 in order to get a new representation ρ̃ with ρ̃(μ) = ∓1 and ρ̃(λ) = -1.Thus if i^*(X(E_K)) does not contain the entire line {2α+β≡π}, then it cannot contain either of P or Q.Now we suppose that P,Q ∉i^*(X(E_K)).Since E_K is irreducible and (Y,K) ≇(S^3,U), Proposition <ref> and Theorem <ref> tell us that I^w_*(Y_0(K)) ≠ 0, where w ∈ H^2(Y_0(K)) is Poincaré dual to a meridian of K.Using the assumption that P,Q ∉i^*(X(E_K)), we can now apply Proposition <ref> to get the desired essential curve C ⊂ i^*(X(E_K)) ∖{P,Q}.Finally, suppose that the curve C intersects the line {2α+β∈ 2π}; according to Proposition <ref>, this can only happen at a point of the form (kπ,0) where k is 0 or 1.Parametrizing C by a continuous, injective mapf: /↪ i^*(X(E_K)) ↪ X(T^2)so that f(0) = (kπ,0), we claim that there must be a sequence t_n → 0 such that f(t_n) is not on the line {β=0}: assuming otherwise, there is some ϵ > 0 such that f restricts to a continuous, injective mapf|_(-ϵ,ϵ): (-ϵ,ϵ) ↪ [0,π] ×{0}↪ X(T^2),with 0 sent to an endpoint (kπ,0) of the line segment [0,π] ×{0}, and this is impossible.Now since f(t_n) ∈ i^*(X^(E_K)) and t_n → 0, we see that f(0) = (kπ,0) is a limit point of the image i^*(X^(E_K)).But H_1(Y) is 2-torsion, since H_1(Y) ⊕ (/2) ≅ (/2)^⊕ n, so Lemma <ref> says this can only happen if Y_2(K) ≅#^n ^3 is not (2)-abelian, a contradiction.We conclude that C cannot pass through (kπ,0) after all.§.§ Symmetries of the pillowcaseWe do not need to use instanton homology for the remainder of this section, since Propositions <ref> and <ref> will suffice for the proof of Theorem <ref>.Given a nullhomologous knot K ⊂ Y, we will therefore writeI_K = i^*(X(E_K)) ⊂ X(T^2)for the pillowcase image of the (2)-character variety of K.With the gluing map of Theorem <ref> in mind, we now define a mapσ: X(T^2) → X(T^2)in terms of the coordinates (<ref>), by the formulaσ(α,β) = (-α, 2α + β) = (α, 2π-(2α+β)).See Figure <ref>.It is straightforward to check that this is well-defined, that σ^2 = Id and thus σ is a homeomorphism, and thatσ(0,β)= (0,β),σ(π,β)= (π,β)for all β.Our goal in this subsection is to understand the image under σ of the image I_K = i^*(X(E_K)) in the pillowcase, where K ⊂ Y is a knot as in the statement of Theorem <ref>.Indeed, the representation promised by Theorem <ref> will eventually come from finding a point in the pillowcase where one image I_K_1 intersects another skewed image σ(I_K_2).Define an involution of the pillowcase X(T^2) byτ(α,β) = (π-α, 2π-β)in either of the coordinates (<ref>) or (<ref>), as shown in Figure <ref>.If K ⊂ Y is a nullhomologous knot in an arbitrary 3-manifold, then the pillowcase imageI_K := i^*(X(E_K)) ⊂ X(T^2)is invariant (as a set) under τ, meaning that τ(I_K) = I_K, and so is the image σ(I_K).We work with (<ref>) for convenience.Take a point (α,β) ∈ I_K, which means that there is some representation ρ: π_1(E_K) →(2) such thatρ(μ)= [e^iα 0; 0 e^-iα ],ρ(λ)= [e^iβ 0; 0 e^-iβ ].We take a central characterχ: π_1(E_K) ↠ H_1(E_K) ≅ H_1(Y) ⊕→{±1},defined by sending H_1(Y) to +1 and the meridian μ (which generates 
thesummand) to -1, and then we get a new representationρ̃= χ·ρ: π_1(E_K) →(2)such that i^*([ρ̃]) = (α+π,β).In X(T^2) we can identify(α+π,β) ∼ (-α-π, -β) = (π-α, 2π - β) = τ(α,β),so τ(α,β) also lies in the image I_K.This proves that I_K is τ-invariant.We now claim that σ∘τ = τ∘σ, which we can check directly by computingσ(τ(α,β))= σ(π-α,2π-β) = (π-α, 2π - (2(π-α) + (2π-β))) = (π-α, -2π + 2α + β) = τ(α, 2π-(2α-β)) = τ(σ(α,β)).But then we apply this to the τ-invariant set I_K to getσ(I_K) = σ(τ(I_K)) = τ(σ(I_K)),so σ(I_K) is τ-invariant as well. We can now use the involution τ to study the skewed image σ(I_K) of the character variety of K.Let K ⊂ Y be a nullhomologous knot in a rational homology sphere with Y_2(K) ≅#^n ^3 for some n ≥ 0, and suppose that K has irreducible complement and that (Y,K) ≇(S^3,U).Then exactly one of the following must be true: * The image σ(I_K) ⊂ X(T^2) contains the entire line L_π = {β≡π2π}.* The image σ(I_K) ⊂ X(T^2) avoids the points P=(0,π) and Q=(π,π), and contains a homologically essential simple closed curveC̃⊂ X(T^2) ∖{P,Q}such that C̃ is disjoint from the line L_0 = {β∈ 2π}.Moreover, the intersection σ(I_K) ∩ L_π is connected.Proposition <ref> tells us that either I_K contains the line L'_π = {2α+β≡π2π}, or it avoids P and Q and contains a homologically essential simple closed curveC ⊂ I_K ⊂ X(T^2) ∖{P,Q}that is disjoint from the line L'_0 = {2α+β≡ 0 2π}.In the first case, σ(I_K) contains the line σ(L'_π) = {β≡π2π} = L_π.In the second case, we note that σ fixes both P and Q, hence restricts to a homeomorphismX(T^2) ∖{P,Q} X(T^2) ∖{P,Q}.But then σ(C) avoids P and Q, just as C does, and it remains homologically essential in their complement.We take C̃ = σ(C), and note that C̃ is disjoint from the line σ(L'_0), which is precisely {β∈2π} = L_0.Finally, in either case Proposition <ref> tells us that the intersection I_K ∩ L'_π is connected, so the same is true of its image under σ, which is σ(I_K) ∩σ(L'_π) = σ(I_K) ∩ L_π.§.§ Intersections of pillowcase images We are now ready to prove the following proposition, which will imply Theorem <ref>, as discussed at the beginning of the previous subsection.Let Y_1 and Y_2 be rational homology spheres, and let K_1 ⊂ Y_1 and K_2 ⊂ Y_2 be nullhomologous knots with irreducible exteriors such that (Y_ℓ)_2(K_ℓ) ≅#^n_ℓ^3 for ℓ=1,2, where n_1,n_2 ≥ 1.Suppose that neither pair (Y_ℓ,K_ℓ) is homeomorphic to (S^3,U).Then the subsetsI_K_1, σ(I_K_2) ⊂ X(T^2)intersect at some point (α,β), where neither 2α+β nor β is an integer multiple of 2π. 
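Before splitting the proof into cases, we pause to sanity-check the elementary identities σ^2 = Id, τ^2 = Id, and σ∘τ = τ∘σ established in the previous subsection, since the signs are easy to get wrong. The following sketch is an illustrative Python snippet rather than part of the argument; the helper same_point, which tests equality of points in the quotient (ℝ/2π)^2 modulo (α,β)∼(-α,-β), and the random sampling loop are our own conveniences.

```python
import math, random

TWO_PI = 2 * math.pi

def circle_eq(x, y, tol=1e-9):
    """Equality in R/2pi."""
    d = (x - y) % TWO_PI
    return min(d, TWO_PI - d) < tol

def same_point(p, q):
    """Equality in the pillowcase X(T^2) = (R/2pi)^2 / (a,b) ~ (-a,-b)."""
    (a1, b1), (a2, b2) = p, q
    return (circle_eq(a1, a2) and circle_eq(b1, b2)) or \
           (circle_eq(a1, -a2) and circle_eq(b1, -b2))

def sigma(pt):                       # sigma(a, b) = (-a, 2a + b)
    a, b = pt
    return (-a, 2 * a + b)

def tau(pt):                         # tau(a, b) = (pi - a, 2pi - b)
    a, b = pt
    return (math.pi - a, TWO_PI - b)

for _ in range(1000):
    a, b = random.uniform(0, TWO_PI), random.uniform(0, TWO_PI)
    p = (a, b)
    assert same_point(sigma((-a, -b)), sigma(p))     # sigma respects (a,b) ~ (-a,-b)
    assert same_point(sigma(sigma(p)), p)            # sigma^2 = Id
    assert same_point(tau(tau(p)), p)                # tau^2 = Id
    assert same_point(sigma(tau(p)), tau(sigma(p)))  # sigma and tau commute

assert same_point(sigma((0.0, 1.0)), (0.0, 1.0))         # sigma fixes {alpha = 0} pointwise
assert same_point(sigma((math.pi, 1.0)), (math.pi, 1.0)) # ... and {alpha = pi}
print("pillowcase symmetry identities verified on sampled points")
```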
We split the proof of Proposition <ref> into two cases, which occupy the following two lemmas.Proposition <ref> holds if at least one of the pillowcase images I_K_1 and I_K_2 contains the point P = (0,π).We note that σ(P) = P, so if P belongs to both I_K_1 and I_K_2 then it also belongs to σ(I_K_2) and hence to I_K_1∩σ(I_K_2); in this case we have 2α+β = β = π∉2π, as desired.Now suppose that P ∈ I_K_1, but that P ∉I_K_2 and hence P = σ(P) ∉σ(I_K_2).Then Proposition <ref> says that I_K_1 contains the entire lineL'_π = {2α+β≡π2π},whose endpoints are at P=(0,π) and Q=(π,π); and Lemma <ref> says that σ(I_K_2) contains a homologically essential simple closed curveC̃_2 ⊂ X(T^2) ∖{P,Q}disjoint from the line {β∈ 2π}.Since X(T^2) ∖{P,Q} is topologically a twice-punctured sphere, with first homology , we can measure the homology class of C̃_2 by counting its intersections with any arc from P to Q.The line L'_π is such an arc, and since C̃_2 is non-zero in homology we conclude that they must intersect.We let (α,β) be any point of the intersection L'_π∩C̃_2; then (α,β) belongs to I_K_1∩σ(I_K_2) by definition, and as a point of L'_π and of C̃_2 it satisfies 2α+β∉2π and β∉2π respectively, as desired.The remaining case, where P∉I_K_1 and P ∈σ(I_K_2), is nearly identical.In this case σ(I_K_2) contains the entire line L_π = {β≡π2π} from P to Q by Lemma <ref>, while Proposition <ref> gives us an essential curve C_1 ⊂ X(T^2)∖{P,Q} in the image I_K_1, with C_1 disjoint from {2α+β∈ 2π}.Since C_1 is essential it must intersect the arc L_π from P to Q, and at any point (α,β) in the intersection we have β∉2π since (α,β) ∈ L_π, and 2α+β∉2π since (α,β) ∈ C_1. Proposition <ref> holds if neither I_K_1 nor I_K_2 contains the point P=(0,π).Letting Q=(π,π), Proposition <ref> tells us that there is a homologically essential, simple closed curveC_1 ⊂ I_K_1⊂ X(T^2) ∖{P,Q}that is disjoint from the line {2α+β∈ 2π}.Similarly, by Lemma <ref> there is an essential curveC̃_2 ⊂σ(I_K_2) ⊂ X(T^2) ∖{P,Q}that is disjoint from the line {β∈ 2π}.We will let τ: X(T^2) → X(T^2) be the involution of Lemma <ref>, which exchanges the points P and Q and fixes the images I_K_1 and σ(I_K_2) setwise.(In particular, we note that τ(C̃_2) ⊂σ(I_K_2) as well.)First, we observe that if the intersection( C_1 ∪τ(C_1) ) ∩( C̃_2 ∪τ(C̃_2) )is nonempty, then any point (α,β) in the intersection will suffice.It must satisfy 2α+β∉2π since it lies on either C_1 or τ(C_1), and then β∉2π since it lies on either C̃_2 or τ(C̃_2).Thus we may assume from now on that the sets( C_1 ∪τ(C_1) ) and( C̃_2 ∪τ(C̃_2) )are disjoint.Since each of the simple closed curves C_1, τ(C_1), C̃_2, and τ(C̃_2) is homologically essential in X(T^2) ∖{P,Q}, they must all separate P = (0,π) from Q = (π,π) and intersect the line segmentL_π = [0,π] ×{π}from P to Q.Let α_0 ∈ [0,π] be the minimal coordinate such that at least one of these four curves passes through (α_0,π), and let γ be the curve in question.We split the remainder of the proof into two cases.Case 1: The curve γ is either C̃_2 or τ(C̃_2). 
Letting D be the disk component of X(T^2) ∖γ that contains P, we claim that in this case C_1 must be disjoint from D, as shown in Figure <ref>.Assuming otherwise, it would be contained in the punctured disk D ∖{P} (since it is disjoint from γ = ∂ D), and it is disjoint from the arc [0,α_0]×{π} from P to ∂ D by assumption, so it would necessarily be nullhomotopic in D ∖{P}.This would mean that C_1 bounds a disk D' ⊂ D that does not contain P, and then D' cannot contain Q either since Q ∉D, so C_1 = ∂ D' would not separate P from Q in X(T^2), a contradiction.Thus we know that the disk D bounded by γ is disjoint from C_1, and by an identical argument it is also disjoint from τ(C_1).Applying the involution τ, we know that the disk τ(D) is bounded by τ(γ) and contains τ(P) = (π,π) = Q.This disk must be disjoint from both C_1 and τ(C_1), since otherwise we could apply τ again to see that either τ(C_1) or τ^2(C_1) = C_1 meets τ^2(D) = D, which we know to be impossible.So now the simple closed curve C_1 is disjoint from both D and τ(D), whose boundaries are C̃_2 and τ(C̃_2) in some order, and it separates P ∈ D from Q ∈τ(D).It follows that D and τ(D) lie in different components of the complement X(T^2) ∖ C_1, and in particular so do their boundaries C̃_2 and τ(C̃_2).Now the curves C̃_2 and τ(C̃_2) both intersect the line segment L_π: by assumption one of them does at x = (α_0,π), and then the other one must meet L_π at τ(x) = (π-α_0,π).Then x and τ(x) lie in different components of X(T^2) ∖ C_1.They also belong to the intersection σ(I_K_2) ∩ L_π, which is connected by Lemma <ref>, so there must be some point(α,β) ∈σ(I_K_2) ∩ L_πthat also lies in C_1.Then (α,β) belongs to I_K_1∩σ(I_K_2), we have β = π∉2π since (α,β) ∈ L_π, and similarly 2α+β∉2π since (α,β) ∈ C_1.Thus (α,β) is our desired point of intersection, and this proves the lemma in the case where γ is either C̃_2 or τ(C̃_2).Case 2: The curve γ is either C_1 or τ(C_1).In this case, an identical argument shows that C_1 and τ(C_1) lie in different components of the complement X(T^2) ∖C̃_2.Now the line segmentL'_π = { (α, π-2α) | 0 ≤α≤π}= { 2α+β≡π2π}in X(T^2) has its endpoints at P and Q, so it must intersect any homologically essential curve in X(T^2) ∖{P,Q}.This includes the curve C_1, so we take a pointx ∈ C_1 ∩ L'_πand note that τ(x) ∈τ(C_1) ∩ L'_π as well, since τ fixes L'_π setwise.Now Proposition <ref> says that the intersection I_K_1∩ L'_π is connected.This intersection contains both x and τ(x), which lie in different components of X(T^2) ∖C̃_2 since they belong to C_1 and τ(C_1) respectively.It follows that I_K_1∩ L'_π must intersect C̃_2 ⊂σ(I_K_2) at some point(α,β) ∈ I_K_1∩ L'_π.We have 2α+β∉2π since (α,β) ∈ L'_π, and β∉2π since (α,β) ∈C̃_2, so (α,β) is the desired point and we are done.One of the following must apply: either P = (0,π) lies in at least one of I_K_1 and I_K_2, or it lies in neither of them.In the first case we apply Lemma <ref>, and in the second case we apply Lemma <ref>.§.§ Constructing a representation We are finally ready to prove Theorem <ref>, using the information provided by Proposition <ref>.We recall the hypotheses of Theorem <ref> here for convenience: we have nullhomologous knots K_1 ⊂ Y_1 and K_2 ⊂ Y_2, whose exteriors are irreducible and not solid tori, and these satisfy(Y_1)_2(K_1)≅#^k ^3, (Y_2)_2(K_2)≅#^ℓ^3.The closed manifold Y = E_K_1∪_∂ E_K_2 is then formed from their exteriors by gluing μ_1 and λ_1 to μ_2^-1 and μ_2^2λ_2, respectively. 
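The interaction between this gluing and the skewing map σ can already be seen at the level of diagonal matrices: if (γ,δ) = σ(α,β) = (-α, 2α+β), then the diagonal representations with pillowcase coordinates (α,β) and (γ,δ) satisfy, on the nose, the two relations imposed by identifying μ_1 with μ_2^{-1} and λ_1 with μ_2^2λ_2; this is exactly the content of the conjugation step in the proof that follows. The sketch below is an illustrative numerical check (Python with NumPy), not part of the proof; the helper D is our own.

```python
import random
import numpy as np

def D(theta):
    """Diagonal unitary matrix diag(e^{i theta}, e^{-i theta})."""
    return np.diag([np.exp(1j * theta), np.exp(-1j * theta)])

for _ in range(200):
    alpha = random.uniform(0, 2 * np.pi)
    beta = random.uniform(0, 2 * np.pi)
    gamma, delta = -alpha, 2 * alpha + beta     # (gamma, delta) = sigma(alpha, beta)

    rho1_mu, rho1_lam = D(alpha), D(beta)       # rho_1 on mu_1, lambda_1
    rho2_mu, rho2_lam = D(gamma), D(delta)      # rho_2 on mu_2, lambda_2

    # The gluing identifies mu_1 with mu_2^{-1} and lambda_1 with mu_2^2 lambda_2.
    assert np.allclose(rho1_mu, np.linalg.inv(rho2_mu))
    assert np.allclose(rho1_lam, rho2_mu @ rho2_mu @ rho2_lam)

print("diagonal representations glue along mu_1 ~ mu_2^{-1}, lambda_1 ~ mu_2^2 lambda_2")
```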
Since the exteriors E_K_1 and E_K_2 are not solid tori, Proposition <ref> tells us that there is a point(α,β) ∈ i^*(X(E_K_1)) ∩σ( i^*(X(E_K_2)) )in the pillowcase X(T^2) such that β∉2π and 2α+β∉2π.Since (α,β) lies in i^*(X(E_K_1)), this means that there is a representationρ_1: π_1(E_K_1) →(2)such thatρ_1(μ_1)= [e^iα 0; 0 e^-iα ],ρ_1(λ_1)= [e^iβ 0; 0 e^-iβ ],and ρ_1 has non-abelian image, since β∉2π implies that ρ_1(λ_1) ≠ 1.Similarly, since σ is an involution of the pillowcase, there is a unique point (γ,δ) ∈ X(T^2) such thatσ(γ,δ) = (α,β).Then since (α,β) lies in σ( i^*(X(E_K_2)) ) we have (γ,δ) ∈ i^*(X(E_K_2)), so there is a representationρ_2: π_1(E_K_2) →(2)such thatρ_2(μ_2)= [e^iγ 0; 0 e^-iγ ],ρ_2(λ_2)= [e^iδ 0; 0 e^-iδ ].We have (γ,δ) = σ(α,β) = (-α, 2α+β) as well, so the condition 2α+β∉2π is equivalent to δ∉2π.This means that ρ_2(λ_2) ≠ 1, so ρ_2 also has non-abelian image.Now from (α,β) = σ(γ,δ) = (-γ, 2γ+δ) we conclude that, up to replacing ρ_2 with a conjugate to replace the coordinates (γ,δ) with the equivalent (-γ,-δ), we haveρ_1(μ_1)= ρ_2(μ_2^-1),ρ_1(λ_1)= ρ_2(μ_2^2λ_2).This says that when we form the closed 3-manifoldY = E_K_1∪_∂ E_K_2by gluing μ_1 to μ_2^-1 and λ_1 to μ_2^2λ_2, the representations ρ_1 and ρ_2 agree on the common torus ∂ E_K_1∼∂ E_K_2, and so we can glue them together to defineρ: π_1(Y) →(2)whose restrictions to E_K_1 and E_K_2 are the non-abelian representations ρ_1 and ρ_2, as desired. § TOROIDAL MANIFOLDS AS UNIONS OF KNOT COMPLEMENTSIn this section we prepare to prove Theorem <ref> by studying toroidal 3-manifolds of the formY = M_1 ∪_T M_2,where H_1(Y;) is p-torsion for some prime p and each M_i is a compact orientable 3-manifold with boundary T.Our goal is to express the M_i as complements of knots in rational homology spheres, which can be glued together in a standard way so that when p=2, we will be able to find representations of each π_1(M_i) by using the results of the previous sections. §.§ Nullhomologous rational longitudesIn this subsection, we suppose that Y = M_1 ∪_T M_2 has a separating incompressible torus T, and that the rational longitudes of M_1 and M_2 are both nullhomologous.Under these assumptions, we will express M_1 and M_2 as the complements of nullhomologous knots in closed 3-manifolds, with a short list of standard forms for the gluing map ∂ M_1 ≅∂ M_2.Let Y = M_1 ∪_T M_2 be a closed 3-manifold with H_1(Y;) ≅ (/p)^r, for some prime p and integer r ≥ 0, and with separating incompressible torus T.Suppose that the rational longitudes of M_1 and M_2 are both nullhomologous.Then there are closed 3-manifolds Y_1 and Y_2, withH_1(Y_1;)≅ (/p)^k, H_1(Y_2;)≅ (/p)^ℓfor some integers k and ℓ, and nullhomologous knots K_1 ⊂ Y_1 and K_2 ⊂ Y_2 with exteriors M_i ≅ Y_i ∖ N(K_i)(i=1,2),such that one of the following holds. * k+ℓ = r, and the identification ∂ M_1 ≅∂ M_2 sends (μ_1,λ_1) to (λ_2,μ_2).In this case there are degree-1 maps Y → Y_i for each i=1,2.* k+ℓ = r-1, and the identification ∂ M_1 ≅ T ≅∂ M_2 equatesμ_1= aμ_2 + bλ_2,λ_1= pμ_2 + cλ_2in H_1(T), for some integers a,b,c with ac-bp=-1 and 0 ≤ b < c < p.Then there are degree-1 mapsY → (Y_1)_-p/a(K_1) and Y → (Y_2)_p/c(K_2).In particular, when p=2 we must have (a,b,c)=(-1,0,1), so that (μ_1,λ_1) is sent to (-μ_2,2μ_2+λ_2) and there are degree-1 maps Y → (Y_i)_2(K_i) for i=1,2. Each of these degree-1 maps induces a surjection on π_1 with non-trivial kernel. 
To prove Proposition <ref>, we begin by finding candidate pairs (Y_i,K_i) somewhat arbitrarily, and then in the subsequent lemma we will use Dehn surgery to make better choices as needed.Let Y be a closed 3-manifold satisfying H_1(Y;) ≅ (/p)^r for some prime p and integer r ≥ 0, and suppose that Y has an incompressible torus T separating it into Y = M_1 ∪_T M_2.Suppose in addition that the rational longitudes of M_1 and M_2 are both nullhomologous.Then there is a pair of closed 3-manifolds Y_1 and Y_2 withH_1(Y_1;)≅ (/p)^k, H_1(Y_2;)≅ (/p)^ℓ,together with nullhomologous knots K_1 ⊂ Y_1 and K_2 ⊂ Y_2 whose exteriors areM_1≅ Y_1 ∖ N(K_1), M_2≅ Y_2 ∖ N(K_2)respectively, such that if (μ_i,λ_i) are meridian-longitude pairs for each K_i then either * k+ℓ = r, and the gluing ∂ M_1 ≅∂ M_2 identifies μ_1 with λ_2 and λ_1 with μ_2;* or k+ℓ=r-1, and in ∂ M_1 ≅∂ M_2 we have λ_1 = pμ_2+cλ_2 for some c∈ that is not a multiple of p. Let j_i: T ↪ M_i denote inclusion, and let λ_1 and λ_2 be the rational longitudes of M_1 and M_2, so that the classes (j_1)_*(λ_1) and (j_2)_*(λ_2) are both zero by assumption.We let M_1(λ_2) denote the Dehn filling of M_1 along the slope λ_2.Since λ_2 is nullhomologous in M_2, Proposition <ref> provides a degree-1 pinching map Y → M_1(λ_2) that collapses M_2 to a solid torus.Degree-1 maps are surjective on fundamental groups and hence on first homology, so there is a surjectionH_1(Y;) ↠ H_1(M_1(λ_2);)and hence H_1(M_1(λ_2);) is a quotient of H_1(Y;) ≅ (/p)^r.This means that H_1(M_1(λ_2);) ≅ (/p)^s for some s ≤ r.We now pick a curve μ_1 ⊂ T such that {λ_1, μ_1} is an integral basis of H_1(T); if {λ_1,λ_2} is an integral basis then we will insist that μ_1 = λ_2.We then letY_1 = M_1(μ_1),and we take K_1 ⊂ Y_1 to be the core of this Dehn filling.Then K_1 is nullhomologous, with meridian μ_1 and longitude λ_1.The manifold M_1(λ_2) can be built by Dehn surgery on K_1, say with some slope a/b, and thenH_1(Y_1;) ⊕ (/a) ≅ H_1(M_1(λ_2);) ≅ (/p)^simplies that |a| divides p.The homology H_1(Y_1;) must then have the form (/p)^k, where k is either s or s-1 depending on whether |a| is 1 or p, respectively.By the same argument we can pick a Dehn filling Y_2 = M_2(μ_2), with nullhomologous core K_2, so that H_1(Y_2;) ≅ (/p)^ℓ.Again, if {λ_1,λ_2} is an integral basis of H_1(T) then we will take μ_2 = λ_1; in this case we have μ_1 = λ_2 as well, which is consistent with the fact that the identification ∂ M_1 ≅∂ M_2 is orientation-reversing.We now compute that H_2(Y;) = 0, so the Mayer-Vietoris sequence for Y = M_1 ∪_T M_2 consists in part of a short exact sequence0 →H_1(T)_≅^2H_1(M_1)_≅⊕ (/p)^k⊕H_1(M_2)_≅⊕(/p)^ℓ→H_1(Y)_≅ (/p)^r→ 0.The image of (j_1)_* is spanned by (j_1)_*(μ_1) = (1,0) ∈⊕ (/p)^k and by (j_1)_*(λ_1) = (0,0), so it is precisely thesummand of H_1(M_1), and likewise for the image of (j_2)_*.Thus H_1(Y) ≅ (/p)^r is homeomorphic to (/p)^k+ℓ plus the cokernel of the injective mapH_1(T) ⟨μ_1⟩⊕⟨μ_2⟩,whose codomain is viewed as a subset of H_1(M_1) ⊕ H_1(M_2).We observe that (j) is cyclic, since the image of j contains j(μ_1) = (1,a) for some a∈.We also know that (j) is p-torsion, as a summand of H_1(Y) ≅ (/p)^r.Thus either * (j) ≅/p and k+ℓ = r-1, or* j is onto and k+ℓ = r.If j is onto then it sends some element aμ_1+bλ_1 to (0,1); we must have a=0, since (j_1)_*(λ_1)=0 in H_1(M_1), and then bλ_1 ↦ (0,1) implies that b=±1.This means that λ_1 is homologous to ±μ_2 as elements of H_1(M_2), hence λ_1 = ±(μ_2 + nλ_2) in H_1(T).But then in H_1(T) we have{λ_1,λ_2} = {μ_2+nλ_2,λ_2}= 
{μ_2,λ_2}= H_1(T),so we must have taken μ_1 = λ_2 and λ_1 = μ_2, as claimed.In the remaining case, where (j) ≅/p and k+ℓ = r-1, we know thatj(μ_1)= (1,a), j(λ_1)= (0,b)for some a,b ∈, and since {μ_1,λ_1} is an integral basis of H_1(T) we have |b| = |(j)| = p.Up to changing the orientation of K_2, and hence the sign of μ_2, we can take b = p.This means that as classes in H_1(T) we haveλ_1 = pμ_2 + cλ_2for some c ∈, and since λ_1 is primitive we must have p ∤ c. In what follows we will always let μ_1,λ_1 and μ_2,λ_2 be meridian–longitude pairs for K_1 and K_2 respectively.We will suppose that they are identified as classes in H_1(T) by the relationsμ_2= aμ_1 + bλ_1,λ_2= cμ_1 + dλ_1.Since both (μ_1,λ_1) and (μ_2,λ_2) are integral bases of H_1(T) ≅^2, and the gluing map ∂ M_1 →∂ M_2 reverses orientation, we must have ad-bc=-1, and then we can rewrite these relations asμ_1= -dμ_2 + bλ_2,λ_1= cμ_2 - aλ_2. We now study the case of Lemma <ref> where k+ℓ = r-1.Suppose that we have Y = M_1 ∪_T M_2 as in Lemma <ref>, with H_1(Y;) ≅ (/p)^r, and that the resulting K_1 ⊂ Y_1 and K_2 ⊂ Y_2 satisfyH_1(Y_1;)≅ (/p)^k, H_1(Y_2;)≅ (/p)^ℓwhere k+ℓ=r-1.Then we can arrange the gluing map to identifyμ_1= aμ_2 + bλ_2λ_1= pμ_2 + cλ_2as elements of H_1(T;), where the coefficients satisfy ac-bp=-1 and 0 ≤ b < c < p.In particular, when p=2 we have (μ_1,λ_1) = (-μ_2,2μ_2+λ_2), or equivalently (μ_2,λ_2) = (-μ_1,2μ_1+λ_1).We already know from Lemma <ref> that λ_1 = pμ_2 + cλ_2 for some integer c≢0p.We write c=qp+r, with 0<r<p, and then let Y'_2 be the result of (1/q)-surgery on K_2 ⊂ Y_2, with K'_2 ⊂ Y'_2 the core of this surgery.Then Y'_2 has the same homology as Y_2, and K'_2 is still nullhomologous, with longitude λ'_2 = λ_2 and meridianμ'_2 = μ_2 + qλ_2.In terms of the peripheral curves for K'_2, we haveλ_1= pμ_2 + (pq+r)λ_2 = p(μ_2+qλ_2) + rλ_2 = pμ'_2 + rλ'_2.We thus replace (Y_2,K_2) with (Y'_2,K'_2), so now we have λ_1 = pμ_2+c'λ_2 where c'=r is strictly between 0 and p.Having arranged that 0<c<p as above, we have an identificationμ_1= aμ_2 + bλ_2λ_1= pμ_2 + cλ_2for some integers a and b.Since the gluing map ∂ M_1 ≅∂ M_2 is an orientation-reversing homeomorphism, we have ac-bp = -1.We now write b = nc + s, where n∈ and 0 ≤ s < c, and let Y'_1 be the result of (-1/n)-surgery on K_1 ⊂ Y_1, with core K'_1.Then K'_1 has meridian μ'_1 = μ_1 - nλ_1 and longitude λ'_1, so that we identify λ'_1 = λ_1 = pμ_2 + cμ_2 andμ'_1= (aμ_2 + bλ_2) - n(pμ_2 + cλ_2) = (a-np)μ_2 + (b-nc)λ_2 = a'μ_2 + b'λ_2with a' = a-np and b' = b-nc = s.We can thus arrange that the coefficient b'=s satisfies 0 ≤ b' < c, as desired.We now have arranged for the coefficients of (<ref>) to satisfy 0 < c < p and 0 ≤ b < c, as desired.The relation ac-bp = -1 is an immediate consequence of the fact that the identification ∂ M_1 ≅∂ M_2 is an orientation-reversing homeomorphism.And finally, in the case p=2 these relations imply one after the other that c=1, b=0, and a=-1, as claimed. Putting the above lemmas together now allows us to prove Proposition <ref>. 
Lemma <ref> combines with Lemma <ref> (in the case k+ℓ=r-1) to show that we can find K_1 ⊂ Y_1 and K_2 ⊂ Y_2 as claimed.Since λ_2 is nullhomologous in M_2 = Y_2 ∖ N(K_2), there is a degree-1 maph: Y → M_1(λ_2),built from Proposition <ref> by preserving M_1 but pinching M_2 to a solid torus in which μ_1 ⊂ T bounds a disk.The same argument, with the roles of M_1 and M_2 switched, gives a degree-1 map Y → M_2(λ_1).Now in the case k+ℓ=r we haveM_1(λ_2)= M_1(μ_1) ≅ Y_1, M_2(λ_1)= M_2(μ_2) ≅ Y_2,so Y admits degree-1 maps onto both Y_1 and Y_2.Otherwise, we have k+ℓ=r-1, with identifications(μ_1,λ_1) = (aμ_2+bλ_2, pμ_2 + cλ_2)in H_1(T) where ac-bp=-1, implying thatpμ_1 - aλ_1 = (pb-ac)λ_2 = λ_2.Then we can fill each of M_1 and M_2 along the rational longitudes λ_2 and λ_1 to getM_1(λ_2)= M_1(pμ_1-aλ_1) ≅ (Y_1)_-p/a(K_1), M_2(λ_1)= M_2(pμ_2+cλ_2) ≅ (Y_2)_p/c(K_2),so there are degree-1 maps from Y onto each of these.In both cases, each of the maps from Y induce surjections on π_1, because they have degree 1; to see that these surjections have non-trivial kernel, we note that π_1(T) injects into π_1(Y), and yet either λ_2 or λ_1 (whichever one is filled to produce the respective maps) is a homotopically essential curve in T, and hence in Y, that bounds a disk in the quotient. In Lemma <ref>, and hence in Proposition <ref>, we can replace the condition 0 ≤ b < c < p with 0 ≤ b < c ≤p/2 if we are allowed to possibly reverse the orientation of Y.Indeed, in the proof of the lemma we wrote c=qp+r and replaced c with r by performing 1/q-surgery on K_2.We chose the remainder r to satisfy 0 < r < p, but suppose that we arrange for -p/2 < r ≤p/2 instead.In this case, if c'=r is negative then we can replace Y with -Y before continuing: we reverse the orientations of Y_1 and Y_2, and also reverse the string orientation of K_1 but not that of K_2.This reverses the peripheral curves λ_1 and μ_2, but not μ_1 or λ_2, and so the relationλ_1 = pμ_2 + c' λ_2,-p2 < c' < 0becomes λ_1 = pμ_2 + (-c') λ_2, where the coefficient -c' is now positive and less than p/2.We then replace c with -c' and follow the rest of the proof as written to achieve 0 ≤ b < c ≤p/2.§.§ The homologically essential caseIn this subsection, we consider decompositions of the form Y=M_1∪_T M_2 where at least one of the rational longitudes of the M_i is homologically essential.Our goal is to prove the following.Let Y = M_1 ∪_T M_2 be a toroidal manifold, where T is an incompressible torus, and suppose that H_1(Y;) ≅ (/p)^r for some prime p and integer r ≥ 0.Let λ_1,λ_2 ⊂ T be the rational longitudes of M_1 and M_2, respectively, and suppose that λ_1 is not nullhomologous in M_1.Then there are closed 3-manifolds Y_1 and Y_2, withH_1(Y_1;)≅ (/p)^k, H_1(Y_2;)≅ (/p)^ℓwhere k ≥ 2 and k+ℓ=r, and knots K_1 ⊂ Y_1 and K_2 ⊂ Y_2 satisfying the following. * The knot K_1 is homologically essential, of order p, while K_2 is nullhomologous.* M_1 and M_2 are the exteriors of K_1 and K_2, i.e., M_i ≅ Y_i ∖ N(K_i).* The identification ∂ M_1 ≅ T ≅∂ M_2 sends μ_1 to λ_2 and λ_2 to μ_1.* There is a degree-1 map Y → Y_1, inducing a surjection π_1(Y) →π_1(Y_1) with non-trivial kernel.We begin with some lemmas allowing us to find nice bases for H_1(M_1) and H_1(M_2), and to express the rational longitudes for each in these bases.Let Y = M_1 ∪_T M_2 be a toroidal manifold, with H_1(Y;) ≅ (/p)^r for some prime p and integer r ≥ 0.Let λ_1, λ_2 ⊂ T be the rational longitudes of M_1 and M_2, respectively.If some λ_j is not nullhomologous in M_j, then the following are true. 
* Exactly one of the λ_j is not nullhomologous, and it satisfies p· (i_j)_*(λ_j) = 0 in H_1(M_j;).* The curves λ_1 and λ_2 form an integral basis of H_1(T;) ≅^2.Here the maps i_j: T ↪ M_j are the respective inclusions of ∂ M_j into M_j.We examine the Mayer–Vietoris sequence for Y = M_1 ∪_T M_2, which reads in part0 →H_1(T)_≅^2 H_1(M_1) ⊕ H_1(M_2) H_1(Y)_≅(/p)^r→ 0since H_2(Y;) ≅ 0.Each of the H_1(M_j) has rank at least 1 by half lives half dies, and this sequence shows that their total rank is 2, so we can writeH_1(M_j;) ≅⊕ A_j (j=1,2)where each A_j is torsion.Moreover, the image of the map i is torsion-free, so q sends A_1⊕ A_2 injectively into (/p)^r, hence we can write A_j = (/p)^n_j for each j, and we have n_1+n_2 ≤ r.Suppose without loss of generality that (i_1)_*(λ_1) is non-zero.Since it is torsion it lies in A_1, hence p· (i_1)_*(λ_1) = 0 as claimed.In fact, it generates a /p summand of the (/p)-vector space A_1 = (/p)^n_1, so n_1 ≥ 1 and we can writeH_1(M_1;) ≅ (/p) ⊕ (/p)^n_1-1⊕with the first summand generated by the rational longitude, i.e.,(i_1)_*(λ_1) = (1,0,0).We pick a class μ_1 ⊂ H_1(T;) that is dual to λ_1, meaning they form an integral basis of H_1(T;), and write (i_1)_*(μ_1) = (x,y,n) in these coordinates.If x≠ 0 then we can replace μ_1 with μ_1-xλ_1 in order to arrange that x=0.Moreover, applying half lives half dies over =/p, we see that_/p( (i_1)_*(λ_1), (i_1)_*(μ_1) ) = _/p( (1,0,0), (0,y, n mod p) }is 1-dimensional, so y is zero and n is a multiple of p.And then over =/ℓ, where ℓ≠ p is prime, we have H_1(M_1;) ≅ and_/ℓ( (i_1)_*(λ_1), (i_1)_*(μ_1) ) = _/ℓ( 0, n mod ℓ),which can only be 1-dimensional if n is not a multiple of ℓ.Thus up to changing the sign of thesummand, we can write n = p^e for some integer e ≥ 1.We now consider the pairz = ( (0,0,p^e-1), 0 ) ∈ H_1(M_1) ⊕ H_1(M_2).This cannot lie in the image of i: the element (0,0,p^e-1) is not in the image of (i_1)_*, since it does not belong to the span of (i_1)_*(λ_1)=(1,0,0) and (i_2)_*(μ_1)=(0,0,p^e).Thus q(z) is nonzero by exactness.Since q(z) ≠ 0 lies in H_1(Y) ≅ (/p)^r, it cannot be a multiple of p, so neither can z and we must therefore have e=1.Moreover, we know that q(pz) = p· q(z) = 0, so the pairpz = ( (0,0,p^e), 0) = ( (i_1)_*(μ_1), 0 )lies in the image of i.In other words, there is a class α∈ H_1(T;) such that(i_1)_*(α)= (i_1)_*(μ_1), (i_2)_*(α)= 0.The first relation implies that α = μ_1 + pkλ_1 for some k∈, so α and λ_1 are also dual classes.This means that α is primitive, so the second relation now says that α is a rational longitude for M_2, hence α = ±λ_2.In particular λ_2 is nullhomologous in M_2, and the elements {λ_1, λ_2} = {λ_1, ±(μ_1+pkλ_1)} form an integral basis for H_1(T;), as claimed. Under the hypotheses of Lemma <ref>, suppose that the rational longitude λ_1 is not nullhomologous.Then we can writeH_1(M_1;)≅ (/p)^n_1⊕, H_1(M_2;)≅ (/p)^n_2⊕,with n_1 ≥ 1, such that the integral basis λ_1,λ_2 of H_1(T) satisfies(i_1)_*(λ_1)= ((1,0,…,0), 0), (i_2)_*(λ_1)= (0,1), (i_1)_*(λ_2)= ((0,0,…,0), p), (i_2)_*(λ_2)= (0,0)in these coordinates.If H_1(Y) ≅ (/p)^r then we also have n_1+n_2 = r-1, and Dehn filling either of the M_i along the other rational longitude gives usH_1(M_1(λ_2);)≅ (/p)^n_1+1, H_1(M_2(λ_1);)≅ (/p)^n_2. 
We recall from the proof of Lemma <ref> that we can take coordinatesH_1(M_1) ≅ (/p) ⊕ (/p)^n_1-1⊕such that n_1 ≥ 1 and (i_1)_*(λ_1)= (1,0,0) ∈ H_1(M_1), (i_1)_*(λ_2)= (0,0,p) ∈ H_1(M_1)up to changing the sign of thesummand.We also know that we can write H_1(M_2) ≅ (/p)^n_2⊕,and that the rational longitude λ_2 is nullhomologous in M_2, so that(i_2)_*(λ_2) = (0,0) ∈ H_1(M_2)in these coordinates.Thus by half lives half dies overthe element (i_2)_*(λ_1) must be non-torsion.If we write(i_2)_*(λ_1) = (w,m) ∈ H_1(M_2)then m must therefore be nonzero; we choose a generator of thesummand so that m > 0.If some prime ℓ≠ p divides m, then we take c≡ℓ^-1p and we see that (w,m) = ℓ· (cw,m/ℓ) is ℓ times an integral class, so (i_2)_* is zero over =/ℓ, contradicting half lives half dies.Thus m=p^f for some integer f ≥ 0.Returning to the Mayer–Vietoris sequence (<ref>), we know that(/p)^r ≅(i) ≅H_1(M_1) ⊕ H_1(M_2)/⟨ i_*(λ_1), i_*(λ_2) ⟩.We can define a surjection (i) →/p^f+1 in the coordinatesH_1(M_1) ⊕ H_1(M_2) ≅( (/p) ⊕ (/p)^n_1-1⊕) ⊕( (/p)^n_2⊕)by sending ( (a,v_1,m_1),(v_2,m_2) ) ↦ m_2 - p^f a p^f+1; this is well-defined, since a∈/p defines a unique residue class p^f a ∈/p^f+1, and since we havei_*(λ_1) = ((1,0,0),(w,p^f))↦ p^f - p^f · 1 ≡ 0 i_*(λ_2) = ((0,0,p),(0,0))↦ 0.But (i) ≅ (/p)^r can only surject onto /p^f+1 if f=0.Thus (i_2)_*(λ_1) = (w,1) for some w ∈ (/p)^n_2, and by a change of basis we can take this element rather than (0,1) to be the generator of thesummand of H_1(M_2), so that (i_2)_*(λ_1) = (0,1).We have now found coordinates on each H_1(M_j;) so that the images (i_j)_*(λ_1) and (i_j)_*(λ_2) have the desired form.The computations ofH_1(M_1(λ_2);)≅H_1(M_1)/⟨ (i_1)_*(λ_2) ⟩, H_1(M_2(λ_1);)≅H_1(M_2)/⟨ (i_2)_*(λ_1) ⟩follow immediately.Moreover, in these coordinates we have(i) ≅( (/p) ⊕ (/p)^n_1-1⊕) ⊕( (/p)^n_2⊕)/⟨((1,0,0),(0,1)), ((0,0,p),(0,0)) ⟩,which is readily checked to be isomorphic to (/p)^n_1+n_2+1.But (i) ≅ (/p)^r as well, so we conclude that r = n_1+n_2+1. We now complete the proof of Proposition <ref>. Lemma <ref> says that the rational longitudes λ_1 and λ_2 form a basis of H_1(T), and that [λ_1] ∈ H_1(M_1) has order p while λ_2 is nullhomologous in H_1(M_2).We thus define the Y_i by Dehn fillings along these curves:Y_1= M_1(λ_2), Y_2= M_2(λ_1)and we take K_1 ⊂ Y_1 and K_2 ⊂ Y_2 to be the cores of these Dehn fillings.It follows that their respective meridians are μ_1 = λ_2 and μ_2 = λ_1, which are dual to their rational longitudes λ_1 and λ_2 respectively, so then [K_1] ∈ H_1(Y_1) has order p while [K_2] = 0 in H_1(Y_2).Lemma <ref> says that these Y_i have homology of the formH_1(Y_1)= (/p)^k, H_1(Y_2)= (/p)^ℓwhere k=n_1+1 and ℓ = n_2 in the notation of that lemma, and that k+ℓ = (n_1+1)+n_2 = r.Since n_1 ≥ 1 we have also k ≥ 2, as claimed.Finally, since λ_2 is nullhomologous in M_2, Proposition <ref> gives us a degree-1 pinching mapY → M_1(λ_2) ≅ Y_1in which M_2 is sent onto a solid torus.The curve λ_2 lies in the image of the map π_1(T) →π_1(Y), which is injective since T is incompressible; thus λ_2 is a nontrivial element of π_1(Y), but it lies in the kernel of the homomorphism π_1(Y) →π_1(Y_1) induced by the above pinching map, so that homomorphism has nontrivial kernel.§ PROOF OF THEOREM <REF>We will now use Proposition <ref> to prove Theorem <ref>, which we restate here for convenience.Let Y be a closed, orientable 3-manifold with H_1(Y;) ≅ (/2)^r for some r ≥ 0.If Y is not homeomorphic to #^r ^3, then there is an irreducible representation π_1(Y) →(2,). 
We first verify Theorem <ref> in the atoroidal case before going on to prove it in general.Suppose that Y is a closed, atoroidal 3-manifold, with H_1(Y;) ≅ (/p)^r for some prime p and some integer r ≥ 0.If Y is (2,)-reducible, then it must be either #^r ^3 or a lens space of order p ≥ 3.If Y is a connected sum then each of its summands must also be (2,)-reducible with first homology (/p)^r' for some r' ≤ r, so we will assume for now that Y is prime.Then Y is both prime and atoroidal, so by geometrization it must be either Seifert fibered or hyperbolic.If Y is hyperbolic then it has a holonomy representation π_1(Y) ↪(2,), and this always lifts to (2,) <cit.>, so Y cannot be (2,)-reducible.This leaves only the Seifert fibered case.Among Seifert fibered manifolds, we know from <cit.> that the only rational homology spheres that are (2)-abelian are * S^3 and lens spaces, * ^3 #^3, * those with base orbifold S^2(3,3,3) and with |H_1(Y)| even, * and those with base orbifold S^2(2,4,4). In case (<ref>), the only Y such that H_1(Y) is p-torsion are S^3 and lens spaces of order p; and we can ignore case (<ref>) since it is not prime.For cases (<ref>) and (<ref>), we note that given a Seifert fibrationY ≅ S^2((α_1,β_1),(α_2,β_2),(α_3,β_3)),we then haveH_1(Y) = [ α_1 0 0 β_1; 0 α_2 0 β_2; 0 0 α_3 β_3; 1 1 1 0 ]and in particular|H_1(Y)| = |α_1α_2β_3 + α_1β_2α_3 + β_1α_2α_3|.(See <cit.>.)This quickly rules out case (<ref>), since if (α_1,α_2,α_3) = (3,3,3) then |H_1(Y)| is always a multiple of 18 (recalling that it must be even), hence not a prime power.And for case (<ref>), where (α_1,α_2,α_3) = (2,4,4), we let x,y,z,w be the generators specified by the presentation (<ref>), and we define a surjectionH_1(Y)↠/4x↦ 2 y,z↦ 1 w↦ 0.Since H_1(Y) surjects onto /4, it cannot possibly have the form (/p)^r with p prime.We conclude that the only prime examples are S^3 and lens spaces of order p.Finally, we note that if p=2 then every prime summand of Y is ^3, so Y ≅#^r ^3 as claimed.If instead p ≥ 3, then each summand is a lens space of order p; but then there cannot be more than one summand, or else Y would not be (2,)-reducible by exactly the same construction as in Proposition <ref>, so we have r ≤ 1 and Y is prime after all.We will suppose in what follows that H_1(Y;) ≅ (/2)^r for some r≥ 0, but that Y ≇#^r ^3 is (2,)-reducible.We will also assume that Y is prime: otherwise, by assumption there must be a prime summand Y' ≇^3, and then Y' is also (2,)-reducible with 2-torsion homology, so we might as well replace Y with Y'.Lemma <ref> says that if Y is atoroidal then Y ≅#^r ^3, so we may also assume that Y contains an incompressible torus.Since Y is prime and contains an incompressible torus T, we can writeY = M_1 ∪_T M_2where each M_i is irreducible and has incompressible boundary.(The torus T must separate Y because Y is a rational homology sphere.) 
We split the argument into three cases, depending on the rational longitudes λ_i of the M_i: in the first two we suppose that the λ_i are nullhomologous, so one of the conclusions of Proposition <ref> applies, and we number these cases according to the relevant conclusion of that proposition.In the remaining case, at least one of the λ_i is essential, so Proposition <ref> applies instead.Propositions <ref> and <ref> each give us closed manifolds Y_i and knots K_i ⊂ Y_i whose exteriors are the M_i, so we will refer freely to these pairs (Y_i,K_i) in the discussion below.Case <ref>.In this case the M_i are complements of non-trivial, nullhomologous knots that have been spliced together by gluing meridians to longitudes and vice versa.We apply Theorem <ref> to get an irreducible representation ρ: π_1(Y) →(2), hence if Y is (2,)-reducible then this case cannot occur.Case <ref>.In this case we have degree-1 maps Y → (Y_1)_2(K_1) and Y → (Y_2)_2(K_2), withH_1((Y_1)_2(K_1))≅ (/2)^k+1, H_1((Y_2)_2(K_2))≅ (/2)^ℓ+1and k+ℓ = r-1.The knots K_i ⊂ Y_i are nullhomologous, and the degree-1 maps Y → (Y_i)_2(K_i) for i=1,2 tell us that each of the (Y_i)_2(K_i) must be (2,)-reducible as well.If (Y_1)_2(K_1) ≅#^k+1^3 and (Y_2)_2(K_2) ≅#^ℓ+1^3, then Theorem <ref> tells us that Y cannot be (2,)-reducible or even (2)-abelian, a contradiction.Thus without loss of generality we must have (Y_1)_2(K_1) ≇#^k+1^3.In particular (Y_1)_2(K_1) is (2,)-reducible with first homology (/2)^k+1, but it is not homeomorphic to #^k+1^3.We let Y' be a prime summand of (Y_1)_2(K_1) different from ^3 (which may be (Y_1)_2(K_1) itself), and then by collapsing the other prime summands to S^3 we have a degree-1 mapY → (Y_1)_2(K_1) → Y'.Here Y' is prime by construction, it is (2,)-reducible since (Y_1)_2(K_1) is, and H_1(Y') is 2-torsion since it is a summand of H_1( (Y_1)_2(K_1) ) ≅ (/2)^k+1.Case 3. In this case one of the λ_i is essential, so we suppose without loss of generality that λ_1 is nonzero in H_1(M_1;).Then Proposition <ref> applies: we haveH_1(Y_1)≅ (/2)^k, H_1(Y_2)≅ (/2)^ℓwhere k ≥ 2 and k+ℓ = r; the knot K_1 is homologically essential in Y_1, with rational longitude of order 2, while K_2 ⊂ Y_2 is nullhomologous; and the gluing of ∂ M_1 to ∂ M_2 identifies μ_1 ∼λ_2 and λ_1 ∼μ_2.We now apply Theorem <ref> to see that Y cannot be (2)-abelian, a contradiction.Thus if Y is (2,)-reducible then this case does not occur.In each of the three cases above, we have found either a contradiction or a degree-1 map of the form f:Y → Y', where Y' ≇^3 is prime and (2,)-reducible and H_1(Y') is 2-torsion, and the mapf_*: π_1(Y) →π_1(Y')is a surjection with non-trivial kernel.We can thus replace Y with Y' and repeat.This process produces an infinite sequence of closed, prime 3-manifolds and degree-1 mapsY = Y_1Y_2Y_3 ⋯,in which none of the f_i are homotopy equivalences because the maps (f_i)_*: π_1(Y) →π_1(Y_i+1) are not injective.But Theorem <ref> says that such a sequence cannot exist, so we conclude that our original manifold Y ≇#^r^3 could not have been (2,)-reducible after all. 
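As a closing sanity check on the Seifert fibered arithmetic used in the atoroidal lemma earlier in this section, both the divisibility-by-18 claim for base orbifold S^2(3,3,3) and the ℤ/4 surjection in the S^2(2,4,4) case can be confirmed by brute force. The sketch below is an illustrative Python loop, not part of the proof; it assumes the homology order formula and the relation matrix exactly as displayed in that lemma, with generators ordered (x,y,z,w).

```python
import itertools

def seifert_order(alphas, betas):
    """|H_1(Y)| = |a1*a2*b3 + a1*b2*a3 + b1*a2*a3| for a Seifert fibration
    over S^2 with three exceptional fibers (alphas, betas)."""
    (a1, a2, a3), (b1, b2, b3) = alphas, betas
    return abs(a1 * a2 * b3 + a1 * b2 * a3 + b1 * a2 * a3)

for betas in itertools.product(range(-10, 11), repeat=3):
    # Base S^2(3,3,3): whenever the order is even it is divisible by 18,
    # hence never of the form p^r for a single prime p.
    n = seifert_order((3, 3, 3), betas)
    if n % 2 == 0:
        assert n % 18 == 0

    # Base S^2(2,4,4): the assignment (x, y, z, w) -> (2, 1, 1, 0) kills every
    # row of the relation matrix mod 4, so H_1(Y) surjects onto Z/4.
    b1, b2, b3 = betas
    rows = [(2, 0, 0, b1), (0, 4, 0, b2), (0, 0, 4, b3), (1, 1, 1, 0)]
    v = (2, 1, 1, 0)
    for row in rows:
        assert sum(r * x for r, x in zip(row, v)) % 4 == 0

print("Seifert fibered arithmetic checks pass")
```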
§ FROM /P TO P-TORSION HOMOLOGYIn this section we consider (2,)-reducible 3-manifolds whose first homology is p-torsion for some odd prime p.Our goal is to show the following, which in favorable situations reduces their classification to the case where the homology is in fact cyclic.Let p ≥ 3 be an odd prime with the property that every closed, (2,)-reducible 3-manifold with first homology /p is a lens space.If Y is a closed, (2,)-reducible 3-manifold with H_1(Y;) ≅ (/p)^r for some r ≥ 1, then r=1 and Y is a lens space. In practice one has to check even less than the stated hypothesis: in Theorem <ref> we will give a stronger, but much less concise, version of this theorem.There are many odd primes p that do not satisfy the hypothesis of Theorem <ref>.Indeed, Motegi <cit.> produced toroidal, (2,)-abelian manifolds Y by gluing together the exteriors of any two torus knots T_a,b and T_c,d, identifying the meridian of one with the Seifert fiber of the other and vice versa; then H_1(Y) is cyclic of order |abcd-1|, which may be prime.For example, taking T_2,3 and T_-2,3 as our torus knots shows that the hypothesis fails for p=37; taking T_2,3 and T_±2,5 rules out p=59 and p=61; and so on. One important difference from the case p=2 is that if p is odd, then (2,)-reducible manifolds with p-torsion homology are always prime, as the following lemma shows.Let p ≥ 3 be an odd prime, and suppose that Y is a closed, (2,)-reducible 3-manifold such that H_1(Y;) is p-torsion.Then Y is prime.Suppose not, and write Y = Y_1 # Y_2, where neither summand is S^3.Then neither Y_1 nor Y_2 can be a homology sphere, since otherwise it would not be (2,)-reducible by Theorem <ref> and so neither would Y.This means that each H_1(Y_i) is p-torsion and non-trivial, so each π_1(Y_i) surjects onto H_1(Y_i) and hence onto /p, and then we have a surjectionπ_1(Y) ≅π_1(Y_1) ∗π_1(Y_2) ↠ (/p) ∗ (/p).Since p ≥ 3, there is a non-abelian homeomorphism(/p) ∗ (/p) →(2)defined by sending generators of each /p factor to the unit quaternions exp(2π/pj) and exp(2π/pk), respectively.Composing (<ref>) and (<ref>) gives an irreducible representation π_1(Y) →(2), so we have a contradiction. Lemma <ref> simplifies some parts of the story, because we no longer have to worry about connected sums of (2,)-reducible manifolds, as we did for #^r ^3 in the 2-torsion case.We already encountered this fact in Lemma <ref>, where we saw that the only atoroidal examples are lens spaces of order p. §.§ Zero-surgery on knots in lens spaces We begin by generalizing Theorem <ref> to nullhomologous knots in arbitrary lens spaces.Let K ⊂ Y be a nullhomologous knot in S^3 or a lens space, and let w ∈ H^2(Y_0(K);) be Poincaré dual to the image in Y_0(K) of a meridian of K.Then I^w_*(Y_0(K)) ≠ 0 if and only if K is not an unknot in Y.The case Y=S^3 is Theorem <ref>, so we can assume that Y is a non-trivial lens space. Since K is nullhomologous in Y, we know that K is in fact nullhomotopic in Y.Hom and Lidman <cit.> proved that since Y is a prime rational homology sphere and K is nullhomotopic, the manifold Y_0(K) contains a non-separating 2-sphere if and only if K is unknotted, and then the proposition follows from Proposition <ref> below. 
We devote the remainder of this subsection to proving Proposition <ref>, which generalizes Theorem <ref> for manifolds Y with first Betti number 1; the key difference is that we do not require Y to be irreducible.Let Y be a closed 3-manifold with b_1(Y)=1, and let w ∈ H^2(Y;) satisfy w · R = 1 for some surface R ⊂ Y.Then I^w_*(Y) = 0 if and only if Y contains a non-separating two-sphere. The proof of Proposition <ref> makes use of several basic properties of framed instanton homology I^#(Y,λ) over a field of characteristic zero, including a connected sum theorem relating it to the usual instanton homology of an admissible bundle; we will refer to <cit.> for all of the needed results.If Y is a rational homology sphere, then I^#(Y,λ) ≠ 0 for any λ.The invariant I^#(Y,λ) is equipped with a /2 grading, and its Euler characteristic with respect to this grading isχ(I^#(Y,λ)) = |H_1(Y;)| > 0according to <cit.>, so we must have I^#(Y,λ) > 0. Let w → Y be an admissible Hermitian line bundle, and λ∈ H_1(Y;) the Poincaré dual of c_1(w).Then I^w_*(Y) = 0 if and only if I^#(Y,λ) = 0.Scaduto <cit.> proved thatI^#(Y,λ) ≅(u^2-64) ⊗ H_*(S^3),where u=4μ() is a degree-4 operator on the /8-graded invariant I^w_*(Y), but only we take the kernel of the action of u^2-64 on four consecutive gradings.The operator u^2-64 is nilpotent <cit.>, so (u^2-64) = 0 if and only if I^w_*(Y) is zero in those gradings; and then u restricts to an isomorphism I^w_*(Y)I^w_*+4(Y), so this is equivalent to I^w_*(Y) = 0 in all gradings.We write the prime decomposition of Y asY ≅ Y_0 # Y_1 #…# Y_k,where Y_0 is the unique summand with b_1(Y_0) = 1 and then the Y_i with i ≥ 1 are all rational homology spheres.If we write the Poincaré dual λ∈ H_1(Y;) of w as λ = λ_0+…+λ_k with λ_i ∈ H_1(Y_i) for all i, then I^# satisfies a Künneth formulaI^#(Y,λ) ≅⊗_i=0^k I^#(Y_i,λ_i);this is explained in <cit.> when the λ_i are all zero, but the same proof works in full generality.By Lemma <ref> we have I^#(Y_i,λ_i) ≠ 0 for all i ≥ 1, so I^#(Y,λ) ≠ 0 if and only if I^#(Y_0,λ_0) ≠ 0.Two applications of Lemma <ref> now tell us that I^w_*(Y) ≠ 0 if and only if I^w_0_*(Y_0) ≠ 0, where w_0 = w|_Y_0 is the Poincaré dual to λ_0.Since Y_0 is prime, either it is S^1× S^2 and then I^w_0_*(Y_0) = 0, or it is irreducible and then I^w_0_*(Y_0) ≠ 0 by Theorem <ref>.Since Y has a non-separating S^2 if and only if one of its prime summands is S^1× S^2, it follows that Y contains such a sphere if and only if I^w_0_*(Y_0) = 0, hence if and only if I^w_*(Y) = 0 as claimed.§.§ Splicing knots in lens spaces This subsection is devoted to proving an analogue of Theorem <ref>, in which the knots can lie in lens spaces rather than in 3-manifolds whose first homology is 2-torsion.Let each of Y_1 and Y_2 be either S^3 or a lens space, and let K_1 ⊂ Y_1 and K_2 ⊂ Y_2 be nullhomologous knots with irreducible, boundary-incompressible exteriors.Form a closed, toroidal 3-manifoldY = E_K_1∪_∂ E_K_2by gluing the meridian μ_1 and longitude λ_1 of K_1 to the longitude λ_2 and meridian μ_2 of K_2, respectively.Then there is a representationρ: π_1(Y) →(2)with non-abelian image.Suppose that each Y_i is a lens space of order n_i ≥ 3.Then we can define representationsρ_i: π_1(E_K_i) ↠π_1(E_K_i)/μ_i≅π_1(Y_i) ≅/n_i↪(2),satisfying ρ_i(μ_i) = 1, and we have ρ_i(λ_i) = 1 since the image of ρ_i is abelian.Each ρ_i restricts to the trivial representation on π_1(T), so they glue together to give a representation ρ: π_1(Y) →(2), and we can guarantee that ρ has non-abelian image by choosing to send 
generators of π_1(Y_1) ≅/n_1 and π_1(Y_2) ≅/n_2 to the unit quaternions exp(2π/n_1j) and exp(2π/n_2k), respectively.From now on we assume without loss of generality that Y_1 is either S^3 or ^3; the proof now follows essentially the same argument as Theorem <ref>.Neither K_1 nor K_2 is unknotted, so if w_1 ∈ H^2((Y_1)_0(K_1)) and w_2 ∈ H^2((Y_2)_0(K_2)) are Poincaré dual to meridians of K_1 and K_2 then we know thatI^w_1_*((Y_1)_0(K_1))≠ 0, I^w_2_*((Y_2)_0(K_2))≠ 0by Proposition <ref>.Since both Y_1 and Y_2 are (2)-abelian, the character varieties X(E_K_1) and X(E_K_2) have well-defined images in the cut-open pillowcase = [0,π] × (/2π)of <ref> (see Lemma <ref>), and Theorem <ref> provides us with essential closed curvesC_1⊂ j(X(E_K_1)), C_2⊂ j(X(E_K_2))in the cut-open pillowcase images of each.Just as in the proof of Theorem <ref>, the curves C_1 and C_2 now give rise to continuous pathsγ^ℓ_t = (α^ℓ_t, β^ℓ_t): [0,1] → [0,π] × [0,2π], ℓ=1,2such that for each ℓ we have * β^ℓ_0 = 0, β^ℓ_1 = 2π, and 0 < β^ℓ_t < 2π for 0 < t < 1;* 0 < α^ℓ_t < π for 0<t<1 by Lemma <ref>, since β^ℓ_t ∉2π;* and each γ^ℓ_t is the pillowcase image of some ρ^ℓ_t: π_1(E_K_ℓ) →(2) satisfyingρ^ℓ_t(μ_ℓ)= [e^iα^ℓ_t 0; 0 e^-iα^ℓ_t ],ρ^ℓ_t(λ_ℓ)= [e^iβ^ℓ_t 0; 0 e^-iβ^ℓ_t ].In particular ρ^ℓ_t is irreducible for 0<t<1, since ρ^ℓ_t(λ_ℓ) ≠ 1.Lemma <ref> tells us slightly more about α^1_t, namely that0 < α^1_t < πfor allt ∈ [0,1]since Y_1 is (2)-abelian and H_1(Y_1) is either trivial or /2.The paths{ (α^1_t, β^1_t) }_t∈[0,1]and{ (β^2_t,α^2_t) }_t∈[0,1]must intersect exactly as before, say at some point (α̃,β̃), where 0<α̃<π since α̃= α^1_t for some t.This point of intersection gives rise to a representation ρ: π_1(Y) →(2), and the restriction ρ|_E_K_2 must have pillowcase coordinates (β̃,α̃).Then 0 < α̃< π implies that ρ|_E_K_2(λ_2) ≠ 1, so ρ|_E_K_2 cannot have abelian image and thus neither can ρ.§.§ Manifolds with p-torsion homology We are now ready to prove Theorem <ref>, which will follow quickly from the next proposition.Let p ≥ 3 be an odd prime such that every closed, (2,)-reducible 3-manifold with first homology /p is a lens space.If Y is a closed, (2,)-reducible 3-manifold with H_1(Y;) ≅ (/p)^r for some integer r ≥ 2, then there is a closed Y' with first homology (/p)^r' for some r' ≥ 2 and a degree-1 mapY → Y'that is not a homotopy equivalence.Both Y and Y' are prime, toroidal, and (2,)-reducible.We know that Y is prime by Lemma <ref>, and that it contains an incompressible torus by Lemma <ref>: indeed, if it were atoroidal then it would have to be a lens space, but H_1(Y) is not cyclic.By the same argument, once we have constructed Y' with the desired homology and degree-1 map f: Y→ Y', it will follow that Y' is (2,)-reducible, and then that Y' is prime and toroidal.We thus focus on constructing Y' and the map f, which will be a pinch map of the sort provided by Proposition <ref>; if it collapses a submanifold bounded by an incompressible torus T to a solid torus, then it will not be a homotopy equivalence, since the kernel of the induced map f_*: π_1(Y) →π_1(Y') contains non-trivial elements of the subgroup π_1(T) ⊂π_1(Y).Since Y is prime and has an incompressible torus T, we can writeY ≅ M_1 ∪_T M_2where each M_i is irreducible, with incompressible torus boundary.We let λ_i ⊂∂ M_i denote the rational longitude of each M_i.Suppose first that each of the λ_i is nullhomologous in its respective M_i.Then Proposition <ref> says that we can write each M_i as the exterior of some nullhomologous knot K_i ⊂ Y_i, with 
H_1(Y_i;) ≅ (/p_i)^n_i, such that one of two cases occurs: Case 1: n_1+n_2=r, and we form Y by gluing μ_1 to λ_2 and λ_2 to μ_1.In this case we use Proposition <ref> to pinch either M_2 or M_1 to a solid torus, giving us degree-1 mapsY→ M_1(λ_2) ≅ M_1(μ_1) = Y_1, Y→ M_2(λ_1) ≅ M_2(μ_2) = Y_2.If n_1=n_2=1 then the manifolds Y_1 and Y_2 are (2,)-reducible with first homology /p, so they are both lens spaces by our assumption on p, but then Proposition <ref> says that Y is not even (2)-abelian and we have a contradiction.Now since n_1+n_2 = r ≥ 2 but (n_1,n_2) ≠ (1,1), it follows that n_i ≥ 2 for some i, so we let Y' be the corresponding Y_i and we are done. Case 2: n_1+n_2=r-1.In this case Proposition <ref> says that for some a,b,c with ac-bp=-1 we have a pair of degree-1 mapsY → (Y_1)_-p/a(K_1) and Y → (Y_2)_p/c(K_2),neither of which is a homotopy equivalence.The targets of these maps have first homology (/p)^n_1+1 and (/p)^n_2+1, respectively, and(n_1+1) + (n_2+1) = r+1 ≥ 3,so we must have n_i+1 ≥ 2 for some i.We take Y' to be the corresponding Dehn surgery on K_i ⊂ Y_i.We have now proved the proposition except in the case where one of the rational longitudes is homologically essential, so we suppose without loss of generality that λ_1 is nonzero in H_1(M_1).Now we apply Proposition <ref> to see that we can write each M_i as the exterior of a knot K_i ⊂ Y_i, with meridian μ_i and rational longitude λ_i, such that * H_1(Y_1) ≅ (/p)^k for some k ≥ 2;* λ_2 is nullhomologous in M_2;* and the gluing map ∂ M_1 ≅∂ M_2 identifies μ_1 with λ_2.Proposition <ref> then gives us a degree-1 mapY → M_1(λ_2) ≅ M_1(μ_1) ≅ Y_1,so we take Y' = Y_1 and the proof is complete.Let Y_1 = Y be (2,)-reducible with first homology (/p)^r_1 for some r_1 ≥ 2.By the hypothesis on p, Proposition <ref> provides us with an (2,)-reducible manifold Y_2, whose first homology is (/p)^r_2 for some r_2 ≥ 2, and a degree-1 mapf_1: Y_1 → Y_2that is not a homotopy equivalence.We repeat with Y_2 in place of Y_1 and so on, constructing an infinite sequenceY_1Y_2Y_3 ⋯of degree-1 maps between prime, toroidal manifolds, in which none of the maps f_i is a homotopy equivalence.This contradicts Theorem <ref>, so our initial manifold Y cannot exist after all.§.§ A strengthening of Theorem <ref> We can deduce the conclusion of Theorem <ref> from a seemingly much weaker hypothesis on the prime p, by a similar appeal to Theorem <ref>.Fix an odd prime p ≥ 3. 
For any choice of
* integer homology 3-spheres Y_1 and Y_2,
* knots K_1 ⊂ Y_1 and K_2 ⊂ Y_2 with irreducible, boundary-incompressible complements,
* and integers a,b,c satisfying ac-bp=-1 and 0 ≤ b < c < p/2
such that both (Y_1)_-p/a(K_1) and (Y_2)_p/c(K_2) are lens spaces, we can form a closed 3-manifold Y = E_K_1∪_∂ E_K_2 by gluing ∂ E_K_1 to ∂ E_K_2 so that μ_1 = aμ_2 + bλ_2 and λ_1 = pμ_2 + cλ_2 in the homology of the torus ∂ E_K_1∼∂ E_K_2. Suppose we have chosen p so that every such Y admits an irreducible representation π_1(Y) →(2,). With the above assumption on p, if Y is a closed 3-manifold such that H_1(Y;) ≅ (/p)^r for some r ≥ 1, then either Y is a lens space of order p or there is an irreducible homomorphism π_1(Y) →(2,). There are exactly (p-1)/2 tuples (a,b,c) to consider in the hypothesis of Theorem <ref>: once we have fixed c between 1 and (p-1)/2 inclusive, the condition ac-bp=-1 forces b to be the unique residue of p^{-1} modulo c lying in [0,c), and then a = (bp-1)/c. For any p this includes (a,b,c) = (-1,0,1), and then for example when p=5 we must also consider (a,b,c)=(2,1,2). By Theorem <ref>, it suffices to prove the theorem when H_1(Y;) ≅/p, so we will assume from now on that Y is (2,)-reducible with H_1(Y;) ≅/p, but that Y is not a lens space. Lemmas <ref> and <ref> respectively tell us that Y is prime, and that it contains an incompressible torus T since it is not a lens space, so we can write Y = M_1 ∪_T M_2 where each M_i is irreducible with incompressible boundary T. Moreover, Proposition <ref> guarantees that the rational longitude of each M_i is nullhomologous in M_i, because otherwise we would have H_1(Y;) ≅ (/p)^r for some r ≥ 2. Following Proposition <ref> and Remark <ref>, we can therefore write each M_i as the exterior of a nullhomologous knot K_i in some 3-manifold Y_i such that either
* H_1(Y_1) ⊕ H_1(Y_2) ≅/p, and Y is formed by gluing μ_1 to λ_2 and λ_1 to μ_2;
* H_1(Y_1) ⊕ H_1(Y_2) ≅ 0, and Y is formed by some gluing such that μ_1 = aμ_2 + bλ_2 and λ_1 = pμ_2 + cλ_2 in H_1(T), where ac-bp=-1 and 0 ≤ b < c < p/2.
For the last case, Remark <ref> may require us to reverse the orientation of Y and of the M_j in order to achieve c ≤ p/2 rather than c < p, but this does not affect whether or not Y is (2,)-reducible. We also note that the inequality c ≤ p/2 is strict here because p is odd. In the first case, we have degree-1 pinching maps from Y to each of Y_1 and Y_2, so Y_1 and Y_2 must be (2,)-reducible as well. One of them is a homology sphere, so it must be S^3 by Theorem <ref>, and the other has first homology /p. If the latter is a lens space then Proposition <ref> gives us a non-abelian representation π_1(Y) →(2) and hence a contradiction, so it must be toroidal. In the second case, both Y_1 and Y_2 are homology spheres, and we have degree-1 pinching maps Y → (Y_1)_-p/a(K_1) and Y → (Y_2)_p/c(K_2), so both (Y_1)_-p/a(K_1) and (Y_2)_p/c(K_2) are (2,)-reducible, with first homology /p. If neither of these is toroidal then they must both be lens spaces, hence by hypothesis there is an irreducible representation π_1(Y) →(2,). But we assumed Y is (2,)-reducible, so at least one of (Y_1)_-p/a(K_1) and (Y_2)_p/c(K_2) must be toroidal after all. In both cases, we have found (up to a possible change of orientation) a degree-1 map Y → Y', where Y' is (2,)-reducible and toroidal with H_1(Y';) ≅/p, by pinching some submanifold with incompressible torus boundary onto a solid torus. This is not a homotopy equivalence, so we can repeat this process indefinitely with Y' in place of Y and so on, to get an infinite sequence Y → Y' → Y'' → ⋯ of degree-1 maps which are not homotopy
equivalences.This contradicts Theorem <ref>, so the claimed Y cannot exist after all.§ MANIFOLDS WITH 3-TORSION HOMOLOGYOur goal in this section is to prove Theorem <ref>, which we restate here.Let Y be a closed 3-manifold such that H_1(Y;) is 3-torsion.If Y is not homeomorphic to ± L(3,1), then there is an irreducible representation π_1(Y) →(2,). We prove Theorem <ref> by appealing to Theorem <ref>.We note in the hypothesis of Theorem <ref> that there is a unique triple of integers (a,b,c) with ac-3b=-1 and 0 ≤ b < c < 3/2, namely (a,b,c)=(-1,0,1), so Theorem <ref> is now an immediate consequence of the following analogue of Theorem <ref>.Let K_1 ⊂ Y_1 and K_2 ⊂ Y_2 be knots such that for each j=1,2: * the manifold Y_j is an integer homology sphere,* the exterior E_K_j is irreducible with incompressible boundary, and* the Dehn surgery (Y_j)_3(K_j) is a lens space of order 3.We glue the exteriors along their boundaries to form a toroidal manifoldY = E_K_1∪_∂ E_K_2by identifying μ_1 ∼μ_2^-1 and λ_1 ∼μ_2^3 λ_2.Then there is a representationρ: π_1(Y) →(2)with non-abelian image. The proof of Theorem <ref> will be similar in spirit to the content of <ref> but simpler, largely because in Lemma <ref> below, we will be able to put stronger restrictions on the pillowcase images of the various character varieties X(E_K_j) than we could in the corresponding Proposition <ref>.To set the stage, given an odd prime p, we define an involution of the pillowcase byσ_p(α,β) = (-α,pα+β) = (α,2π-(pα+β))by analogy with the map σ of Subsection <ref>.If we letP= (0,π), Q= (π,π)as before, then σ_p(P) = P but σ_p(Q) = (π,0) ≠ Q.(Similarly, the map σ_p does not commute with the involution τ of Lemma <ref>, because σ_p(τ(Q)) = P but τ(σ_p(Q)) = (0,0).)Suppose under the hypotheses of Theorem <ref> that the pillowcase imagesi^*(X(E_K_1)) andσ_3(i^*(X(E_K_2)))intersect at some point other than (0,0) or (2π/3,0).Then there is a representationρ: π_1(Y) →(2)with non-abelian image.Let (α,β) ∈ X(T^2) be the given point of intersection, and write σ_3(α,β) = (γ,δ); since σ_3 is an involution, this means that(α,β) = σ_3(γ,δ) = (γ,2π-(3γ+δ)).Now since (α,β) ∈ i^*(X(E_K_1)) and (γ,δ) ∈ i^*(X(E_K_2)), there are representationsρ_j: π_1(E_K_j) →(2),j=1,2such thatρ_1(μ_1)= [e^iα 0; 0 e^-iα ],ρ_1(λ_1)= [e^iβ 0; 0 e^-iβ ]and (using the fact that (γ,δ) ∼ (-γ,-δ) in X(T^2))ρ_2(μ_2)= [ e^-iγ 0; 0e^iγ ],ρ_2(λ_2)= [ e^-iδ 0; 0e^iδ ].This means that ρ_2(μ_2^-1) = [e^iγ 0; 0 e^-iγ ] = [e^iα 0; 0 e^-iα ] = ρ_1(μ_1),ρ_2(μ_2^3λ_2) = [ e^-i(3γ+δ)0;0e^i(3γ+δ) ] = [e^iβ 0; 0 e^-iβ ] = ρ_1(λ_1)and so ρ_1 and ρ_2 glue together to give us the desired representation ρ.At this point we need only show that ρ has non-abelian image.But if its image is abelian then each of ρ_1 and ρ_2 must have abelian image as well, hence β≡δ≡ 0 2π.If β∈ 2π then we know that δ = 2π - (3α+β) is a multiple of 2π if and only if 3α is, so we must have(α,β) = (0,0)or(2π3,0)in the pillowcase.Since we have assumed that our given intersection (α,β) is not either one of these points, we conclude that ρ has non-abelian image after all. 
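The counting remark above and the uniqueness claim for p = 3 are easy to confirm by brute force: for each c with 1 ≤ c < p/2 the congruence b ≡ p^{-1} (mod c) pins down b, and then a = (bp-1)/c. The following sketch is an illustrative Python enumeration, not part of the argument; the function name admissible_triples is our own.

```python
def admissible_triples(p):
    """All integer triples (a, b, c) with a*c - b*p == -1 and 0 <= b < c < p/2."""
    triples = []
    for c in range(1, (p + 1) // 2):        # 1 <= c < p/2, with p odd
        for b in range(c):                  # 0 <= b < c
            if (b * p - 1) % c == 0:        # then a = (b*p - 1)/c is an integer
                triples.append(((b * p - 1) // c, b, c))
    return triples

for p in (3, 5, 7):
    ts = admissible_triples(p)
    assert len(ts) == (p - 1) // 2          # exactly (p-1)/2 tuples, as claimed
    print(p, ts)

# Expected output:
#   3 [(-1, 0, 1)]
#   5 [(-1, 0, 1), (2, 1, 2)]
#   7 [(-1, 0, 1), (3, 1, 2), (2, 1, 3)]
```

In particular only (a,b,c) = (-1,0,1) occurs when p = 3, which is the gluing used in the theorem above.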
The remainder of this section will be devoted to finding a point of intersection to which we can apply Lemma <ref>.Most of the argument applies equally well to other odd primes p, so we will not specialize to p=3 until the end.To summarize the upcoming argument, each character variety will provide us with a closed curve γ_i in the pillowcase.Each γ_i is homologically essential in the complement of two points P=(0,π) and Q=(π,π), and is further constrained by the fact that the corresponding knots have lens space surgeries.If we choose the γ_i carefully, this will imply that γ_1 and σ_3(γ_2) must intersect somewhere.Now if they meet at the point (2π/3,0), then Lemma <ref> seems to say that we are stuck; but we will show that they must be transverse there, and since any pair of closed curves in the 2-sphere X(T^2) ≅ S^2 have intersection number zero, we can then deduce the existence of a second, more useful point of intersection.This last part does not readily generalize to other primes p, unfortunately, because we have to show that the corresponding curves intersect away from one of the p-1/2 points of the form (2kπ/p,0) where 1 ≤ k ≤p-1/2, and when p > 3 there are at least two such points.We begin with the following analogue of Proposition <ref>, which is illustrated in Figure <ref> when p=3.Let K be a knot in an integral homology sphere Y, and suppose for some prime p ≥ 3 that Y_p(K) is a lens space of order p.If ρ: π_1(E_K) →(2) has pillowcase coordinates i^*([ρ]) = (α,β) where pα+β∈π, then ρ is reducible and β≡ 02π.In this case there is an open neighborhood of (α,β) ∈ X(T^2) that does not contain the images of any irreducible representations.We suppose first that i^*([ρ]) = (α,β) where pα+β is an integral multiple of 2π.Then ρ(μ^pλ)=1, so ρ factors throughπ_1(E_K)/μ^pλ≅π_1(Y_p(K)) ≅/p,and therefore its image is cyclic.This means that ρ has abelian image, and hence ρ(λ)=1 (equivalently, β=0) as usual.Now suppose instead that pα+β is an odd multiple of π, so ρ(μ^pλ) = -1.Then the central characterχ: π_1(E_K) ↠ H_1(E_K) ≅→{±1}sending μ to -1 satisfies χ(μ^pλ) = (-1)^p = -1 since p is odd, so χ·ρ is a representation sending μ^pλ to 1.We conclude as above that χ·ρ has cyclic image, hence so does ρ itself, and then ρ(λ)=1 and β=0 once again.Finally, suppose that we have a sequence of irreducible representations ρ_n ∈ R^(E_K) whose pillowcase images i^*([ρ_n]) converge to a point (α,β) with pα+β∈π.Since R(E_K) is compact, we can pass to a convergent subsequence, whose limit ρ satisfies i^*([ρ]) = (α,β); since pα+β∈π, we deduce from above that ρ is abelian, hence β=0 and α = kπ/p for some integer k with 0 ≤ k ≤ p.In addition, Lemma <ref> says that α cannot be 0 or π since Y_p(K) is (2)-abelian and ρ is a limit of irreducible representations, and therefore 0 < k < p.Since ρ is a reducible limit of irreducible representations, Heusener, Porti, and Suárez Peiró <cit.> also proved that the Alexander polynomial of K satisfiesΔ_K(e^2kπ i/p) = Δ_K(e^2α i) = 0.(They attribute this to Klassen <cit.>, who proved it for knots in S^3.)Thus Δ_K(t) vanishes at a primitive pth root of unity, so a result of Boyer and Nicas <cit.> says that the fundamental groupπ_1(Y_p(K)) ≅/pis not cyclically finite.By definition this means that some normal subgroup of /p has infinite abelianization, which is absurd, so ρ cannot be a limit of irreducible representations after all. 
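To make the vanishing criterion Δ_K(e^{2kπ i/p}) = 0 invoked in the last step concrete, here is a small numerical illustration in Python; the trefoil, whose Alexander polynomial is Δ(t) = t - 1 + t^{-1}, is chosen only as a familiar example and plays no role in the argument above.

import numpy as np

def alexander_trefoil(t):
    """Alexander polynomial of the trefoil, Delta(t) = t - 1 + 1/t."""
    return t - 1.0 + 1.0 / t

p = 5  # any odd prime
values = [alexander_trefoil(np.exp(2j * np.pi * k / p)) for k in range(1, p)]
assert all(abs(v) > 1e-9 for v in values)   # no vanishing at the pth roots of unity

# Delta(t) = (t^2 - t + 1)/t does vanish at the primitive 6th roots of unity,
# but e^{2*pi*i*k/p} is never such a root when p is an odd prime, so the
# obstruction Delta_K(e^{2*k*pi*i/p}) = 0 from the proof cannot occur here.
assert abs(alexander_trefoil(np.exp(1j * np.pi / 3))) < 1e-12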
If K is a knot in a homology sphere Y, and Y_p(K) is a lens space of order p for some prime p ≥ 3, then Lemma <ref> implies that the pillowcase imagei^*(X(E_K)) ⊂ X(T^2)avoids the points P=(0,π) and Q=(π,π).In the following lemmas, we will say that a closed, embedded curve γ⊂ i^*(X(E_K)) is p-avoiding if it is homologically essential in X(T^2) ∖{P,Q}.Such curves can only intersect the lines pα+β≡ 0 π at points of the form (π k/p,0), where k is an integer and 0 ≤ k ≤ p, and we are about to show that in fact k cannot be 0 or p.Let p≥ 3 be prime, and let K be a knot in a homology sphere Y whose exterior is irreducible and has incompressible boundary.If Y_p(K) is a lens space of order p, then the pillowcase image i^*(X(E_K)) ⊂ X(T^2) contains a p-avoiding curve.Any such curve necessarily avoids (0,0) and (π,0) but intersects both of the lines L_0 = {β≡ 0 2π} and L_π = {β≡π2π}.Proposition <ref> says that Y_0(K) is irreducible, so if w ∈ H^2(Y_0(K)) is Poincaré dual to a meridian of K, then Theorem <ref> says thatI^w_*(Y_0(K)) ≠ 0.The image i^*(X(E_K)) does not contain P or Q by Lemma <ref>, since these points both satisfy pα+β∈π but β=π∉2π.Now Proposition <ref> gives us the desired p-avoiding curve γ.Since γ is homologically essential in the complement of {P,Q}, it must intersect any path from P to Q, and in particular it meets the line L_π somewhere.To see that γ avoids (0,0) and (π,0), we argue exactly as in the proof of Proposition <ref>: by Lemma <ref> these are not limit points of the image i^*(X^(E(K))) of irreducible characters, so γ can only approach (nπ,0) (where n is 0 or 1) along the arc β≡ 0 2π.In particular, if we parametrize γ as a mapγ: /↪ X(T^2)with γ(0) = (nπ,0), then γ must embed some open interval (-ϵ,ϵ) in the arc [0,π] ×{0} as a neighborhood of the endpoint (nπ,0), and this is impossible.Now suppose that γ avoids the line L_0.In this case, Lemma <ref> says that γ is disjoint from both L_0 and each of the lines {pα+β≡ 0p}, since it can only meet the latter along L_0.But then γ lies in the complement of all of these lines, which is a disjoint union of p open disks in X(T^2) ∖{P,Q} as illustrated in Figure <ref>.Since γ is connected it must lie in one of these disks, which means that it is nullhomotopic in the complement of P and Q.This contradicts the fact that γ is homologically essential, so γ must meet the line L_0 after all. 
Let γ⊂ X(T^2) be a p-avoiding curve for some prime p ≥ 3.If γ contains a point of the form A_k = (2kπ/p, 0), where k is an integer satisfying 0 < k < p/2, then there is an ϵ-neighborhood U of this point such thatγ∩ U = (2kπp - ϵ, 2kπp + ϵ) ×{0}.In particular, if γ' ⊂ X(T^2) is another p-avoiding curve passing through A_k, then γ and σ_p(γ') intersect transversely at A_k.Suppose that γ belongs to the pillowcase image of the character variety of K ⊂ Y.Then Lemma <ref> says that A_k has some ϵ-neighborhood U in the pillowcase where every point of the corresponding image i^*(X(E_K)) is the image (α,0) of a reducible representation.Since γ is a subset of i^*(X(E_K)), it follows that the intersection γ∩ U must be the open arc Γ = (2kπ/p - ϵ, 2kπ/p + ϵ) ×{0}, as claimed.Now if γ' also passes through A_k, then it intersects some ϵ'-neighborhood U' of A_k in the open arc Γ' = (2kπp - ϵ', 2kπp + ϵ') ×{0},where ϵ' may be different from ϵ because γ' may come from a different knot K' ⊂ Y'.In any case, we have σ_p(A_k) = A_k, so the image of Γ' is an arcσ_p(Γ') = { (α, 2π - pα) |2kπp-ϵ' < α < 2kπp+ϵ' }of slope -p through A_k.See the left side of Figure <ref>.The arcs Γ and σ_p(Γ') intersect transversely at A_k, hence so do the simple closed curves γ and σ_p(γ') to which they belong. Let γ, γ' ⊂ X(T^2) be p-avoiding curves for some prime p ≥ 3.If the intersectionγ∩σ_p(γ')is empty, then there are integers k and k' with 0 < k,k' < p such that γ contains the point (kπ/p,0) and γ' contains the point (k'π/p,0).It suffices to prove the desired conclusion for γ', since we can use the fact that σ_p is an involution to writeγ' ∩σ_p(γ) = σ_p( γ∩σ_p(γ') ) = ∅and thus freely exchange the roles of γ and γ'.According to Lemma <ref>, the simple closed curve γ meets both of the linesL_π = {β≡ 0 π}and L_0 = {β≡π2π},so it contains an embedded path Γ from L_π to L_0.Letting P=(0,π) and Q=(π,π) as usual, we form a path Γ̃ from σ_p(P) = P to σ_p(Q) = (π,0) by first following L_π from P until it meets Γ, then following Γ until it meets L_0, and then following L_0 from there to σ_p(Q). See the right side of Figure <ref>.Since γ' is homologically essential in X(T^2)∖{P,Q}, the image σ_p(γ') is also homologically essential in X(T^2) ∖{σ_p(P),σ_p(Q)}, and so it must intersect any path from σ_p(P) = P to σ_p(Q).In particular, the intersectionΓ̃∩σ_p(γ')is nonempty.But we have assumed that σ_p(γ') is disjoint from γ and hence from the path Γ⊂Γ̃, so σ_p(γ') must intersect Γ̃ along either L_π or L_0.This means thatσ_p(γ') ∩{β∈π}≠∅,and we apply σ_p to both sets to deduce that the intersectionγ' ∩σ_p( {β∈π}) = γ' ∩{ pα+β∈π}is also nonempty.Lemma <ref> says that any point in this intersection must have the form (k'π/p,0), where 0 < k' < p by Lemma <ref>, so γ' contains such a point after all. We are now ready to specialize to p=3 and thus prove Theorem <ref>. 
In order to find the desired representation π_1(Y) →(2), Lemma <ref> says that it suffices to prove thati^*(X(E_K_1)) andσ_3( i^*(X(E_K_2)) )intersect at some point of X(T^2) other than A_0 = (0,0) or A_1 = (2π/3,0).We use Lemma <ref> to find a pair of 3-avoiding curvesγ_j ⊂ i^*(X(E_K_j)) ⊂ X(T^2) ∖{P,Q},j=1,2that both avoid the point A_0.If they both pass through A_1, then Lemma <ref> says that the simple closed curves γ_1 and σ_3(γ_2) meet transversely at A_1.We view these curves as lying in the pillowcase, which is topologically S^2, and then after orienting them arbitrarily their intersection number must be zero.Their transverse intersection at A_1 contributes ±1 to this intersection number γ_1 ·σ_3(γ_2) = 0, so there must be at least one other point of intersection.This other point is neither A_0 nor A_1, so in this case we are done.In the remaining case, the curves γ_1 and σ_3(γ_2) do not intersect at A_0 or A_1.If they have another point of intersection then we are done, so we can assume that γ_1 and σ_3(γ_2) are disjoint.But then Lemma <ref> says that there must be integers k_1,k_2 ∈{1,2} such that(k_1π/3,0) ∈γ_1 and(k_2π/3,0) ∈γ_2.At least one of the k_j is equal to 1, since otherwise γ_1 and σ_3(γ_2) both contain A_1 = σ_3(A_1).For each such j, we use the involutionτ(α,β) = (π-α,2π-β)of the pillowcase, which fixes each of the pillowcase images i^*(X(E_K_j)) ⊂ X(T^2) setwise by Lemma <ref>, to replace γ_j with the 3-avoiding curve τ(γ_j) that passes through τ(π/3,0) = (2π/3,0).But now we have found 3-avoiding curves in i^*(X(E_K_1)) and i^*(X(E_K_2)) that both pass through A_1, so by the first case above they also must intersect at some point other than A_0 and A_1, completing the proof. This completes the proof of Theorem <ref>. § THE SURGERY EXACT TRIANGLE IN IRREDUCIBLE INSTANTON HOMOLOGYIn this appendix, we verify some details needed for the proof of Theorem <ref>, which generalizes the surgery exact triangle in instanton homology <cit.> to the irreducible instanton homology of surgery on a nullhomologous knot in some Y such that H_1(Y;) is 2-torsion.We repeat Scaduto's proof of the surgery exact triangle <cit.>, in which the maps in the exact triangle are induced by 2-handle cobordismsYY_0(K)Y_1(K)Y →⋯.In each case, we write (W,c) to indicate that W is a cobordism, and c ⊂ W is a properly embedded surface such that [c] ∈ H_2(W,∂ W) is Poincaré dual to the first Chern class of some line bundle, which specifies a (3)-bundle on W as usual.The cobordism map associated to (W,c) is then defined by counting solutions to the perturbed ASD equation on this bundle.More generally, the proof of exactness also involves counting instantons on various compositions of these cobordisms, taken over various 1- and 2-dimensional families of metrics.Since these define chain maps between various irreducible instanton homology groups, and chain homotopies between these, the instantons we count always have irreducible flat limits at either end of their cobordisms.Scaduto's proof of exactness works without modification as long as the relevant moduli spaces can be compactified without reducible connections appearing in the middle of a broken flowline, since these would not be counted.In what follows we will omit K from our notation, writing Y_n := Y_n(K).We will also concatenate subscripts to denote the composition of two or more cobordisms, so that for example W_01 = W_0 ∪_Y_0 W_1 and W_120 = W_1 ∪_Y_1 W_2 ∪_Y W_0.We first describe the basic cobordisms in (<ref>).We build W_0 by attaching a 0-framed 
2-handle to Y × [0,1] along K×{1}.Then W_1 is the result of attaching a -1-framed 2-handle along a meridian of K, and W_2 is likewise built out of a -1-framed 2-handle attached along a meridian of the previous attaching curve.We have b_1(W_i) = b^+_2(W_i) = 0 for each i=0,1,2, and the W_i have signatures σ(W_0) = σ(W_1) = 0, σ(W_2) = -1. The claim that b_1(W_i) = 0 follows from noting that Y is a rational homology sphere and the knot K ⊂ Y is nullhomologous.Now the signatures of W_0, W_01, and W_012 are the same as the signatures of the linking matrices[ 0 ], [01;1 -1 ], [010;1 -11;01 -1 ]for the Kirby diagrams of the respective cobordisms, and these signatures are 0, 0, and -1 respectively, so that σ(W_0) = σ(W_1) = 0 and σ(W_2) = -1 by additivity of signatures.Since each b_2(W_i) is 1, the claim that b_2^+(W_i) = 0 follows immediately. Each W_i is labeled with a properly embedded surface c_i, following <cit.>: if the incoming end of W_i is decorated with λ, then c_i is (up to orientation) the union of a cylinder λ× [0,1] with a meridional disk of the attaching curve for the 2-handle, pushed slightly into the interior of W_i so that it is properly embedded with boundary on the outgoing end.We note that if for some cobordism (W,c) the ends of c are nullhomologous in ∂ W, as they are when ∂ W consists of copies of Y or Y_1, then [c] ∈ H_2(W,∂ W) lifts to a class in H_2(W), so that c^2 ∈; in this case the class of c2 determines uniquely the value of c^2 4.We have c_2^2 ≡ 0 4 and c_01^2 ≡ c_012^2 ≡ c_201^2 ≡ -1 4.We realize c_0 ⊂ W_0 by taking a disk in Y×{1} with boundary μ_K ×{1}, where μ_K is a meridian μ_K of the attaching curve K×{1}, and pushing its interior into the interior of Y× [0,1].The homology H_2(W_0) is generated by the union F_0 of a Seifert surface for K and a core of the 2-handle; we have F_0^2 = 0, and c_0 · F_0 = 1.Then W_1 is built by attaching a -1-framed 2-handle to Y_0 × [0,1] along μ_K, and c_1 is the union of μ_K × [0,1] and a disk bounded by a meridian of μ_K with some orientation.We observe that W_01 is diffeomorphic to a blow-up of the trace of 1-surgery on K, and then H_2(W_01) is generated by a capped-off Seifert surface F_1 and the exceptional sphere E, with F_1^2 = 1 and E^2 = -1.We have c_01· E ≡ 12 by <cit.> (see also <cit.>), andc_01· F_0 ≡ c_0 · F_0 ≡ 1 2since F_0 ⊂ W_0, whencec_01· F_1 ≡ c_01· (F_0 - E) ≡ 0 2.Since c_01 has nullhomologous ends in Y and Y_1, it lifts to a class in H_2(W_01).Then the above intersection numbers tell us that c_01≡ E 2, and so c_01^2 ≡ -1 4.Meanwhile, we have c_2 ≡ 0 2 as in <cit.>, so that c_2^2 ≡ 0 4.And we note that the surfaces c_012 and c_201 are each homologous to a disjoint union of closed surfaces in the classes of c_01⊂ W_01 and c_2 ⊂ W_2 in some order, so we conclude thatc_012^2 ≡ c_201^2 ≡ c_01^2 + c_2^2 ≡ -1 4.Next, we note that a cobordism either to or from Y_0, equipped with a bundle whose restriction to Y_0 is the admissible w, does not admit any reducible ASD connections at all.This is because any such connection must limit at the Y_0 end to a reducible flat connection over Y_0, and such limiting connections do not exist by the admissibility of w.Thus we can restrict our attention to the cobordisms(W_2,c_2), (W_01,c_01), (W_012,c_012), (W_201,c_201),which are the only other cobordisms considered in <cit.>.We note from Lemma <ref> that these all have b_1=0, and that by additivity of signature their signatures are -1, 0, -1, -1 respectively while their second Betti numbers are 1, 2, 3, 3, so 
thatb^+_2(W_2)=0,b^+_2(W_01) = b^+_2(W_012) = b^+_2(W_201) = 1.Moreover, since K is nullhomologous it follows that each of the classes [c_s] ∈ H_2(W_s,∂ W_s) appearing in (<ref>) has nullhomologous boundary in ∂ W_s, hence lifts to a class in the corresponding H_2(W_s).From now on, we focus our attention on reducible connections on each of the cobordisms (<ref>).We will call a reducible instanton central if its holonomy is central, and abelian if it is not.An abelian instanton on any of the cobordisms (W,c) of (<ref>) limits to a central connection at either end, because H_1(Y) and H_1(Y_1) are both 2-torsion and thus all reducible flat connections on Y and Y_1 are central.According to <cit.>, the components of the space of abelian instantons on (W,c) are parametrized by pairs{{x,y}⊂ H^2(W;) | x+y=PD(c), x≠ y };given a U(2)-bundle E→ Y such that λ = (E) has first Chern class PD(c), we send a reducible connection that induces a splitting into complex line bundles E ≅η⊕ (λ⊗η^-1) to the set {x,y} = {c_1(η),c_1(λ⊗η^-1)}.Given a perturbation π_W on W restricting to perturbations π,π' on the incoming and outgoing ends, and given an abelian instanton Λ in the component labeled by {x,y}, the component D^ν_Λ,π_W of the linearized ASD operator at Λ that is normal to the reducible locus has index N(Λ;π,π') ∈ 2 equal toN(Λ;π,π') = -2(x-y)^2 - 2b^+_2(W) - 2,by <cit.>, which in turn comes from <cit.> (and is greatly simplified here because b_1(W)=0 and because the limiting flat connections at either end of W must be central).We omit perturbations from the notation from here on, but note that by taking them sufficiently small on the interior of W, we will always have -2(x-y)^2 ≥ 0 by <cit.>.We can say more about the -2(x-y)^2 term.Working dually in homology, we note that H_2(W) is free abelian, since W is built by attaching 2-handles to either Y or Y_1 (both of which have H_2=0) along nullhomologous knots.In fact, we have a splittingH_2(W,∂ W) ≅ H_2(W) ⊕(i_*: H_1(∂ W) → H_1(W)),whose second term is 2-torsion since H_1(∂ W) is.The class c lies in the H_2(W) summand, so if we have x+y=c then we can write the summands with respect to this splitting as x=(α, τ) and y=(c-α,τ) for some α∈ H_2(W) and some 2-torsion element τ∈(i_*).But then x-y = 2α-c, and both α and c have integral square since they lift to H_2(W).We can thus compute that-2(x-y)^2 ≡ -2c^2 8,and the right side of this congruence only depends on the mod 2 class of c.We will use this to bound N(Λ) from below.Fix s ∈{01,012,201}.Then there are no central instantons on (W_s,c_s), and any abelian instanton Λ on (W_s,c_s) has normal index N(Λ) ≥ -2.To see that there are no central instantons, we note that they restrict to reducible flat connections over Y_0 ⊂ W_s, and the admissibility of w → Y_0 rules such connections out. 
If Λ is reducible, then by equations (<ref>) and (<ref>), we haveN(Λ) = -2(x-y)^2 - 4where the -2(x-y)^2 term is nonnegative.We also have -2(x-y)^2 ≡ -2c_s^2 ≡ 28, by (<ref>) and Lemma <ref>, so we conclude that it is at least 2 and thus N(Λ) ≥ -2.As described above, we simply repeat the proof of the exact triangle in <cit.>, and we only have to check that there are no compactness issues caused by broken flowlines with reducible (hence central) flat connections on a 3-manifold in the middle.If every ASD connection in a broken flowline is irreducible, but they are glued along flat connections at least one of which is central, then the index will be at least 3.We can thus restrict our attention to broken flowlines with at least one reducible instanton.We have seen that this can only happen on one of the cobordisms (W,c) listed in (<ref>), and the proof does not make use of higher-dimensional families on (W_2,c_2), so we really only need to consider(W,c) = (W_s,c_s),s=01,012,201.We remark that by Lemma <ref>, a reducible instanton Λ on one of these (W,c) must be abelian with normal index at least -2, and by <cit.> its index satisfies(Λ) = N(Λ) - (1 - b_1(W) + b^+_2(W)) = N(Λ) - 2,so then (Λ) ≥ -4.We now fix a pair of irreducible flat connections a and a' on the ends Y_a and Y_a' of W.Suppose that some sequence of instantons in the moduli space (W;a,a') converges to a broken flowlineabb'a',where B and B' are ASD connections on × Y_a and × Y_a', and Λ is abelian; then generically B and B' have index at least 1, while (Λ) ≥ -4.Then by standard gluing results, the broken flowline has index(B) + (Λ) + (B') + 5 ≥ 3.(To explain the constant on the left, we first glue B to the abelian Λ along the central b, with H^0_b(Y_a)=3, and then similarly glue the irreducible result of this to B' along the central b'.)In particular, this broken flowline does not belong to the compactification of an at most 2-dimensional moduli space.The same holds for broken flowlines with longer chains of connections over × Y_a or × Y_a'.We conclude that the moduli spaces of dimension at most 2 that appear in <cit.> can be compactified without adding any broken flowlines with reducible components, so the proof of the exact triangle goes through as before.myalpha
http://arxiv.org/abs/2310.17965v1
{ "authors": [ "Sudipta Ghosh", "Steven Sivek", "Raphael Zentner" ], "categories": [ "math.GT" ], "primary_category": "math.GT", "published": "20231027083031", "title": "Rational homology 3-spheres and SL(2,$\\mathbb{C}$) representations" }
DPSS-based Codebook Design for Near-Field XL-MIMO Channel Estimation Shicong Liu1,Xianghao Yu1, Zhen Gao2, and Derrick Wing Kwan Ng3 1Department of Electrical Engineering, City University of Hong Kong, Hong Kong 2Advanced Research Institute of Multidisciplinary Science, Beijing Institute of Technology, Beijing, China 3School of Electrical Engineering and Telecommunications, University of New South Wales, Sydney, Australia Email: [email protected], [email protected], [email protected], [email protected] January 14, 2024 ================================================================================================================================================================================================================================================================================================================================================================================================================================================================================= Future sixth-generation (6G) systems are expected to leverage extremely large-scale multiple-input multiple-output (XL-MIMO) technology, which significantly expands the range of the near-field region. While accurate channel estimation is essential for beamforming and data detection, the unique characteristics of near-field channels pose additional challenges to the effective acquisition of channel state information. In this paper, we propose a novel codebook design, which allows efficient near-field channel estimation with significantly reduced codebook size. Specifically, we consider the eigen-problem based on the near-field electromagnetic wave transmission model. Moreover, we derive the general form of the eigenvectors associated with the near-field channel matrix, revealing their noteworthy connection to the discrete prolate spheroidal sequence (DPSS). Based on the proposed near-field codebook design, we further introduce a two-step channel estimation scheme. Simulation results demonstrate that the proposed codebook design not only achieves superior sparsification performance of near-field channels with a lower leakage effect, but also significantly improves the accuracy in compressive sensing channel estimation. § INTRODUCTION The development of massive multiple-input multiple-output (MIMO) systems has spurred a vision to reshape and control transmission environments of electromagnetic waves, leading to the emergence of advanced technologies such as cell-free massive MIMO and reconfigurable intelligent surfaces (RIS) that enhance service coverage and eliminate dead zones in wireless networks<cit.>. Particularly, for centralized large-scale antenna array deployment strategies, like RIS and extremely large-scale MIMO (XL-MIMO)<cit.>, their vast apertures significantly expand the boundaries of the near-field region <cit.>. In practice, mobile devices within the near-field region can achieve higher transmission rates, which, however, requires accurate channel state information. Unfortunately, the proliferation of antennas and distinctive properties of near-field channels introduce additional hurdles in channel estimation (CE).In the literature, compressive sensing (CS)-based techniques have been proposed to reduce the required excessive training overhead in CE by exploiting the intrinsic sparsity of channel matrices<cit.>. In fact, the performance of such algorithms highly depends on the codebooks that match the channel model. 
However, the commonly-adopted codebooks in the far-field region, e.g., discrete Fourier transform (DFT) codebook, show a severe mismatch with the near-field spherical wave transmission model, which results in an energy leakage effect in sparse representation, thereby significantly undermining the performance of CS-based algorithms. On the other hand, although the spherical wave codebook<cit.> matches the near-field transmission model, the columns within the codebook matrix are not mutually orthogonal, which may further cause performance degradation and jeopardize the convergence of the algorithms. Besides, the two spatial degrees of freedom (DoFs), i.e., distance and angle, in the spherical codebook result in increased storage requirements and computational complexity for codebook matching. As a remedy, a polar-domain sampling scheme for the spherical wave codebook was proposed<cit.>. The scheme leverages the inverse proportionality between the mutual correlation of spherical wave steering vectors and distance to significantly reduce the codebook size. Later on, a hierarchical near-field codebook was proposed, where the upper-layer codebooks are exploited for target location search while the lower-layer ones are adopted to achieve the highest beam gain around the steering points<cit.>. However, the aforementioned studies are essentially refinements of the conventional spherical wave codebook, which fail to address the high mutual correlation issue among codewords. An alternative codebook design was recently presented <cit.>, which utilized the spatial-chirp beam to reduce training overhead. Also, dictionary learning was exploited in codebook design <cit.>, which iteratively updated the codebook and reconstructed the channel matrix. Nevertheless, the strict orthogonality among codewords still cannot be ensured and a fine-tune procedure is required for different application scenarios. Hence, designing a codebook that is not only small in size but also column-wise orthogonal remains an open problem. In this paper, we address the mismatch between the DFT vectors and the spherical wave transmission model, and also tackle the non-orthogonality associated with the conventional spherical wave codebook. Specifically, we propose a lightweight yet effective codebook by exploring the eigenvalue-decomposition (EVD) of the near-field channel matrix and reveal that the corresponding eigenvectors admit the form of discrete prolate spheroidal sequences (DPSS). By constructing the codebook exploiting these orthogonal vectors, we inherently avoid an oversized codebook and ensure mutual orthogonality among the codewords. Furthermore, we propose a two-step CE scheme for near-field XL-MIMO and evaluate the performance through simulations. Numerical results demonstrate that the proposed CE scheme with the novel DPSS-based codebook achieves a significant improvement in channel sparsification, thereby contributing to higher accuracy in near-field CE compared to the DFT and spherical codebooks. More importantly, the required size of the proposed DPSS-based codebook is substantially smaller than the conventional DFT and spherical wave codebooks, which leads to less stringent storage requirements.Notations: We use normal-face letters to denote scalars and lowercase (uppercase) boldface letters to denote column vectors (matrices). The k-th row vector and the m-th column vector of matrix H∈ℂ^K× M are denoted as H[k,:] and H[:,m], respectively. { H_n}_n=1^N denotes a matrix set with the cardinality of N. 
The superscripts (·)^T, (·)^ *, (·)^H, and (·)^† represent the transpose, conjugate, conjugate transpose, and pseudo-inverse operators, respectively. 𝒞𝒩(μ,σ^2) denotes the complex Gaussian distribution with mean μ and standard deviation σ, and 𝔼[·] denotes the statistical expectation operator. The 0-norm of a vector ‖·‖_0 counts the number of its non-zero elements. The imaginary unit is represented as j such that j^2=-1. § SYSTEM MODELConsider a user equipment (UE) array[It can be extended to multi-user scenarios by assigning orthogonal pilots for different UEs.] communicates with a base station (BS) equipped with an XL-MIMO array in its near-field region. The generated electric field E( r_ R) at the UE can be expressed by the integral of the spatial impulse response G( r_ T, r_ R) with a current source J( r_ T) at the BS as <cit.>E( r_ R) = ∫_𝒮_ T𝐆( r_ T, r_ R)J( r_ T)  d r_ T,where r_ T=(x_ T,y_ T) and r_ R=(x_ R,y_ R) denote the coordinates of the transmitter and receiver, respectively, and 𝒮_ T denotes the transmit aperture. The impulse response G( r_ T, r_ R) can be derived in dyadic form <cit.> as𝐆( r_ T, r_ R) =jκ Z_0 e^jκ‖ r‖/4π‖ r‖[ (I-r̂r̂^H )+ j/κ‖ r‖(I-3r̂r̂^H ).. -1/(κ‖ r‖)^2(I-3r̂r̂^H ) ] ≃ φ_0 e^-jκ‖ r‖/‖ r‖(I-r̂r̂^H ),where I denotes the identity matrix, φ_0 = jκ Z_0/(4π), κ = 2π/λ is the wavenumber, and Z_0≈ 376.73Ω is the intrinsic impedance of free space. r =r_ R- r_ T and r̂= r/‖ r‖ denotes the direction of r with unit length. For uni-polarized antennas, the impulse response reduces to the scalar form asg( r_ T, r_ R) = φ_0 e^-jκ‖ r‖/‖ r‖.  Consider that both the BS and UE are equipped with uniform linear arrays (ULA)[We consider ULA here for brevity, while it can be extended to other antenna geometries. For example, it can be extended to uniform planar array (UPA) by applying Kronecker products to steering vectors in (<ref>).], the near-field communication scenario is then shown in Fig. <ref>. For the m-th (1≤ m≤ N_ T) antenna element in the transmit array, the downlink line-of-sight (LoS) wireless channel can be modeled as 𝐇_ LoS[:,m]= 𝐠_ R( r_ T^(m))= [ g̃( r_ T^(m), r_ R^(1)),⋯,g̃( r_ T^(m), r_ R^(N_ R)) ]^T,where N_ T and N_ R denote the numbers of antennas at the BS and UE, respectively, and g̃(·) = g(·)/φ_0 is the normalized impulse response. Considering Rician fading, the overall downlink channel matrix can be modeled asH = √(K/1+K)𝐇_ LoS + √(1/1+K)𝐇_ NLoS,where K≥ 0 denotes the Rician factor. The non-line-of-sight (NLoS) channel components satisfy H_ NLoS[n,m]∼𝒞𝒩(0,σ^2), ∀ 1≤ n≤ N_ R, 1≤ m≤ N_ T with σ^2 = 1/(N_ TN_ R).Since XL-MIMO arrays are deployed at both the UE and BS, hybrid analog and digital transceiver architectures have to be considered with practical numbers of radio frequency (RF) chains<cit.>. In this regard, during the downlink training phase, the received signal at the UE from the BS in the t-th training slot can be expressed asy^(t) = (W_ RF^(t) W_ BB^(t))^H(H F_ RF^(t) F_ BB^(t) s^(t) +n^(t)),where W_ RF^(t)∈ℂ^N_ R× N^ RF_ R and W_ BB^(t)∈ℂ^N^ RF_ R× N^ S_ R denote the hybrid combiner matrices, whereas F_ RF^(t)∈ℂ^N_ T× N^ RF_ T and F_ BB^(t)∈ℂ^N^ RF_ T× N^ S_ T denote the hybrid precoders, respectively. N_ R^ RF (N_ T^ RF) and N_ R^ S (N_ T^ S) denote the numbers of RF chains and data streams at the receiver (transmitter), respectively. n^(t)∼𝒞𝒩(0,σ_ n^2I) is the additive white Gaussian noise (AWGN) vector, and s^(t) denotes the pilot signal. From (<ref>), (<ref>), and Fig. 
<ref>, it can be determined that each element in the near-field steering vector g̃( r_ T^(m), r_ R^(n)) = e^-jk√(‖ r_ T^(m)‖^2+‖ r_ R^(n)‖^2-2‖ r_ T^(m)‖‖ r_ R^(n)‖cos(ϑ_m,n))/‖ r_ T^(m)- r_ R^(n)‖ requires information in both the distance and angular domains. This is the main difference between the near-field channel model and the conventional far-field counterpart, where only angular information is decisive <cit.>. Hence, the inclusion of additional parameters related to distance introduces heightened complexity in CE problems. § PROBLEM FORMULATION In this section, we exploit the sparsity of the near-field XL-MIMO channel and formulate the CE problem by capitalizing on the CS technique. Define W^(t) = ( W_ RF^(t) W_ BB^(t))^H and f^(t) = F_ RF^(t) F_ BB^(t) s^(t) for notational brevity, the signal model in (<ref>) can be rewritten as y^(t) = ( ( f^(t))^T⊗ W^(t)) vec( H)+ñ^(t), where ⊗ denotes the Kronecker product, vec(·) denotes the vectorization operation, and ñ^(t) =W^(t) n^(t). Stacking τ training slots together, we obtainy = Φ h+ñ,where y = [( y^(1))^H,⋯,( y^(τ))^H]^H is the overall received signal, Φ = [(( f^(1))^T⊗ W^(1))^H,⋯, (( f^(τ))^T⊗ W^(τ))^H]^H is the measurement matrix, and h =vec( H) is the vectorized downlink channel vector. Estimating h in (<ref>) via linear methods requires excessive training overhead τ≥ N_ TN_ R, which is infeasible in XL-MIMO systems. In light of this, CS-based reconstruction methods were proposed to fully utilize the intrinsic sparsity of H <cit.>, and the sparse reconstruction problem can be formulated as(P1)h̃min ‖h̃‖_0s.t. ‖ΦΨh̃-𝐲‖_2 ≤ε,where h̃ is the sparse support vector to be estimated, ε is the error bound, and Ψ is the codebook matrix. A desirable codebook should match the signal model of h to capture inherent features and efficiently sparsify the channel vector as h̃. Besides, the mutual correlation between codewords in Ψ should be sufficiently low to avoid converging to multiple similar sparse representations that cause ambiguity <cit.>.In conventional far-field CE problems, the channel matrix can be efficiently sparsified by steering matrices with uniform angular domain sampling (i.e., DFT matrices) as H =A_ R^ DH̃( A_ T^ D)^H, where A_ R^ D = [ a_ R(θ_1),⋯, a_ R(θ_β N_ R)]∈ℂ^N_ R×β N_ R.a_ R(θ) = [1,e^jπsinθ,⋯,e^jπ(N_ R-1) sinθ]^H in (<ref>) is the far-field steering vector, β≥ 1 is the oversampling rate, and θ_i = -π/2+iπ/β N_ R, where i=1,⋯,β N_ R. Note that the steering matrix A_ T^ D at the transmitter side entails a similar form to A_ R^ D. Given that h= vec( A_ R^ DH̃( A_ T^ D)^H) = (( A_T^D)^* ⊗ A_R^D) h̃, codebook Ψ is typically designed as Ψ=( A_T^D)^* ⊗ A_R^D∈ℂ^N_ RN_ T×β^2 N_ RN_ T. However, the near-field channel matrix modeled in (<ref>) can no longer be properly sparsified by the far-field steering matrices in (<ref>) due to the model mismatch between 𝐠_ R(·) and a_ R(·), which will lead to a significant power leakage, increasing the number of iterations in CS-based CE algorithms, and degrading the channel reconstruction accuracy <cit.>. § PROPOSED CHANNEL ESTIMATION BASED ON EIGENFUNCTION REPRESENTATIONS In this section, we propose a novel codebook design to combat the challenges introduced by the model mismatch. Specifically, we employ EVD to the auto-correlation matrix of the near-field channel and derive the general form of the eigenvectors. The eigen-codebook is therefore constructed based on the eigenvectors, which are able to efficiently sparsify the near-field channel matrices. 
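As a concrete reference point for the model mismatch discussed at the end of the previous section, the following numpy sketch builds a near-field LoS channel for small illustrative arrays (not the N_T = 192 configuration used in the simulations) and projects it onto the far-field Kronecker codebook; for numerical convenience we use an orthonormal DFT dictionary sampled uniformly in sinθ, a mild simplification of the uniform-in-θ grid above, and all function names are our own.

import numpy as np

def ula_positions(n, spacing, center_y=0.0):
    """Coordinates of a ULA laid out along the x-axis."""
    x = (np.arange(n) - (n - 1) / 2) * spacing
    return np.stack([x, np.full(n, center_y)], axis=1)

def nearfield_los(tx, rx, wavelength):
    """Normalized spherical-wave LoS channel, H[n, m] = exp(-j*kappa*r)/r."""
    kappa = 2 * np.pi / wavelength
    d = np.linalg.norm(rx[:, None, :] - tx[None, :, :], axis=2)
    return np.exp(-1j * kappa * d) / d

def dft_basis(n):
    """Orthonormal far-field dictionary (uniform sampling of sin(theta))."""
    m, i = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return np.exp(-1j * np.pi * m * (2 * i / n - 1)) / np.sqrt(n)

wavelength = 3e8 / 28e9            # 28 GHz carrier
N_T, N_R = 64, 4                   # illustrative sizes
tx = ula_positions(N_T, wavelength / 2)
rx = ula_positions(N_R, wavelength / 2, center_y=3.0)   # UE a few metres away: near field

H = nearfield_los(tx, rx, wavelength)
h = H.reshape(-1, order="F")                            # h = vec(H)

Psi = np.kron(dft_basis(N_T).conj(), dft_basis(N_R))    # Psi = A_T^* kron A_R
h_tilde = Psi.conj().T @ h                              # exact since Psi is unitary

# Energy captured by the strongest far-field atoms; a slow ramp-up reveals
# the leakage caused by the spherical-wave / planar-wave mismatch.
power = np.sort(np.abs(h_tilde) ** 2)[::-1]
frac = np.cumsum(power) / power.sum()
print(f"energy in the 8 strongest DFT atoms: {frac[7]:.1%}")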
Furthermore, a two-step CE scheme is proposed to fully exploit the advantages of the proposed codebook. §.§ Codebook DesignRecall that problem (P1) requires the identification of a codebook Ψ that efficiently sparsifies the near-field channel matrix. In this regard, the singular value decomposition (SVD) decomposes the channel matrix in the form of H =UΣ V^H, where H can be properly sparsified to a diagonal singular value matrix Σ by unitary matrices U and V. The resulting codebook Ψ =V^*⊗ U also shows mutual orthogonality between codewords. Inspired by the SVD, we consider designing the codebook matrix exploiting the singular vectors. Since the channel matrix is not a square matrix when N_ R≠ N_ T, singular vectors can be obtained separately from the corresponding EVD of the auto-correlation matrices. For the transmit eigenvectors, we first define the auto-correlation matrix byR_ T    =𝔼[ H^H H]   =K/1+K H_ LoS^H H_ LoS+1/1+K𝔼[H_ NLoS^H H_ NLoS]   =γ K H_ LoS^H H_ LoS+γ I,where we denote γ=1/(1+K) for notational brevity. The identity matrix on the right-hand side has no impact on calculating eigenvectors since it only adds γ to each eigenvalue. Therefore, the element located at the m^'-th row and m-th column of R_ T can be expressed by R_ T[m^',m]= γ K𝐠_ R^H( r_ T^(m))𝐠_ R( r_ T^(m))+γ1_m,m^'=γ1_m,m^'+γ K∑_n=1^N_ Re^-jκ‖ r_ T^(m)- r_ R^(n)‖/‖ r_ T^(m)- r_ R^(n)‖×e^jκ‖ r_ T^(m^')- r_ R^(n)‖/‖ r_ T^(m^')- r_ R^(n)‖,where 1_m,m^' is the indicator function. Introducing the near-field paraxial approximation<cit.>, we haveR_ T[m^',m] ≈  γ1_m,m^'  +γ K/r_0^2∑_n=1^N_ Re^-jκ(x_ T^(m)-x_ R^(n))^2-(x_ T^(m^')-x_ R^(n))^2/2y_0 =  γ1_m,m^'+γ K e^jκ(x_ T^(m^'))^2-(x_ T^(m))^2/2y_0/r_0^2  ×∑_n=1^N_ R e^jκx_ R^(n)(x_ T^(m)-x_ T^(m^'))/y_0 ≜  γ1_m,m^'+γ K e^jκ(x_ T^(m^'))^2-(x_ T^(m))^2/2y_0 R^'_ T[m^',m],where r_0 is an approximation of the distance term in the denominator of the second term in (<ref>) and y_0 denotes the center of the y-coordinate of the UE array. Hence, we can rewrite the EVD procedure of R_ T as R_ T v_m =D_ T^-1( γ K𝐑^'_ T+γ I) D_ T v_m = λ_m v_m,where v_m is the m-th eigenvector of R_ T and λ_m is the corresponding eigenvalue. The compensation matrix D_ T is an (x_ T^(m))^2-related phase term extracted from R_ T according to (<ref>) asD_ T =diag(e^jκ(x_ T^(1))^2/2y_0,⋯,e^jκ(x_ T^(N_ T))^2/2y_0). Note that extracting D_ T from R_ T only changes the phase of each eigenvector since ( γ K𝐑^'_ T+γ I) D_ T v_m = λ_mD_ T v_m. Furthermore, (<ref>) yieldsR_ T^'[m^',m]   =1/r_0^2∑_n=1^N_ R e^jκx_ R^(n)(x_ T^(m)-x_ T^(m^'))/y_0(a)≈1/r_0^2∫_-L_ R/2^L_ R/2e^jκ/y_0x (x_ T^(m)-x_ T^(m^')) dx    = 2y_0sin[ κ L_ R (x_ T^(m)-x_ T^(m^') )/2y_0]/ r_0^2 κ(x_ T^(m)-x_ T^(m^'))   ∝sin[2π W (x_ T^(m)-x_ T^(m^')]/(x_ T^(m)-x_T^(m^')),where L_ R=(N_ R-1)λ/2 denotes the aperture of the UE array with half-wavelength antenna spacing. Note that (a) asymptotically holds when N_ R is sufficiently large. R_ T^' is a Toeplitz matrix with each column composed of a shifted sinc function, and the m-th eigenvector D_ T v_m of this matrix is called the (m-1)-th order discrete prolate spheroidal sequence (or Slepian sequence) within frequency W=κ L_ R/(4π y_0) <cit.>. Typically, estimating the auto-correlation matrix requires a large number of samples. However, with the result in (<ref>), the auto-correlation matrix can be well-determined directly by a series of sinc functions given the frequency W, from which we can generate the codebook using an efficient EVD operation. 
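The transmitter-side construction just described is easy to prototype. The sketch below (our own illustrative code, not the authors' implementation) assembles the sinc-kernel Toeplitz matrix from the element positions, takes its eigenvectors — the discrete prolate spheroidal (Slepian) sequences — applies the phase compensation D_T, and checks that the resulting codewords are mutually orthogonal.

import numpy as np

def dpss_codebook(n_t, wavelength, L_R, y0):
    """Transmitter eigen-codebook: eigenvectors of the sinc-kernel Toeplitz
    matrix R'_T, phase-compensated by D_T (an illustrative sketch)."""
    kappa = 2 * np.pi / wavelength
    x = (np.arange(n_t) - (n_t - 1) / 2) * wavelength / 2   # element positions
    W = kappa * L_R / (4 * np.pi * y0)                       # DPSS bandwidth

    diff = x[:, None] - x[None, :]
    R = 2 * W * np.sinc(2 * W * diff)        # sin(2*pi*W*d)/(pi*d), Toeplitz
    eigval, eigvec = np.linalg.eigh(R)       # eigenvectors = Slepian sequences
    order = np.argsort(eigval)[::-1]         # sort by decreasing eigenvalue

    D_T = np.diag(np.exp(1j * kappa * x**2 / (2 * y0)))   # compensation matrix
    return D_T @ eigvec[:, order], eigval[order]

wavelength = 3e8 / 28e9
N_T, N_R = 64, 4                             # illustrative sizes
L_R = (N_R - 1) * wavelength / 2
U_T, lam = dpss_codebook(N_T, wavelength, L_R, y0=3.0)

# Columns are orthonormal by construction (D_T is unitary and eigh returns an
# orthonormal basis), unlike the polar-domain spherical-wave codebook.
assert np.allclose(U_T.conj().T @ U_T, np.eye(N_T), atol=1e-8)
print("largest eigenvalues:", np.round(lam[:6], 3))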
Similarly, we can calculate the eigenvectors { u_n }_n=1^N_ R of R_ R =H H^H=𝐔Λ^'𝐔^H∈ℂ^N_ R× N_ R and finally form the eigen-codebook matrix as Ψ_e = 𝐃_ 𝐓𝐕^* ⊗𝐃_ 𝐑𝐔∈ℂ^N_ RN_ T× N_ RN_ T for problem (P1). By resorting to the EVD tailored for the near-field channel matrix, we can effectively eliminate the mismatch issue associated with the DFT codebook. Moreover, the proposed DPSS-based eigen-codebook naturally holds orthogonality among columns, since both V and U are unitary matrices. This is one of the key advantages compared to the spherical codebook <cit.>, which shall be validated via simulation in the next section.§.§ Proposed Two-Step Near-Field Channel Estimation To calculate the near-field eigen-codebook matrix, we need to know the (approximate) location of the UE[Establishing a coordinate system with the BS as the origin can avoid dependence on its location information, but we still require the location of the UE.]. In this subsection, we propose a two-step algorithm for near-field CE, which firstly estimates the location and then designs the eigen-codebook to solve the sparse reconstruction problem (P1). The two-step CE procedure can be given as * Coarse Localization: Construct a spherical wave codebook Ψ_p with angle and distance sampled in the polar-domain <cit.> for coarse location estimation as i= argj max ‖( Ψ_p^H)[j,:] Φ^H y‖^2, from which the location coordinates (x̂_i,ŷ_i) can be obtained through index-coordinate mapping of the codebook Ψ_p. * Channel Estimation with the proposed DPSS-based Eigen-Codebook: Calculate the compensation matrix D̂_ T (or D̂_ R) in (<ref>) with estimated coordinates[According to the paraxial approximation in (<ref>), employing exp ( -jκ(x̂_i)^2/2ŷ_i ) for all diagonal elements to construct D̂_ T (or D̂_ R) provides an accurate approximation of D_ T (or D_ R) as expressed in (<ref>).] (x̂_i,ŷ_i). Construct the DPSS-based eigen-codebook Ψ_e according to Algorithm <ref> and employ CE with CS-based algorithms such as orthogonal matching pursuit (OMP) <cit.>. § SIMULATION RESULTSIn this section, we evaluate the channel reconstruction performance based on the proposed eigen-codebook via numerical simulations. The performance is evaluated by normalized mean square error (NMSE) asNMSE( Ĥ , H) = 𝔼[‖Ĥ - H‖_F^2/‖ H‖_F^2],where ‖·‖_F is the Frobenius norm, and Ĥ is an estimation of H.§.§ Simulation Setup Throughout the simulation, we consider the large arrays at the BS and UE are equipped with N_ T=192 and N_ R = 4 antennas with half-wavelength spacing, respectively, and the carrier frequency is set as f_c = 28GHz. The BS array is placed symmetrically onthe x-axis and the UE is in the near-field region of the BS as shown in Fig. <ref>. Unless otherwise specified, we deploy a single RF chain at both the BS and UE. The distance from the UE to the center of the BS array is selected uniformly from [1m, 20m], and the Rician factor is set to K = 13dB <cit.>.We mainly consider three types of codebooks in the simulation, namely the DFT codebook <cit.>, the spherical wave codebook in the polar-domain <cit.>, and the proposed DPSS codebook. For the DFT codebook, we set the number of angle grids as β N_ T and β N_ R at the BS and UE, respectively, with β being the oversampling rate in (<ref>). For the spherical wave codebook, both angle and distance grids are set as β√(N_ T) and β√(N_ R) at the BS and UE, respectively, where · denotes the rounding operator. 
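For reference, a compact orthogonal matching pursuit (OMP) routine of the kind used in the second step above is sketched here; the random sensing matrix and sparse vector are our own stand-ins for Φ Ψ_e and h̃ and serve only to illustrate the interface.

import numpy as np

def omp(A, y, n_iter):
    """Orthogonal matching pursuit: greedily pick the column of A most
    correlated with the residual, then least-squares refit on the support."""
    residual, support = y.copy(), []
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        if j not in support:
            support.append(j)
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x[support] = coeffs
    return x

# Synthetic sparse-recovery instance standing in for (P1) with A = Phi @ Psi_e.
rng = np.random.default_rng(0)
n_meas, n_atoms, sparsity = 96, 256, 5
A = (rng.normal(size=(n_meas, n_atoms)) + 1j * rng.normal(size=(n_meas, n_atoms))) / np.sqrt(2 * n_meas)
x_true = np.zeros(n_atoms, dtype=complex)
idx = rng.choice(n_atoms, sparsity, replace=False)
x_true[idx] = rng.normal(size=sparsity) + 1j * rng.normal(size=sparsity)
y = A @ x_true + 0.01 * (rng.normal(size=n_meas) + 1j * rng.normal(size=n_meas))

x_hat = omp(A, y, n_iter=sparsity)
nmse = np.linalg.norm(x_hat - x_true) ** 2 / np.linalg.norm(x_true) ** 2
print(f"OMP reconstruction NMSE: {10 * np.log10(nmse):.1f} dB")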
In this case, as was mentioned in Section <ref>, the sizes of the DFT codebook and spherical wave codebook are β^2N_ TN_ R and β√(N_ R)β√(N_ T)≃β^2N_ TN_ R, respectively. Note that the size of the proposed DPSS-based codebook is irrelevant to β because the number of eigenvectors will not change. Additionally, the compressive ratio (CR) of sparse reconstruction problem is defined as μ = τ/(N_ RN_ T). For fair comparison, the performance achieved by all codebooks is evaluated based on the OMP algorithm. §.§ Numerical Results We firstly investigate the sparsification performance of the proposed codebook. The channel sparse representations of the DFT codebook, spherical wave codebook, and proposed codebook are compared in Fig. <ref>(a), where the sparse representation is obtained by h̃ = Ψ^† h. As can be observed, the conventional DFT codebook shows a severe energy leakage effect in the near-field region, which can be improved by the spherical wave codebook sampled in the polar-domain. Meanwhile, the proposed DPSS-based eigen-codebook entails the sparsest pattern among the three codebooks. Note that the proposed codebook is compensated by the matrix D̂_ T( R) and therefore, the sparse representation shows no specific angular information. In addition, different from the DFT and spherical codebooks, the support appears in the first several indices since the SVD always sorts the non-zero singular values first.On the other hand, Fig. <ref>(b) plots the colormap that represents the values of Ψ^HΨ. As can be observed, the codewords in the proposed DPSS-based codebook are strictly orthogonal to each other, which is far beyond the capabilities of the spherical codebook. We further validate the approximation error of the derivation procedure in (<ref>). As is depicted in Fig. <ref>(c), the auto-correlation curve stands for the absolute value of R_ T^'[1,:], while the red circles show the value of the normalized sinc function. The approximation procedure shows negligible error, which confirms the high accuracy of our proposed approximation in (<ref>).We then investigate the CE accuracy performance with CR μ = {0.25, 0.4, 0.6}. The oversampling rate β is set to 1 to keep the sizes of the three considered codebooks identical. The reconstruction accuracy increases as μ increases, where the proposed DPSS-based eigen-codebook achieves the best NMSE performance as shown in Fig. <ref>. In particular, at I=1 the proposed method shows the same performance as the spherical wave method since we regard the first-step coarse localization in Section <ref> as one iteration. The slight performance drop at I=2 is also due to an abrupt codebook switch at the second step of the proposed CE scheme. Starting from I=10, thanks to the excellent ability to sparsify the near-field channel with mutually orthogonal codewords, the proposed DPSS-based eigen-codebook outperforms the baselines by a large margin and converges to the lowest NMSE among the considered codebooks.We then evaluate the CE performance with oversampling rates β={1,2,3} when CR μ=0.4. As is shown in Fig. <ref>, We can see significant performance improvement for all schemes by increasing β, while the proposed DPSS-based codebook still achieves the highest reconstruction accuracy within sufficient iterations. However, the performance gains for the DFT and spherical codebooks are achieved at the cost of larger codebook sizes. Specifically, as mentioned in Section <ref>, their sizes increase quadratically with β, i.e., β^2N_ TN_ R. 
In contrast, the increase in β only affects the localization accuracy in the first step of our proposed CE scheme while the size of the DPSS-based codebook remains N_ TN_ R. In other words, the CE performance achieved by the DPSS-based codebook tremendously outperforms those of two baselines even with a much smaller codebook size.As mentioned in Section <ref>, the proposed CE scheme involves a coarse localization as the first step. In Fig. <ref>, we investigate the NMSE performance versus different localization errors ϵ = √((x̂_i-x̅_ R)^2+(ŷ_i-y̅_ R)^2) ( m), which denotes the distance from the center of the UE array (x̅_ R,y̅_ R) to the estimated location coordinate (x̂_i,ŷ_i) in (<ref>). ϵ is assumed to be uniformly distributed within a circular area. As can be observed, the proposed codebook ensures the convergence of the OMP algorithm within the considered error levels[According to the recent field-test <cit.>, the 95th percentile of the localization error is observed to be around 0.2m.], while more OMP iterations are required for a larger value of ϵ. This result demonstrates the robustness of the proposed CE scheme against the localization error. §.§ Storage Analysis We further evaluate the storage requirements of the codebook given a target convergence NMSE. As is shown in Table <ref>, we compare the minimum required codebook size, i.e., the number of codewords, to achieve the NMSE targets {-15, -20, -25, -30}  dB. The sizes of the DFT and spherical codebooks keep increasing with higher NMSE requirements, while the DPSS-based codebook size remains constant. Additionally, the -30 dB NMSE cannot be achieved by enlarging the sizes of the two baseline codebooks, and the corresponding sizes are displayed as N/A. In particular, thanks to its mutual orthogonality among codewords, the DFT codebook can satisfy more stringent NMSE requirements with only slightly larger sizes. Yet, its mismatch with the near-field channel model still leads to a bulkier codebook compared to the proposed DPSS-based one. On the other hand, the two DoFs in both distance and angle of the spherical wave codebook dramatically add to the codebook size as the resolution requirement increases. Compared to the DFT and spherical wave codebook, the proposed DPSS-based codebook does not need to sacrifice NMSE performance for a lower storage, and its orthogonality enables it to converge faster than the spherical wave codebook.§ CONCLUSIONIn this paper, we proposed a novel DPSS-based eigen-codebook for near-field XL-MIMO CE. By leveraging the EVD associated with the near-field channel, the proposed codebook achieves mutual orthogonality among codewords, and outperforms conventional DFT and polar-domain spherical wave codebooks in channel sparsification. We further proposed a two-step CE scheme, with which our proposed DPSS-based codebook achieves the best NMSE performance in CE. Furthermore, we compared the minimum required codebook size for different NMSE targets, which proved the proposed codebook effectively reduces the storage requirements.99 WC21Yu X. Yu, V. Jamali, D. Xu, D. W. K. Ng and R. Schober, “Smart and reconfigurable wireless communications: From IRS modeling to algorithm design," IEEE Wireless Commun., vol. 28, no. 6, pp. 118-125, Dec. 2021. TWC17Ngo H. Q. Ngo, A. Ashikhmin, H. Yang, E. G. Larsson, and T. L. Marzetta, “Cell-free massive MIMO versus small cells," IEEE Trans. Wireless Commun., vol. 16, no. 3, pp. 1834-1850, Mar. 2017. WC23Wang Z. 
Wang et al., “Extremely large-scale MIMO: Fundamentals, challenges, solutions, and future directions," IEEE Wireless Commun., Apr. 2023. JSAC20Dardari D. Dardari, “Communicating with large intelligent surfaces: Fundamental limits and models,” IEEE J. Sel. Areas Commun., vol. 38, no. 11, pp. 2526-2537, Nov. 2020. TIT06Donoho D. L. Donoho, “Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289-1306, Apr. 2006. WCL19Han Y. Han, S. Jin, C.-K. Wen, and X. Ma, “Channel estimation for extremely large-scale massive MIMO systems,” IEEE Wireless Commun. Lett., vol. 9, no. 5, pp. 633-637, May 2020. TCOM22Cui M. Cui and L. Dai, “Channel estimation for extremely large-scale MIMO: Far-field or near-field?” IEEE Trans. Commun., vol. 70, no. 4, pp. 2663-2677, Apr. 2022. WCL22Chen J. Chen, F. Gao, M. Jian, and W. Yuan, “Hierarchical codebook design for near-field mmWave MIMO communications systems” IEEE Wireless Commun. Lett., to appear. TWC23Shi X. Shi, J. Wang, Z. Sun, and J. Song, “Spatial-chirp codebook-based hierarchical beam training for extremely large-scale massive MIMO,” IEEE Trans. Wireless Commun., to appear. TCOM23Zhang X. Zhang, H. Zhang, and Y. C. Eldar, “Near-field sparse channel representation and estimation in 6G wireless communications," IEEE Trans. Commun., to appear. TIT05Poon A. S. Y. Poon, R. W. Brodersen, and D. N. C. Tse, “Degrees of freedom in multiple-antenna channels: A signal space approach,” IEEE Trans. Inf. Theory, vol. 51, no. 2, pp. 523-536, Feb. 2005. AO20Miller D. Miller, “Communicating with waves between volumes: Evaluating orthogonal spatial channels and limits on coupling strengths,” Appl. Opt. vol. 39, no. 11, pp. 1681-1699, 2000. TSP20Ke M. Ke, Z. Gao, Y. Wu, X. Gao, and R. Schober, “Compressive sensing-based adaptive active user detection and channel estimation: Massive access meets massive MIMO,” IEEE Trans. Signal Process., vol. 68, pp. 764-779, Jan. 2020. SPL17Miandji E. Miandji, M. Emadi, J. Unger, and E. Afshari, “On probability of support recovery for orthogonal matching pursuit using mutual coherence,” IEEE Signal Process. Lett., vol. 24, no. 11, pp. 1646-1650, Nov. 2017. Slepian54TIT D. Slepian, “Estimation of signal parameters in the presence of noise," IRE Trans. Inf. Theory., vol. 3, no. 3, pp. 68-89, Mar. 1954. TWC18Rod J. Rodríguez-Fernández, N. González-Prelcic, K. Venugopal, and R. W. Heath, Jr., “Frequency-domain compressive channel estimation for frequency-selective hybrid millimeter wave MIMO systems," IEEE Trans. Wireless Commun., vol. 17, no. 5, pp. 2946-2960, May 2018. TWC22Sakhnini A. Sakhnini, S. De Bast, M. Guenach, A. Bourdoux, H. Sahli, and S. Pollin, “Near-field coherent radar sensing using a massive MIMO communication testbed,” IEEE Trans. Wireless Commun., vol. 21, no. 8, pp. 6256-6270, Aug. 2022.
http://arxiv.org/abs/2310.18180v1
{ "authors": [ "Shicong Liu", "Xianghao Yu", "Zhen Gao", "Derrick Wing Kwan Ng" ], "categories": [ "cs.IT", "eess.SP", "math.IT" ], "primary_category": "cs.IT", "published": "20231027144912", "title": "DPSS-based Codebook Design for Near-Field XL-MIMO Channel Estimation" }
Deep Quantum Circuit Simulation of Low-energy Nuclear States]Deep Quantum Circuit Simulations of Low-energy Nuclear States 1,2]Ang [email protected]]Alessandro [email protected]]Ionel [email protected][1]Travis S. [email protected][1]Quantum Science Center, Oak Ridge National Laboratory, One Bethel Valley Road, Oak Ridge, 37831, Tennessee, USA[2]Physical and Computational Sciences Directorate, Pacific Northwest National Laboratory, 902 Battelle Blvd, Richland, 99354, Washington, USA[3]Theoretical Division, Los Alamos National Laboratory, P.O. Box 1663, MS B283, Los Alamos, 87545, New Mexico, USA Numerical simulation is an important method for verifying the quantum circuits used to simulate low-energy nuclear states. However, real-world applications of quantum computing for nuclear theory often generate deep quantum circuits that place demanding memory and processing requirements on conventional simulation methods. Here, we present advances in high-performance numerical simulations of deep quantum circuits to efficiently verify the accuracy of low-energy nuclear physics applications. Our approach employs several novel methods for accelerating the numerical simulation including 1- and 2-qubit gate fusion techniques as well as management of simulated mid-circuit measurements to verify state preparation circuits. We test these methods across a variety of high-performance computing systems and our results show that circuits up to 21 qubits and more than 115,000,000 gates can be efficiently simulated.[ * January 14, 2024 ====================§ INTRODUCTIONQuantum computing offers many opportunities to explore the complex interactions of many-body nuclear physics <cit.>. This includes enabling efficient methods for modeling quantum physical processes as well as new techniques to simulate their outcomes <cit.>. Recent discoveries in preparing quantum states to model nuclear physics have shown the efficacy of these methods for studying both structure and dynamics <cit.>. In the low-energy regime, this includes calculating binding energies <cit.>, quadrupole moments <cit.>, and electromagnetic transition rates <cit.>. With early validation at small scales, quantum simulation methods for low-energy nuclear physics are expected to solve larger systems that are currently inaccessible to conventional approaches and thus expand computing capabilities for scientific discovery <cit.>A powerful approach to these forms of digital quantum simulation rely on quantum circuits as sequences of gates to prepare and transform a quantum state under a defined Hamiltonian <cit.>. Accurate implementations of these quantum circuits are used to estimate observable outcomes that describe the prepared state <cit.>. The structure of the quantum circuits for quantum simulation are often derived from first-principles models of the underlying many-body problem that are then translated into exact or approximate operations <cit.>. For example, unitary coupled cluster (UCC) theory has been used recently to solve nuclear shell models by constructing quantum circuits that first encode fermionic fields in a spin representation and then apply unitary transformations to approximate the ground state <cit.>. By transforming these physics-driven quantum circuits into the spin representation, the subsequent unitary operator decomposition can be executed using a quantum computer <cit.>. 
Quantum circuits, therefore, represent the functionality of software programs that run on quantum computing hardware to perform these types of nuclear physics simulations. An important step in designing the quantum simulation circuits is verification, which checks the correctness of the quantum circuit construction and the outputs observed during execution.Numerical simulation is a direct approach to verify the circuit transformations that can be used to check the prepared quantum state as well as the calculation of statistics and diagnostics <cit.>.However, numerical simulation of quantum circuits is challenged by the exponential memory requirements with respect to system size for representing quantum states. Additionally, the probabilistic nature of quantum circuit execution often generates a combinatorial expansion of possible circuit outcomes that must be tracked. Formally, computational complexity arguments suggest that conventional numerical methods are intractable for simulating arbitrary quantum circuits, and it is widely anticipated that the applicability of numerical simulation is upper bounded by circuit width and depth, which is a concern for verification of quantum circuits. The continued development of numerical simulation techniques to accelerate and optimize such calculations is an area of active research within the quantum computing community. The challenges for numerical simulation extend immediately to applications in nuclear physics, where deep quantum circuits spanning many qubits are often required to create highly accurate quantum states of the many-body problem. The case of deep quantum circuits is challenging for numerical simulation because these circuits are constructed from long sequences of unitary transformation, i.e., gates, to ensure accurate preparation of the intended quantum state <cit.>. Recent advances in using mid-circuit measurements as part of more efficient state preparation methods has helped to reduce circuit depth <cit.>, but this approach requires numerical simulation methods to manage the probabilistic outcomes generated by such measurements. Here, we show how numerical simulations of deep quantum circuits can be used to efficiently verify the accuracy of low-energy nuclear physics applications. Our results demonstrate the performance scaling of quantum circuit simulations using novel methods to accommodate mid-circuit measurement integrated into deep circuit constructions. A principal approach is to leverage the right integration of GPU-accelerated numerical simulation methods. The leading contributions of this work are the development of mid-circuit measurement techniques to amplify the probability of observable outcomes; the improved efficiency of simulatingmid-circuit measurement by avoiding repeated circuit sampling, and the improved efficiency of deep circuit numerical simulations using gate fusion. The remainder is organized as follows. Section <ref> introduces background on the theoretical model for the second quantized Hamiltonian, the quantum algorithms for simulating the corresponding states, and methods and notation for numerical simulation of these quantum circuits. Section <ref> specifies the algorithm for state preparation of low-energy nuclear states. Details about the numerical simulation methods are provided in Section <ref> with results presented in Section <ref>. Concluding remarks are offered in Section <ref>. 
§ BACKGROUND
§.§ Theoretical Model and Qubit Mapping
The nucleons inside the atomic nucleus interact via two-, three-, and higher-body forces, with a clear hierarchy, in which two-body interactions dominate, followed by the many-body forces whose contributions diminish with increasing number of bodies involved. A natural way to take into account the above constraint is given by chiral effective field theory (χEFT). In χEFT, the symmetries of QCD, the underlying theory of the strong interactions of quarks and gluons, are used to write down Lagrangians that describe the interactions between nucleons, and between nucleons and pions; each interaction term is multiplied by a low-energy constant that is fixed using some of the available experimental data. Interactions developed in this framework successfully predict nuclear binding energies up to A=48 <cit.>. In second quantization, the Hamiltonian governing the static and dynamic properties of the nuclear system can be written as H=∑_i,j=1,N_s t_ij a^†_i a_j +1/2∑_ij,kl V_ij,kl a_i^† a_j^† a_l a_k+⋯, where N_s is the number of single particle states included in the Hilbert space, t_ij and V_ij,kl are the one- and two-body matrix elements of the nucleon-nucleon interaction, while a^†_i (a_i) are the creation (annihilation) operators for state i. Given that the mean field is a good first approximation, we have allowed a general one-body term in Eq. (<ref>). Using realistic full-space inter-nucleon interactions, like the ones rigorously constructed through χEFT, would not be feasible in the near future due to hardware limitations. Hence, a less demanding testing ground for quantum algorithms has been identified in simpler models like the Lipkin-Meshkov-Glick model <cit.> or the phenomenological shell model <cit.>. In particular the shell model offers the advantage that it has some of the complexity of realistic nucleon-nucleon interactions in a small model space, even though it was found to produce very entangled states <cit.>. The simulations presented in this paper were performed using the Cohen-Kurath interaction <cit.> in the p shell, where the active space is restricted to six proton and neutron states with j=1/2 and j=3/2, assuming a ^4He inert core. We would like to emphasize that while an excellent testing ground for quantum algorithms and even hardware benchmarks, shell model calculations in larger model spaces are not our final goal but rather an intermediate step for the time being. The first step in the process of implementing various algorithms on quantum hardware and/or numerical simulators is to map this interaction to the qubit space, and different mappings could have advantages over others, although the advantage is not always clear nor easy to quantify. The Jordan-Wigner (JW) <cit.> or occupation mapping is the most intuitive second quantization scheme. In this case, the creation and annihilation operators are given by a^†_i = 1/2(∏_j=0^i-1Z_j) (X_i-iY_i), a_i = 1/2(∏_j=0^i-1Z_j )(X_i+iY_i), where the X_i, Y_i and Z_i are the Pauli matrices acting on qubit i. Thus, each single-particle state is associated with a single qubit, and in order to describe a system with N_s single-particle states, the number of qubits required is N_s. If the qubit is measured in state |0⟩, the state is unoccupied, while a measurement in state |1⟩ represents an occupied state. The Jordan-Wigner mapping is very simple and intuitive, but it has a couple of disadvantages. First, it describes a Fock space of dimension 2^N_s that contains particle numbers from zero to N_s.
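As an aside before returning to this particle-number bookkeeping, the mapping in Eq. (<ref>) is easy to verify numerically for small N_s. The sketch below is purely illustrative (dense matrices, not part of the simulator described later); it builds the Jordan-Wigner creation operators as Kronecker products of Pauli matrices and checks the fermionic anticommutation relations.

```python
import numpy as np

# Pauli matrices and identity
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_chain(ops):
    """Kronecker product of a list of 2x2 operators (qubit 0 leftmost)."""
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def jw_creation(i, n_s):
    """Jordan-Wigner a_i^dagger = (prod_{j<i} Z_j) (X_i - i Y_i)/2 on n_s qubits."""
    ops = [Z] * i + [(X - 1j * Y) / 2] + [I2] * (n_s - i - 1)
    return kron_chain(ops)

n_s = 4                                    # small illustrative single-particle space
a_dag = [jw_creation(i, n_s) for i in range(n_s)]
a = [op.conj().T for op in a_dag]

# check {a_i, a_j^dagger} = delta_ij and {a_i, a_j} = 0
for i in range(n_s):
    for j in range(n_s):
        acomm = a[i] @ a_dag[j] + a_dag[j] @ a[i]
        assert np.allclose(acomm, np.eye(2 ** n_s) * (i == j))
        assert np.allclose(a[i] @ a[j] + a[j] @ a[i], 0)
print("Jordan-Wigner anticommutation relations verified for N_s =", n_s)
```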
For particle-conserving Hamiltonians such as the ones governing the nuclear many-body system, only one subspace with a fixed particle number is active at one time. Moreover, for the nuclear problem, the JW mapping is even less efficient given that the Hamiltonian concurrently preserves the proton and neutron particle numbers. Let us take an example considering N_p protons and N_n neutrons, each in N_s states. In this case, for large N_s, and using the Stirling approximation <cit.>, the number of many-body states with N_p protons and N_n neutrons is Ñ≈(N_s^2/[(N_s-N_p)(N_s-N_n)])^N_s × (N_s-N_p)^N_p (N_s-N_n)^N_n e^-N_p-N_n, which is much smaller than the dimension of the Fock space 2^2N_s represented in the Jordan-Wigner mapping. Second, in order to represent state i one must include i-1 operators acting on the previous qubits, and hence, the number of one-qubit gates and entanglement gates in general is quite large. This poses challenges for applications on current and near-term devices. While less transparent, other more efficient mappings within the second quantization framework exist. One alternative is the Bravyi-Kitaev mapping <cit.>, where the qubits store partial sums of occupation numbers, requiring the number of states to be powers of 2. This mapping can be as efficient, and in many cases even more efficient, in quantum chemistry calculations of ground states of molecular systems <cit.>. In the parity representation, the n^th qubit stores the sum of the parity of the first n modes <cit.>. However, in applications to quantum chemistry, this scheme was not found to be particularly useful <cit.>. A first quantization approach would be more efficient in terms of the number of qubits required to describe the system. In this case, any single-particle state is encoded as a fixed combination of many qubit states, with the number of required qubits being given by ≈log_2(N_s). In each case, to describe each particle one needs the same number of qubits, so that the total number of qubits would be N_q≈ nlog_2(N_s), with n the number of particles. It is clear that for this mapping, the scaling is more advantageous as nlog_2(N_s)≪ N_s for n≪ N_s. The main disadvantage of this mapping, however, is that the many-body system is not automatically antisymmetric for a system of fermions. For a large number of particles this is especially challenging. However, a recent quantum algorithm <cit.> was proposed for antisymmetrizing identical fermions with a gate complexity of O(log^cn log_2log_2 N_s) and a circuit size of O(nlog^cnlog_2 N_s). The value of c depends on the choice of the specific algorithm. The advantages of each mapping scheme are not obvious. One possible middle ground between the second and the first quantization mapping could be the scheme proposed in Ref. <cit.>, in which one preserves the antisymmetrization of the basis states, thus eliminating the need for a complicated antisymmetrization first step. In addition, while the main justification for the work in Ref. <cit.> is particle-number conservation, other symmetries can be considered as well. Thus, this approach is perfectly fitted for the shell model. In an occupation basis, the Hamiltonian in Eq. (<ref>) becomes H=∑_α,β⟨α|H|β⟩|α⟩⟨β|, where |α⟩, |β⟩ are many-body states that preserve not only the particle number, but also the projection of the total angular momentum in the M scheme. For the j-j coupling, the many-body states |α⟩ and |β⟩ have good total angular momentum; in this case, the matrix H is block diagonal, as [H,J^2]=0.
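A quick way to gauge the qubit saving of such an encoding is to count Slater determinants directly. The sketch below is a rough illustration only: it treats a single nucleon species, ignores the M_tot restriction (so the encoded-basis counts are upper bounds), and uses the neutron-only test cases introduced later in the paper rather than reproducing the entries of the comparison table.

```python
from math import comb, ceil, log2

def qubit_counts(n_s, n_part):
    """Jordan-Wigner qubit count vs direct encoding of a fixed-particle-number
    basis for a single nucleon species (upper bound: M_tot restriction ignored)."""
    n_jw = n_s                                # one qubit per single-particle state
    dim = comb(n_s, n_part)                   # Slater determinants with n_part particles
    n_enc = max(1, ceil(log2(dim)))           # qubits needed to label those states
    return n_jw, dim, n_enc

# neutron-only cases used later: 2 neutrons in the p shell, 4 in the sd shell, 4 in the pf shell
for n_s, n_part in ((6, 2), (12, 4), (20, 4)):
    n_jw, dim, n_enc = qubit_counts(n_s, n_part)
    print(f"N_s={n_s:2d}, n={n_part}: JW qubits={n_jw:2d}, dim={dim:5d}, encoded qubits<={n_enc:2d}")
```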
In the j-j scheme the matrix that needs to be diagonalized is less sparse than in the M scheme, but the dimension is smaller. While it is unclear how this trade-off between sparsity and dimension would map onto quantum hardware, it is clear that many of the tools <cit.> developed for large-scale shell model calculations can be adopted for porting the same type of problems onto quantum hardware. Basis states generated in M or j-j schemes are assigned to a certain combination of qubits. A similar mapping was used in Ref. <cit.> to solve for the deuteron in relative coordinates in a harmonic oscillator basis. In Table <ref>, we compare the number of qubits necessary to represent a nuclear system with N_p protons and N_n neutrons for the Jordan-Wigner mapping (N_q^JW) and using encoding of the basis states into combinations of qubits (N_q^SM), with a fixed projection of the spin M_tot. It is immediately clear that while it might be more advantageous to encode the shell model basis, the number of Pauli strings in this mapping (N_Pauli^SM) quickly surpasses the number of Pauli strings necessary for the JW mapping (N_Pauli^JW). Thus, the advantage of using the qubit-efficient encoding quickly goes away, in particular for the sd shell model space, where the number of single-particle states included in the calculation increases to 12 for each species, assuming an inert ^16O core, and using the “universal" SD interaction <cit.>. Note that in Table <ref> we used an arbitrary order for encoding, and not the one based on the Gray code that could potentially be more efficient <cit.>. Nevertheless, we do not expect the picture presented in Table <ref> to dramatically change.

§ ALGORITHM DESIGN
§.§ Projection Algorithm for State Preparation
Several algorithms are now available for preparing selected states on quantum hardware <cit.>. A projection algorithm like the one introduced in Ref. <cit.> could be better suited to the nuclear physics problems, as it can be adapted to take into account even limited information about the nuclear spectrum. All projection algorithms use a series of time evolutions and measurements of one or more ancilla qubits connected to the system in order to project on a desired state. To project the ground state, in Ref. <cit.>, Ge et al. apply cos^M(H̃π/2) on a trial state, with H̃ the shifted and re-scaled Hamiltonian H and M an integer. Thus, if M is large enough, the amplitude of any arbitrary state with energy E is reduced by cos^M(Eπ/2), ensuring the suppression of all but the state at E=0. In a very simplistic approach, applying this type of filtering reduces to time evolution with constant time t=π/2 and measuring M ancilla states. A more efficient implementation based on log_2(M) ancilla qubits is presented in Ref. <cit.> and its implementation is discussed in Appendix <ref>. In contrast, the Rodeo algorithm <cit.> exploits the fact that the probability of finding a state with energy E, given by P_N=∏_n=1^N cos^2[(E_target-E)t_n/2], is suppressed by a factor 1/4^N if E≠ E_target for a Gaussian distribution of times t_n and large N, with N the number of measurements. In the proposed implementation, the Rodeo algorithm is based on N controlled time evolutions with the Hamiltonian governing the dynamics and normally distributed times. Neither of the two algorithms briefly presented above uses any information about the system one wants to describe.
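To make the suppression factor concrete, a few lines of Monte Carlo numerics (arbitrary units and made-up parameters, not taken from the referenced implementations) show how quickly off-target energies are damped; the typical (geometric-mean) suppression follows the 1/4^N scaling quoted above, while an on-target state (E=E_target) is untouched since every factor equals one.

```python
import numpy as np

rng = np.random.default_rng(1)

def rodeo_p(e_offset, n_meas, sigma_t=10.0, n_trials=4000):
    """P_N = prod_n cos^2[(E_target - E) t_n / 2] for Gaussian times t_n."""
    t = rng.normal(0.0, sigma_t, size=(n_trials, n_meas))
    return np.prod(np.cos(e_offset * t / 2.0) ** 2, axis=1)

for n in (2, 4, 8):
    p_off = rodeo_p(1.0, n)                       # a state 1 energy unit away from E_target
    typical = np.exp(np.mean(np.log(p_off)))      # geometric mean over the random times
    print(f"N={n}: typical suppression = {typical:.2e},  4^-N = {0.25 ** n:.2e}")
```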
It is well understood in imaginary time evolution approaches in classical calculations that in order to filter out contributions from undesired states, the evolution time is of the order of 1/Δ, where Δ is the gap between the target state and the first state with non-zero overlap with the trial state. A direct application on quantum computers of the imaginary time evolution is cumbersome, but the same general lessons can be applied using real-time evolution. In Ref. <cit.>, we introduced a projection algorithm based on real-time evolution and using a set of optimized times and small phases. The optimization is based on the approximate knowledge of the many-body spectrum and the general assumption regarding the overlap of the eigenstates with the trial state. Since this approach was much more efficient in state preparation than other projection-based algorithms <cit.> even for small gaps, in this work we use this procedure to construct the ground state of a many-body system.In the simplest implementation of the energy filtering algorithm of Ref. <cit.>, a single ancilla qubit is attached to the collection of qubits describing the physical system. The trial state |ψ_0⟩ is usually a Hartree-Fock Slater determinant, which can be trivially represented on a quantum computer, and the ancilla qubit a is set to state |0⟩. Then, one performs a series of time evolutions with times t_i followed by a measurement of the qubit a. The algorithm is successful if all measurements of the ancilla qubit produce state |0⟩. At step i one produces the state |ψ_i⟩ from state |ψ_i-1⟩ as follows|ψ_i⟩=exp[-i(H̃ t_i +δ_i) Y_a]|ψ_i-1⟩⊗|0⟩= cos(H̃ t_i +δ_i )|ψ_i-1⟩⊗|0⟩+ sin(H̃ t_i +δ_i )|ψ_i-1⟩⊗|1⟩.Knowledge of the spectrum, even approximative, is not required, but is useful. For example, the same algorithm can be used to project on quantum numbers. While projection on symmetries has been introduced before <cit.>, the algorithm based on Eq. (<ref>) could be more efficient as a large number of quantum numbers can be eliminated at each iteration <cit.>. Moreover, since the projection on targeted quantum numbers usually increases the gap that controls the algorithm, the projection can become more effective with smaller propagation times, akin to classical calculations.In the investigations presented in this paper, we did not perform symmetry projection. In turn, we have only projected the ground state, assumed at zero energy. To better understand the algorithm in Eq. (<ref>), and the optimization procedure, we start with the trial vector decomposed in an orthogonal basis |ψ_0⟩=∑_α C_α|α⟩,where |α⟩ are the (unknown) eigenstates of H̃ (H̃|α⟩=E_α|α⟩), and C_α (unknown) complex coefficients.Time evolving the system using Eq. (<ref>) produces the state|ψ_1⟩=exp[-i(H̃ t_1 +δ_1) Y_a]|ψ_0⟩⊗|0⟩= ∑_α C_α|α⟩⊗[cos(E_α t_1 +δ_1 )|0⟩+ sin(E_α t_1 +δ_1 )|1⟩].Since the ground state is shifted so that E_0=0, if the phases δ_i are kept small, measuring the ancilla qubit in state |0⟩ will enhance the amplitude of the ground state with respect to the other states. If the gap is known from classical calculations, one can always take the time t_1=π/(2Δ) and δ_1=0, so that after the first measurement, the state at the gap is exactly removed from physical state (without the ancilla qubit). In the case the spectrum is known, one can continue like in the case of projection on quantum numbers. 
However, for the more general case when the spectrum is not known, it was found that additional exponentially shorter times (t_i=t_i-1/2) are a reasonable choice to remove higher-lying excitations <cit.>, with a total evolution time approaching 2t_1. If the suppression of the unwanted states is deemed unsatisfactory, one can repeat the same procedure, while running the circuit in Fig. 12 of Ref. <cit.> will provide information about the gap and other states present in the trial state, if desired. The probability to produce the desired state is given by the probability to find that state in the trial state when all phases are zero. Using Eq. (<ref>), the amplitude of each state after N measurements of the ancilla (all giving state |0⟩) becomes C_α'=∏_i=1^N cos(E_α t_i+δ_i)C_α. To optimize the times and phases in order to speed up the calculation, one can start with the exponentially distributed times, assuming some reasonable overlap with the ground state, and random overlaps for the remaining states, and then maximize the final overlap for the final state. This optimization procedure can be classically performed, and general properties regarding the nuclear spectra can be used in the case when the spectrum is not a priori known. If non-zero phases are included in the optimization, they are generally small, and reduce the probability of success for the targeted state by ∏_i=0^N cos^2(δ_i).

§.§ Numerical Simulation of Quantum Circuits
Generally speaking, there are multiple ways to numerically simulate the quantum circuits generated from a quantum algorithm, such as the projection algorithm described. These include state-vector <cit.>, density-matrix <cit.>, tensor-network <cit.>, decision diagrams <cit.>, stabilizer <cit.>, and device-level simulation such as pulse-based simulation <cit.>. Some of their features are summarized in Table <ref>, with respect to the cost of system memory and computation when scaling qubits (Q) and gates (G), as well as the ability to leverage sparsity in the representation and to incorporate noise effects. As discussed, the circuit for the time evolution and projection algorithm can be quite deep, far beyond the capability and coherence time of current NISQ devices. For algorithm verification purposes without inspecting noise effects, in this work we mainly focus on state-vector simulation given its resilience to circuit depth. In the following, we briefly introduce the fundamentals of state-vector representation and the general ways of performing state-vector based numerical simulation on a classical machine. State-Vector Representation: A pure quantum state in a mathematical formulation corresponds to a vector in the Hilbert space. A quantum system in a superposition state thus can be represented as a linear combination of the (orthonormal) eigenstates according to some basis: |Ψ(t)⟩ = ∑_sC_s(t)|Φ_s⟩. The coefficient C_s corresponding to a particular eigenstate is a complex number, thus allowing interference effects among states. Evolution of the quantum states is time dependent, governed by the evolution operators defined in the quantum circuit. State transitions are triggered by evolution operators, each of which represents a gate. Through the gate sequence of the circuit, the quantum system evolves toward an objective state of interest, where measurement can be repeatedly applied for sampling the state.
The squares of the coefficients sum-up to 1.The simulation approach for pure state is to use an array with complex numbers to represent the coefficients C_s, known as the state-vector. The size of the array depends on the number of eigenstates in the system, which scales as 2^n with respect to the number of qubits n. To ensure numerical accuracy, double-precision is necessary. The system is evolved by applying the gates, each denoting a unitary matrix that describes how the coefficients of certain eigenstates need to be adjusted. Once a gate is applied, the system transits to the next state:|Ψ⟩→ U|Ψ⟩ The state-vector approach describes the fundamental evolution of a quantum system in an ideal scenario without noise impact, thus is widely used for quantum algorithm verification, which is the purpose of this work. State-Vector Simulation: State-vector based simulation is to simulate the operations of applying a series of unitary operators U_m-1⋯ U_1 U_0 to the state-vector |ψ⟩=∑_i=0^2^n-1α_i|i⟩ that describes the state of a quantum system. n is the number of qubits and m is the number of operations or gates. Here, a complex-valued double-precision floating-point vector α⃗ of size 2^n is used to store the coefficients α_i, which costs 16×2^n bytes of memory in a classical computer. U_i with i∈[0,m-1] is a 2×2 (for one-qubit gate) or 4×4 (for two-qubit gate) complex matrix. It has been shown that an arbitrary quantum circuit can be decomposed into 1-qubit and 2-qubit gates <cit.>. In fact, almost all real quantum devices execute 1-qubit or 2-qubit basis gates internally. For example, IBMQ adopts 1-qubit gate , , ,and 2-qubit gateas the basis gates; Rigetti uses 1-qubit gate ,and 2-qubit gatefor internal execution. Multi-qubit gates are decomposed into 1-qubit and 2-qubit gates.To apply a gate U, the operation is |ψ⟩→ U|ψ⟩. For 1-qubit U applying on qubit q in a quantum register, α⃗ is updated through the following expression where s_i=⌊ i/2^q⌋ 2^q+1+(i % 2^q) for every integer i∈[0, 2^n-1-1]:[ α_s_i; α_s_i+2^q ]→ U_2×2·[ α_s_i; α_s_i+2^q ]Regarding 2-qubit unitary gate U applying on qubit p and q (assuming p<q without losing generality), α⃗ is updatedthrough:[ α_s_i; α_s_i+2^p; α_s_i+2^q; α_s_i+2^p+2^q ]→ U_4×4·[ α_s_i; α_s_i+2^p; α_s_i+2^q; α_s_i+2^p+2^q ]where s_i=⌊⌊ i/2^p⌋ /2^q-p-1⌋2^q+1 + (⌊ i/2^p⌋% 2^q-p-1)2^p+1 +(i % 2^p) for every integer i∈[0,2^n-2-1]. To summarize, state-vector based quantum numerical simulation is to perform a sequence of 2×2 or 4×4 operations Eq. (<ref>) and Eq. (<ref>) over the large state-vector coefficient array of complex numbers. Note, although quantum gates without noise are all unitary operations, Eq. (<ref>) and Eq. (<ref>) can be more general and do not necessarily require U to be unitary, which offers the great opportunities of gate fusion, as will be discussed in Section <ref>. §.§ State preparation using Cirac's algorithmWe briefly review here the algorithm used for state preparation that follows the work of Ref. <cit.> and describe how it has been implemented. We consider an Hamiltonian with spectrum in an interval I∈(0,1), spectral gap Δ, an initial trial state |Φ⟩ with overlap χ with the ground state |Ψ_0⟩, and we assume we know the ground state energy E with some precision δ=O(Δ/log(1/χϵ)). Under this conditions, after defining the shifted Hamiltonian H^'=H-E1 the state defined below|Φ̃⟩=cos^M(H^')/||cos^M(H^') |||Φ⟩is ϵ- close to the exact ground state, i.e. |||Φ̃⟩-|Ψ_0⟩|| provided that M=O(1/Δ^2log^2(1/χϵ)). 
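Before turning to the decomposition of this filter into a linear combination of unitaries, its action is easy to check numerically. The following sketch uses a toy diagonal Hamiltonian with an arbitrary spectrum (shifted so the ground state sits at zero energy, and kept well inside the assumed interval), not the shell-model Hamiltonians used later; it simply applies cos^M(H') in the eigenbasis and shows the overlap with the ground state approaching one as M grows.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy Hamiltonian: ground state shifted to E = 0, excited states inside (0, 1)
energies = np.sort(np.concatenate(([0.0], rng.uniform(0.2, 0.9, size=30))))
gap = energies[1] - energies[0]

# random trial state with modest ground-state overlap chi
trial = rng.normal(size=energies.size)
trial /= np.linalg.norm(trial)
chi = abs(trial[0])

for M in (0, 10, 100, 1000):
    filtered = np.cos(energies) ** M * trial      # cos^M(H') acting in the eigenbasis
    filtered /= np.linalg.norm(filtered)
    print(f"M={M:5d}: |<ground|filtered>| = {abs(filtered[0]):.6f}  (chi={chi:.3f}, gap={gap:.3f})")
```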
It is possible to approximate the operator in Eq. <ref> as a linear combination of unitaries in the following way: cos^2m H^'=∑_k=-m_0^m_0 α_k e^-2iHk+O(χϵ), with α_k=\binom{2m}{m+k}/4^m and m_0=O(1/Δ log^3/2(1/χϵ)). The implementation of the above operator can be done as a linear combination of unitaries (LCU) using the prepare and select oracles of Ref. <cit.>, briefly reviewed below. Given the 2m_0+1 unitaries of Eq. <ref> we define an ancillary register of dimension n_A=⌈log_2(2m_0+1)⌉. We can implement a block encoding of the linear combination of unitaries using the algorithm originally developed in Ref. <cit.> and using the approach reported in Ref. <cit.>. In particular we first need a prepare operator P acting only on the ancillary register, defined by the following equation: P|0⟩ = 1/√(α) ∑_k=0^2m_0 (α_-2m_0+k)^1/2 |k⟩, where α=∑_k=-m_0^m_0|α_k|. Following Ref. <cit.> for the general case, the gate decomposition of the unitary P can be done using generic circuit synthesis as originally reported in Ref. <cit.>, whose exponential scaling should not be a limitation for the problems at hand (i.e., for 10^3 unitaries only 10 ancillary qubits will be needed). We recall here that an alternative method to compile the prepare oracle down to the gate set is reported in Lemma 3 of Ref. <cit.>. We then define the select oracle, acting on both the ancillary register and the target register, in the following way: S = ∑_k=0^2m_0+1|k⟩⟨k|⊗ U^k, where U=e^-2iH. Although the implementation in principle requires applying several multi-controlled unitaries, it has been shown in Ref. <cit.>, Lemma 3.5, that a number of singly controlled unitaries equal to the number of ancillary qubits is necessary and sufficient. Therefore the implementation of the LCU method for the problem at hand can be expressed as in Fig. 2 of Ref. <cit.>. We are now in a position to discuss the success probability of the procedure. Similarly to Ref. <cit.>, we can define the following quantity η^2 = ⟨Φ| O^2|Φ⟩, where we have defined the operator O=∑_k=-m_0^m_0 α_k U^k. The success probability of postselecting all 0s on the ancillary qubit is P_s = η^2/α^2.

§ NUMERICAL SIMULATOR DESIGN
Our state-vector numerical simulator for low-energy nuclear state-preparation is developed from our previous work SV-Sim <cit.> of the NWQSim package. Considering the deep circuits and the unique amplification-based algorithm relying on mid-circuit ancilla measurement, in this work we propose two techniques for efficient simulation. We first introduce the simulator framework and then focus on the two techniques.

§.§ Simulator Framework
Fig. <ref> illustrates the SV-Sim framework <cit.>. It offers both C++ and Python interfaces to support quantum programming environments such as Qiskit <cit.>, Q# <cit.>, QCOR <cit.>, as well as quantum intermediate representations (IR) such as QASM <cit.> and QIR <cit.>. The backends include CPUs, GPUs, Xeon-Phis, and multi-node heterogeneous HPC clusters. In terms of single-device backend implementation, the original SV-Sim harvests performance of heterogeneous accelerators such as GPUs through two major strategies: (i) homogeneous execution, where GPU-side polymorphism is realized through a device functional pointer approach <cit.>, so that all the gate operations on the GPU side can be merged into a single GPU kernel, avoiding kernel creation, context switching, and data movement overhead; and (ii) gate-specialized implementation, where the special structure of the gate matrices, including sparsity and identity, is exploited.
The new simulator designed in this work still benefits from homogeneous execution, but we could not use gate-specialization anymore due to gate fusion, as will be seen in Section <ref>. In terms of the cluster backend implementation, SV-Sim mainly performs the communication through the shared-memory (SHMEM) model and interfaces, such as NVSHMEM and OpenSHMEM. MPI is also used when necessary. Fig. <ref> shows the architecture for the cluster implementation. Each SHMEM or MPI process owns a simulator object. Therefore, rather than using a single simulator instance to manage multiple devices, each simulator instance operates on a single CPU or GPU. The state-vector coefficients are evenly distributed among the devices. SHMEM/MPI are used for inter-device communication. In terms of the frontend design, the state preparation algorithm described in Section <ref> is implemented in Qiskit. Although SV-Sim directly supports Qiskit through the Python API, given the extremely deep circuit, parsing the Qiskit gate objects and performing gate conversion in Python can lead to considerable overhead. Consequently, we use the QASM frontend. Table <ref> lists the gates defined by QASM <cit.>. Given such gate support, and the gate fusion technique to be presented, when generating QASM no transpilation optimization is needed in Qiskit, which also improves performance.

§.§ Gate Fusion
The first challenge for the numerical simulation, as mentioned, is the deep circuit. We propose gate fusion to merge gates and shrink circuit depth. Since we perform numerical simulations, we are not limited by the basis gates of a real device. We thus propose the following fusion operations: 1-qubit gates fuse 1-qubit gates: all consecutive 1-qubit gates applied on the same qubit can be merged into a single unitary gate (see Fig. <ref>). We implement a general 1-qubit gate in the simulator to perform the merged unitary gate. 2-qubit gates fuse 2-qubit gates: similarly, we can merge consecutive 2-qubit gates over the same qubit pair into a single 2-qubit unitary gate. This includes the case of switching the control and target qubits of gates, as shown in Fig. <ref>. We realize a general 2-qubit gate in the simulator to run this gate. 2-qubit gates fuse 1-qubit gates: for state-vector simulation, it is also feasible for a 1-qubit gate to be concatenated with a 1-qubit identity gate, forming a 2-qubit gate that can then be merged with another 2-qubit gate on the same qubit pair. This amounts to the 2-qubit gate "absorbing" a 1-qubit gate. Depending on the order of the 1- and 2-qubit gates, a forward or a backward fusion can be established, see Fig. <ref>. We use the general 2-qubit gate to execute the fused result. To effectively explore all fusion opportunities, we propose the following strategy to compose these fusion operations, shown in Fig. <ref>. We start by first merging consecutive 1-qubit gates applied to the same qubit. Then, we conduct forward and backward absorption for the 2-qubit gates to fuse the surrounding 1-qubit gates. After that, we add gates to ensure all 2-qubit gates have the same partial order, namely that the first qubit index is smaller than the second. This will facilitate 2-qubit/2-qubit fusion. We finally run 2-qubit fusion to obtain the final circuit for simulation.
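As an illustration of the fusion rules above, the following standalone sketch (operating on a simple gate list, not the actual SV-Sim implementation) merges consecutive gates that act on the same qubit or on the same qubit pair by multiplying their matrices. Absorption of a 1-qubit gate into a neighbouring 2-qubit gate proceeds in the same way after padding the 1-qubit matrix with an identity via a Kronecker product.

```python
import numpy as np

def fuse_adjacent(gates):
    """Merge consecutive gates acting on the same qubit (or the same qubit pair).
    Each gate is (qubit_tuple, unitary_matrix); a later gate multiplies from the left."""
    fused = []
    for qubits, mat in gates:
        if fused and fused[-1][0] == qubits:
            prev_qubits, prev_mat = fused.pop()
            fused.append((prev_qubits, mat @ prev_mat))   # compose: new gate after old gate
        else:
            fused.append((qubits, mat.copy()))
    return fused

# small example: three 1-qubit gates on qubit 0 followed by two identical 2-qubit gates on (0, 1)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
T = np.diag([1, np.exp(1j * np.pi / 4)])
CX = np.eye(4, dtype=complex)[[0, 1, 3, 2]]

circuit = [((0,), H), ((0,), T), ((0,), H), ((0, 1), CX), ((0, 1), CX)]
fused = fuse_adjacent(circuit)
print(f"{len(circuit)} gates fused into {len(fused)}")
```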
We demonstrate the efficiency of gate fusion in Section <ref>.

§.§ Mid-Circuit Measurement Assertion
As discussed in Section <ref>, the projection algorithm works by asserting that the ancilla qubit measures |0⟩, which will progressively enhance the amplitude of the ground state with respect to alternative states. When all ancilla measurements yield |0⟩, the ground state is well prepared and can be sampled. Listing <ref> shows the structure of the QASM circuit generated from the projection algorithm. q[0] is the ancilla qubit, which is measured (Line 11) and recycled (Line 13) repeatedly in the middle of the circuit. After the execution, from the m measurement results of r, the algorithm picks those (k of them) for which c is all-zero; these are the valid samples of the ground state. Looking at Listing <ref>, although the projection algorithm effectively reduces the ancilla usage to 1 qubit, which can significantly reduce simulation cost, it also introduces mid-circuit measurement and the potential dependency on the ancilla. For state-vector simulation of general circuits that only incur measurement at the end, a well-known and key optimization is that one can simulate the circuit up to the measurement, and sample the final state-vector for the m shots together at once. Here, because of the mid-circuit measurement and dependency, we can only run one shot at a time, drastically degrading the simulation efficiency. If the probability of c being all zero is relatively low (see Table <ref>), we either cannot obtain the state of interest (k=0), or the sampling efficiency is low (k≪ m). Therefore, in addition to gate fusion, we further modify SV-Sim to perform an algorithm-specific implementation. Recall that for each ancilla measurement of q[0], only the outcome |0⟩ is of interest. Therefore, during simulation, after obtaining the probability of measuring q[0], we can verify whether q[0] still has any chance to be |0⟩, i.e., P(q[0]=|0⟩) ≠ 0. If so, we simply assert that the measurement result of q[0] is state |0⟩, and track its probability P(q[0]=|0⟩). Otherwise, the simulation terminates and returns a failure. In this way, we can ensure c is always all-zero at the end. More importantly, although we still need mid-circuit measurement, the dependency on q[0] is resolved. We can simply run the simulation to the end, and sample m times at once, all of which are of interest (i.e., k=m), since c is guaranteed to be all-zero. Through this design, we can significantly improve simulation efficiency as well as sampling efficiency for the projection filtering algorithm.

§ EVALUATION
§.§ Environment and Settings
We use the U.S. NERSC Perlmutter HPC cluster and the OLCF Summit and Crusher HPC clusters for the evaluation, as listed in Table <ref>. Perlmutter: We use the Perlmutter system as the primary platform for the evaluation. Perlmutter is an HPE Cray EX pre-exascale HPC system based on the HPE Cray Shasta platform. This heterogeneous cluster comprises 1536 GPU nodes (each containing 4 NVIDIA A100 40GB GPUs), 256 advanced GPU nodes (4 A100 80GB GPUs), and 3072 CPU nodes (AMD EPYC 7763) linked by the HPE Slingshot 11 high-speed interconnect. Each node is equipped with an AMD Milan EPYC 7763 64-core CPU. Summit: We also use the OLCF Summit cluster. This IBM AC922 system contains more than 27,000 NVIDIA Volta GPUs with more than 9,000 IBM POWER9 CPUs. It has 4,608 nodes. Each node features two IBM POWER9 CPUs with 512 GB DDR4 main memory, and 6 Volta V100 GPUs with 16GB HBM2 memory per GPU.
The intra-node interconnect is NVLink and the inter-node network is EDR 100GB InfiniBand. Crusher: Crusher is an early-access testbed system at ORNL for exascale computing. It shares identical hardware and similar software with the Frontier HPC system. Crusher has 2 cabinets, with 128 compute nodes and 64 compute nodes, respectively. Each node consists of a 64-core AMD EPYC 7A53 CPU, 512 GB of DDR4 main memory, and 4 AMD MI250X GPUs. The CPUs and GPUs are connected through Infinity Fabric inside a node. Different nodes are connected through 4 HPE Slingshot 25 GB/s NICs. Regarding the problem settings, for testing purposes in this investigation, we used the phenomenological interacting shell model. In this many-body framework one assumes that only a small number of valence nucleons are interacting in a restricted model space via a phenomenological interaction. The rest of the nucleons are assumed to constitute an inert core. To simplify the problem even more, we present in this paper test cases where only neutrons are active. Thus, we have considered here two neutrons in the 0p shell (six active single-particle states) using the Cohen-Kurath interaction <cit.>, four neutrons in the 1s0d model space (12 active single-particle states) using the “universal sd" Wildenthal interaction <cit.>, and four neutrons in the 1p0f shell (20 active states) using the modified KB3 interaction <cit.>. Taking into account the ^4He core for the 0p shell, the ^16O core for the 1s0d shell, and the ^40Ca core for the 1p0f shell, ^6He, ^20O, and ^44Ca have been considered. For the 0p shell case, we also used a deformed Hartree-Fock solution computed with the code of Ref. <cit.>. With that, we define 15 problem instances for evaluating the simulation performance, covering a wide range of problem sizes, from 7 qubits (including 1 ancilla) and 11,939 gates to 21 qubits (1 ancilla) and 115,079,266 gates. Their features are listed in Table <ref>. Their QASM file sizes range from 272 KB to 2.2 GB, confirming that the circuits for these low-energy nuclear state preparation applications are much deeper than general circuits. Our evaluation covers the following aspects: (i) Gate Fusion: We first evaluate the effectiveness of gate fusion, focusing on gate count reduction and simulation time reduction. (ii) Mid-Circuit Measurement Assertion: We show the savings in the number of measurements, and the improvements in sampling efficiency and simulation performance. (iii) Performance across Platforms: We show the simulation time on the CPUs and GPUs of the three HPC clusters. For the CPU performance, we show single-core single-thread results, and multi-core results with 4 threads using OpenMP. (iv) Ground Energy and Sampling Efficiency: We show the obtained ground state energy of the 15 problem instances with respect to different numbers of Trotter steps.
Regarding these results, we have the following observations: (i) Circuits for different nuclei have specific patterns which may largely impact the benefit from gate reduction. Trotter steps, on the other hand, do not seem to impact the benefit from gate reduction. (ii) For this projection filtering algorithm, the percentage of two-qubit gates is quite high (on average 60.53%, see Table <ref>). Comparatively, there are many more opportunities for 2-qubit gates fusing 1-qubit gates and 2-qubit gates fusing 2-qubit gates than for 1-qubit gates fusing 1-qubit gates. The latter (2-qubit gates fusing 2-qubit gates) is particularly distinctive compared with general quantum circuits. Overall, these results demonstrate that our gate fusion technique can effectively reduce the gate count and improve simulation performance for the low-energy nuclear ground state projection algorithm. (ii) Mid-Circuit Measurement Assertion: We use P9 as a non-trivial exemplar case for this evaluation. As shown in Listing <ref>, the projection circuit includes a series of mid-circuit ancilla measurements to amplify the probability of the ground state, provided the measurement result is 0. However, it is also possible that the measurement gives 1, which implies that the present shot fails to project to the correct state, seen as a rejection. As shown in Table <ref>, the number of filtering steps for P9 is 8, implying 8 mid-circuit measurements, labeled M1 to M8. In Table <ref>, we repeat three simulations (Case-1 to 3) of the P9 circuit on a Perlmutter A100 GPU using 1024 shots without applying mid-circuit measurement assertion. The values under M1 to M8 show the number of rejected shots during each mid-circuit measurement. Within the three trials, for P9, on average 36 of the 1024 shots can project to the final state of interest, implying a sampling efficiency of 3.52% for the projection algorithm. Additionally, because of repeated end-to-end runs due to mid-circuit measurement, the simulation time even on an A100 GPU is still quite significant, on average 392 minutes. By applying mid-circuit measurement assertion (MMA), although the probability of measuring all zeros remains low, we can always assert that the ancilla measures 0 and track the corresponding probability. The last row of Table <ref> shows the ground truth probability of measuring 0 during each mid-circuit measurement. Overall, the probability of obtaining all 0s for M1 to M8 is 3.77%. With mid-circuit measurement assertion, we can enforce this outcome, track its probability, and ensure the ground state is well prepared. We can then sample the state with 1024 shots at once in parallel, providing 1024 effective samples. Comparatively, to obtain the same number of effective samples, the original method would need 1024/0.0377≈27,162 shots on average and would have to be executed sequentially due to mid-circuit measurement. Overall, the simulation time with MMA is 52.621s, 447× faster than the baseline. When demanding the same number of effective samples, e.g., 1024, the speedup is about 12,719×. Although the benefit is circuit dependent, it confirms the usefulness and effectiveness of mid-circuit measurement assertion. For larger problems or problems with even lower success rates, the gains can be even larger. (iii) Performance across Platforms: We perform the simulations on the three CPUs and three GPUs of the Perlmutter, Summit and Crusher systems listed in Table <ref>. Fig. <ref> shows the simulation time of the P1, P5 and P9 circuits on the three GPUs (A100, V100 and MI250X) and CPUs (EPYC-7763, Power-9 and EPYC-7A53 with 1 and 4 threads).
We put the raw number of simulation latency (in ms) above the bars. Note, P9 runs too long on the CPUs so we omit them. As can be seen, for small cases such as 7-qubit P1, GPUs do not exhibit any performance advantages. For larger cases, such as 13-qubit P5, the performance of GPUs is well above CPUs. For even larger problems such as the 21-qubit P9, A100 is better than MI250X, and then V100, implying that A100 requires sufficient workload to deliver superior simulation performance.(iv) Ground Energy and Sampling Efficiency: Table <ref> lists the ground state energy obtained through our simulation, the reference ground energy, and the theoretical successful rate of the projection filtering algorithm. We also list the GPU kernel execution time for the simulation. All the results here are obtained on an A100 GPU of Perlmutter.As can be seen, the high success rates indicate that the projection filtering algorithm is quite effective in converging to the ground state energy, even for non-trivial problems. Additionally, with only a few filtering steps, i.e., 4 for ^6He and 8 for ^20O and ^44Ca, we can already achieve an accuracy with less than 0.01 MeV, 0.01 MeV, and 0.1 MeV, respectively. It also confirms that the simulator can generate the correct energy number with proper trotter steps. We also compare the SV-Sim energy results with Qiskit simulation results, they are aligned with each other to 10^-14, further confirming the numerical accuracy of our simulation. Meanwhile, it is interesting to observe that for ^44Ca, increasing the trotter steps from 528 to 1051 only brings very incremental gain of 0.0004 MeV, implying marginal impact. We set the investigation of impact from trotter steps as a future work.§ CONCLUSION Numerical simulation represents an important method for direct verification of quantum circuit correctness. Here we have demonstrated verification of deep quantum circuits for nuclear physics applications using a high-performance numerical simulator that incorporates several unique methods. The first is the use of gate fusion to reduce the effective simulation depth and, therefore, reduce the amount of computation required to generate the quantum state. We have described and demonstrated the use of 1-qubit and 2-qubit gate fusion to reduce the simulation depth of circuits used to prepare nuclear state ansatz. The second method manages the simulated state in the presence of mid-circuit measurements. We simulated the role of mid-circuit measurement by following the complete simulated state and using post-selection of the designated ancilla to calculate the projected state as well as the probability of the outcome. Together these methods have permitted the verification of quantum circuits derived from a state preparation algorithm using an energy-filtering algorithm. We demonstrated simulation of several examples of deep circuits acting on 21 qubits with more than 115,000,000 gates. We have also tested these examples on a variety of high-performance computing systems that emphasize the benefits for GPU-accelerated processing. For future work, we would like to investigate the simulation of larger nucleus problems, such as ^26Mg, ^56Fe and ^58Ni, etc., and the impact of accuracy from the filtering and the trotter steps. In conclusion, we have extended the capabilities for numerical simulation of deep quantum circuits for nuclear physics using high-performance computing systems. These methods provide powerful tools for verification of quantum circuit design. 
§ ACKNOWLEDGEMENT
This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Quantum Science Center (QSC). TSH, AL, and AB acknowledge QSC support for advances in numerical simulation methods and quantum circuit synthesis. The work of IS was carried out under the auspices of the National Nuclear Security Administration of the U.S. Department of Energy at Los Alamos National Laboratory under Contract No. 89233218CNA000001. IS gratefully acknowledges partial support by the Advanced Simulation and Computing (ASC) Program. This research used resources of the Oak Ridge Leadership Computing Facility (OLCF), which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. This research used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231.

§ IMPLEMENTATION OF THE GE ET AL. ALGORITHM
We briefly review here the algorithm used for state preparation that follows the work of Ref. <cit.> and describe how it has been implemented. We consider a Hamiltonian with spectrum in an interval I∈(0,1), spectral gap Δ, an initial trial state |Φ⟩ with overlap χ with the ground state |Ψ_0⟩, and we assume we know the ground state energy E. After defining the shifted Hamiltonian H^'=H-E 1, the state defined below |Φ̃⟩=cos^M(H^')/||cos^M(H^') |||Φ⟩ becomes close to the exact ground state, if the power M is chosen as in Ref. <cit.>. It is possible to approximate the operator in Eq. (<ref>) as a linear combination of unitaries in the following way: cos^2m H^'=∑_k=-m_0^m_0 α_k e^-2iHk+R, with α_k=\binom{2m}{m+k}/4^m and m_0 properly chosen so that the remainder R vanishes; a prescription is provided in Ref. <cit.>. The implementation of the above operator can be done as a linear combination of unitaries (LCU) using the prepare and select oracles of Ref. <cit.>, briefly reviewed below. Given the 2m_0+1 unitaries of Eq. <ref> we define an ancillary register of dimension n_A=⌈log_2(2m_0+1)⌉. We can implement a block encoding of the linear combination of unitaries using the algorithm originally developed in Ref. <cit.> and using the approach reported in Ref. <cit.>. In particular we first need a prepare operator P acting only on the ancillary register, defined by the following equation: P|0⟩ = 1/√(α) ∑_k=0^2m_0 (α_-2m_0+k)^1/2 |k⟩, where α=∑_k=-m_0^m_0|α_k|. Following Ref. <cit.> for the general case, the gate decomposition of the unitary P can be done using generic circuit synthesis as originally reported in Ref. <cit.>, whose exponential scaling should not be a limitation for the problems at hand (i.e., for 10^3 unitaries only 10 ancillary qubits will be needed). We then define the select oracle, acting on both the ancillary register and the target register, in the following way: S = ∑_k=0^2m_0+1|k⟩⟨k|⊗ U^k, where U=e^-2iH. Although the implementation in principle requires applying several multi-controlled unitaries, it has been shown in Ref. <cit.>, Lemma 3.5, that a number of singly controlled unitaries equal to the number of ancillary qubits is necessary and sufficient. Therefore the implementation of the LCU method for the problem at hand can be expressed as in Fig. 2 of Ref. <cit.>. We are now in a position to discuss the success probability of the procedure. Similarly to Ref. <cit.>, we can define the following quantity η^2 = ⟨Φ| O^2|Φ⟩, where we have defined the operator O=∑_k=-m_0^m_0 α_k U^k. The success probability of postselecting all 0s on the ancillary qubit is P_s = η^2/α^2.
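For completeness, the quantities entering this success probability can be evaluated classically in a few lines. The sketch below is a toy example only: the Hamiltonian is a small arbitrary diagonal spectrum, and the truncation parameters m and m_0 are chosen by hand rather than from the bounds quoted above.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(3)

m, m0 = 40, 20                                   # truncation chosen by hand for illustration
ks = np.arange(-m0, m0 + 1)
alpha_k = np.array([comb(2 * m, m + k) / 4.0 ** m for k in ks])
alpha = np.sum(np.abs(alpha_k))

# toy shifted Hamiltonian (ground state at 0) and a random trial state
energies = np.concatenate(([0.0], rng.uniform(0.3, 1.0, size=15)))
trial = rng.normal(size=energies.size)
trial /= np.linalg.norm(trial)

# O = sum_k alpha_k exp(-2 i H k) is diagonal in the eigenbasis; eta^2 = <Phi|O^2|Phi>
O_diag = alpha_k @ np.exp(-2j * np.outer(ks, energies))
eta2 = float(np.sum(np.abs(O_diag) ** 2 * trial ** 2))
print(f"alpha = {alpha:.4f}, eta^2 = {eta2:.4f}, success probability P_s = {eta2 / alpha ** 2:.4f}")
```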
{ "authors": [ "Ang Li", "Alessandro Baroni", "Ionel Stetcu", "Travis S. Humble" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20231026191058", "title": "Deep Quantum Circuit Simulations of Low-Energy Nuclear States" }
For 1-dimensional applications, Budé's method <cit.> has been shown to be capable of accurately solving the all-FLR (Finite Larmor Radius) integro-differential wave equation as a high-order differential equation, allowing all physically relevant (fast, slow and Bernstein) modes to be represented upon making a polynomial fit that is accurate in the relevant part of k⃗ space. The adopted fit is superior to the Taylor series expansion traditionally adopted to truncate the series of finite Larmor radius corrections, while the differential rather than integro-differential approach allows for a significant gain in required computational time when solving the wave equation. The method was originally proposed and successfully tested in 1D for radio frequency (RF) waves and in the absence of the poloidal field <cit.>. In the present paper, the derivation of the extension of that procedure to 2D and for finite poloidal field - semi-analytically yielding the coefficients of the relevant high-order partial differential equation - is discussed in preparation of future numerical application.

§ INTRODUCTORY NOTE
In the past decades, many authors have devoted time to finding speedy but accurate ways to solve the wave equation governing wave propagation and damping in magnetic confinement machines. A number of bottlenecks were encountered and - at least partially - solved. * The first is that finite temperature (and hence finite Larmor radius) effects are crucial to understand wave damping and to determine how the wave polarisation - itself a key quantity in determining the absorption efficiency - changes under the influence of the presence of the plasma. Traditionally, a Taylor series expansion in k_⊥ρ_L (where k_⊥ is the perpendicular wave number and ρ_L is the Larmor radius) truncated at second order terms is used <cit.>, <cit.>, <cit.>, <cit.>. Although perfectly suitable for describing the fast magnetosonic wave in the ICRH (Ion Cyclotron Resonance Heating) domain for plasmas at moderate temperature, it is insufficient to correctly describe the fate of short wavelength branches (in the RF domain typically the ion Bernstein wave excited at the confluence near the ion-ion hybrid layer and hence routinely present at or close to the main damping region). Moreover, since RF heating typically creates high-energy tails and since fusion-born populations typically are highly energetic, the small-FLR assumption's validity needs to be checked a posteriori even for the fast wave in many relevant conditions. Luckily, the typically exploited truncation allows one to at least get a first idea of the actual wave absorption strength for most experimentally exploited regimes. But a more general procedure is nevertheless highly desirable. To date, the AORSA code <cit.> is likely the most general ICRH wave solver available that does not suffer from the small-FLR assumption routinely made. Its strength, and at the same time its weakness, is that it solves the all-FLR integro-differential wave equation by writing the electric field in its Fourier form but replaces the continuous integrals over k⃗ space by discrete sums.
This makes the code very general but requires massive computer time and memory to invert the full matrix resulting from retaining the couplings between all modes in the k⃗ spectrum. Cutting down on computational needs is a useful task while keeping the description as general as possible is a worthwhile exercise. * In inhomogeneous plasmas, the wave vector is not a constant and hence dispersion equation solving is insufficient to get a full grip on the wave dynamics. Because of the poloidal as well as toroidal periodicity of magnetic confinement devices, Fourier expansions are often used to describe the dynamics in the periodic directions and an associated set of variables (ρ,θ,φ) - where ρ is the minor radius, θ is the poloidal and φ the toroidal angle - is routinely adopted. This introduces an artificial singularity in the associated coordinate system: although the magnetic axis in a tokamak (ρ=0) physically is a perfectly regular point, the mathematical singularity introduced requires one to carefully treat the region close to the axis, in particular when kinetic effects are important so that the wave flux is not only carried electromagnetically but also by particles in coherent motion with the waves. For an example of the evaluation of the kinetic flux - be it in 1-D - see e.g. <cit.>. Physically, neither the Poynting nor the kinetic flux has to be zero at the magnetic axis; the total flux vector should simply be continuous when crossing this point. For finite element representations where the weak formalism is exploited - and with the exception of finite element formulations relying on base functions for which the continuity across individual finite element borders is automatically guaranteed by construction (see the brief section on fluxes) - this requires that the kinetic flux terms are properly known and accounted for in the surface terms of the variational formalism, to ensure finite fluxes across the mathematical singularity are allowed and continuity is assured. In view of the very large difference in magnitude between the parallel and perpendicular components of the electric field - itself the consequence of the huge difference in magnitude of the parallel and perpendicular conductivity - the wave equation is often expressed in terms of (E_⊥,1, E_⊥,2, E_//) to avoid the (electron) damping involving E_// being inaccurately estimated. That will also be done here. In recent years, gradually more powerful - commercial as well as freeware - partial differential equation solvers became available; see e.g. <cit.>. Grid refinement techniques, higher order polynomial approximations and optimised solvers allow one to achieve higher accuracy and/or faster integration. To be able to exploit such tools the reigning equation typically needs to be of (partial) differential form. Finite Larmor radius corrections being important but typically requiring an integro-differential equation, Budé proposed a technique to approximate an integro-differential equation by a higher order differential equation <cit.> and found the solutions are hardly distinguishable from the solutions of the actual integro-differential equation. He illustrated its use solving the 1D wave equation corresponding to the "full hot" dielectric tensor of Swanson <cit.>.
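The common thread in these procedures is the replacement of powers of k by derivatives once a polynomial (Taylor or fitted) approximation of the dispersion relation is available at a given position. As a minimal numerical check of this correspondence (hypothetical coefficients c_j, a periodic finite-difference first derivative, and an arbitrary integer mode number, all purely illustrative), applying ∑_j c_j (-i d/dx)^j to a plane wave exp(ikx) indeed reproduces multiplication by ∑_j c_j k^j up to the discretisation error of the stencil:

```python
import numpy as np

# Correspondence k^j <-> (-i d/dx)^j, checked on a plane wave with periodic finite differences
N, L = 512, 2 * np.pi
x = np.linspace(0.0, L, N, endpoint=False)
dx = x[1] - x[0]

# periodic centered first-derivative matrix: (E[i+1] - E[i-1]) / (2 dx)
D1 = (np.roll(np.eye(N), 1, axis=1) - np.roll(np.eye(N), -1, axis=1)) / (2 * dx)

c = [0.3, 0.7, -1.2, 0.0, 0.05]             # hypothetical local fit D(x0, k) ~ sum_j c_j k^j
k = 4.0                                      # integer mode so the plane wave is periodic on [0, L)
E = np.exp(1j * k * x)

op = sum(cj * np.linalg.matrix_power(-1j * D1, j) for j, cj in enumerate(c))
lhs = op @ E
rhs = sum(cj * k ** j for j, cj in enumerate(c)) * E
# small residual comes from the O(dx^2) finite-difference stencil
print("relative error =", np.max(np.abs(lhs - rhs)) / np.max(np.abs(rhs)))
```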
Budé's idea was to step away from a Taylor series expansion (which typically breaks down fairly quickly when the adopted "small parameter" k_⊥ρ_L fails to be small) and adopt a modest order polynomial fit in k⃗-space. From a mathematical point of view, the steps after the fitting are identical to the usual steps: j-th powers of ik_α in k⃗-space become j-th order partial differential operators. The essence of Budé's procedure goes back to the works of Fuchs et al. <cit.> who - when looking into mode conversion physics - proposed to avoid deriving and solving the full wave equation and just concentrated on the main wave interaction physics: if the evaluation of the dispersion equation D(x,k)=0 shows that mode conversion occurs near a reference wave number k̃, then a Taylor series expansion in k⃗-space yields an approximate dispersion equation D(x,k) ≈ D(x,k̃) + dD/dk(k-k̃)+1/2d^2D/dk^2(k-k̃)^2=0 near k=k̃. This expression can immediately be transformed into a second order differential equation by making the Fourier inversion which consists in replacing k by the differential operator -id/dx. Although that equation does not fully rigorously describe the whole physics of the 2 interacting modes, it allows one to crudely compute how the waves communicate with each other. Budé started from this idea to solve the set of 3 coupled wave equations describing the interplay between the 3 electric field components, locally exploiting the homogeneous hot plasma dielectric tensor for Maxwellian distributions <cit.> and adopting k̃_⊥=0 while retaining - as is classically done - only up to second order finite Larmor radius terms. Solving the obtained differential system relying on the finite difference method, he managed to elegantly describe the excitation of short wavelength Bernstein wave branches. He realised he could extend the Taylor series expansion in terms of FLR corrections to higher order derivatives without doing much more algebra by numerically fitting the tensor locally in k⃗-space to a higher order polynomial. At the price of augmented efforts needed to obtain the required coefficients of the polynomial fits and of finding the finite difference expressions adopted in the solving scheme, this permitted him to subsequently solve the wave equation relevant for ion heating at an arbitrary cyclotron harmonic. Solving differential or integro-differential equations relying on the finite element or finite difference technique transforms the initial equation into a linear system yielding the approximate solution. The importance of Budé's work is that he managed to solve the integro-differential equation for all physically relevant modes but without needing a full system matrix. The subdomain in k⃗-space where dispersion equation roots appear can be assessed by all-FLR dispersion equation solvers (see e.g. <cit.>). Beyond this domain the Fourier amplitudes are expected to be negligibly small or zero and do not contribute to Fourier integrals. The Fourier amplitudes of physically acceptable modes only being nonzero in a finite domain justifies adopting a fit of the relevant functions appearing in the dielectric response, provided this fit is sufficiently accurate in the domain of interest.
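As a one-dimensional illustration of why such a fit is preferable, the sketch below uses a toy scalar function with the e^{-λ}I_N(λ) structure characteristic of hot-plasma dielectric tensor elements (here the second-harmonic factor e^{-λ}I_2(λ), with λ=(k_⊥ρ_L)^2/2), not the full tensor, and a window size chosen arbitrarily for illustration. A low-order least-squares fit over the window of physically relevant λ stays accurate where the Taylor polynomial of the same degree about λ=0 has long since broken down.

```python
import numpy as np
from scipy.special import ive                 # ive(n, x) = exp(-x) * I_n(x)
from math import factorial

# FLR factor at the second harmonic: f(lam) = exp(-lam) I_2(lam), lam = (k_perp rho_L)^2 / 2
lam = np.linspace(0.0, 3.0, 301)              # window covered by the physically relevant roots (illustrative)
f = ive(2, lam)

# Taylor coefficients of exp(-lam) and I_2(lam) about lam = 0, multiplied and truncated at degree 6
deg = 6
exp_c = [(-1) ** j / factorial(j) for j in range(deg + 1)]
i2_c = [0.0] * (deg + 1)
for mm in range(deg // 2):                    # I_2(x) = sum_m (x/2)^(2m+2) / (m! (m+2)!)
    i2_c[2 * mm + 2] = 1.0 / (factorial(mm) * factorial(mm + 2) * 2 ** (2 * mm + 2))
taylor_c = np.polynomial.polynomial.polymul(exp_c, i2_c)[: deg + 1]
taylor = np.polynomial.polynomial.polyval(lam, taylor_c)

# least-squares polynomial fit of the same degree, accurate over the whole window
fit_c = np.polynomial.polynomial.polyfit(lam, f, deg)
fit = np.polynomial.polynomial.polyval(lam, fit_c)

print("max |Taylor error| over window =", np.max(np.abs(taylor - f)))
print("max |fit error|    over window =", np.max(np.abs(fit - f)))
```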
Since the computer time and memory required to invert a matrix relying on sophisticated linear system solvers depends sensitively on the amount of nonzero coefficients of the system matrix, this technique potentially allows significantly pushing down the computational requirements compared to solving the actual integro-differential equation. On top of that, the fit being superior to the Taylor series expansion, Budé's method allows to extend the set of relevant scenarios that can be modelled. His idea was exploited to formulate an extension of the TOMCAT code <cit.> to RF heating scenario modelling requiring an all-FLR integro-differential approach; Maxwellian and bi-Maxwellian distributions can be handled semi-analytically, and - at the price of requiring more computer time - non-Maxwellian distributions can be treated using a purely numerical approach. The difference between the original (differential) TOMCAT approach and the usual FLR expansion is that it does not make a Taylor series expansion of the dielectric tensor itself but of the Kennel-Engelmann operator <cit.> acting both on the electric field and on the test function in a variational formulation of the wave equation. Just like is usually done, the original TOMCAT's description relies on a truncation of the dielectric response in terms of FLR corrections at second order but since the operator appears as a product in that response, the equation has up to 4th order Larmor radius corrections. The code's main selling point - and that of its integro-differential as well as Budé upgrades - is that it guarantees a positive definite power balance for any of the wave modes when the distribution function is Maxwellian, as is expected from first principles but cannot locally be guaranteed when exploiting the usual expansion for all modes; a fair amount of attention was devoted to studying and settling this issue in the 80's <cit.>, the discussion originating from the observation that negative absorption could erroneously occur when a significant amount of the wave energy is carried by short wavelength branches, more specifically by branches that carry their energy via particles in coherent motion with the wave i.e. via the kinetic flux, rather than electromagnetically i.e. via the Poynting flux. Supplementary advantages are that the description allows to go beyond second harmonic heating (N=2), and that the wave absorption operator is fully compatible with the quasilinear diffusion operator in the Fokker-Planck equation so a fully self-consistent wave+Fokker-Planck description (in which the same wave-particle interaction description is used) is possible. It is tempting to check whether Budé's appealing idea allows to solve the wave equation in 2 or 3 dimensions, generalising the Budé-upgraded TOMCAT philosophy to more than 1 dimension. The present paper aims at starting that exercise, providing the basic algebra required and discussing specific points of attention, highlighting both the method's assets and potential limitations/drawbacks. The symmetry of the expression for the dielectric response as exploited in the TOMCAT solver is a simplified "quasi-homogeneous" version of the philosophy originally due to Kaufman <cit.>, who demonstrated this symmetry is present on a much deeper level adopting an action-angle approach when assuming the tokamak is axisymmetric. 
As is also the case in the Kennel-Engelman description, the guiding center rather than the particle position takes central stage in Kaufman's work, removing the need to know the distribution function at the particle position and significantly simplifying the interpretation and algebra. The orbit is fully determined by 3 constants of the motion along this orbit while the periodic aspects of the motion are described by 3 associated angles: one tracking the Larmor gyration around the guiding centre, one describing the poloidal bounce motion (the motion ensuring the magnetic moment is conserved when the particle is moving into a region of higher or lower magnetic field strength forcing the v_⊥ to change, and - the energy being conserved as well so v_//=0 can occur - introducing trapping) and one describing the toroidal precession of a toroidal reference point in a toroidal cut. Because of the periodic nature of the particle motion, Fourier analysis is a suitable way of describing the motion. The orbits being closed poloidally, the bounce spectrum consists of a set of discrete modes (labeled as bounce modes) rather than of a continuous spectrum. This has important consequences: constructive and destructive interference isolates the dominant contributions of individual bounce modes to specific poloidal positions when performing the (bounce) integrals on a poloidally closed orbit and allows to reconcile 2 seemingly opposite notions namely (i) the fact that individual bounce modes have global rather than local resonances while (ii) the usual wave-particle interaction relies on the local interaction between wave and particle to explain a net acceleration or deceleration when satisfying the resonance condition ω=NΩ+k_//v_// (or more generally ω=NΩ+k⃗.v⃗_D - where v⃗_D is the guiding centre velocity - when accounting for the deviations of guiding centres from magnetic surfaces). The present paper takes the TOMCAT plasma model "as is" and concentrates on preparing exploitation of Budé's method for numerically solving the integro-differential wave equation in a 2-dimensional space, hereby sidestepping details (such as inhomogeneity corrections resulting from acceleration and deceleration along the orbit) that constitute a very rich research topic in their own right. The expressions provided in the present paper also assume the magnetic surface labeling coordinate and the toroidal angular momentum can be confused. It was illustrated in <cit.> that accounting for the actual constants of motion is a technical matter increasing the amount of algebra required but that does not pose particular issues.The present paper is structured as follows:Section 2 describes the general starting equation the description relies on. In section 3 the general formalism is applied to a Maxwellian distribution and explicit expressions for the needed functions are provided. Section 4 generalises these results to bi-Maxwellians with a parallel drift. Section 5 briefly mentions how pushing further to arbitrary distributions is possible, at the price of increased computing time.A note on how the general kinetic flux term can be computed is provided in section 6. 
Section 7 is devoted to commenting on a simplified - but limited - description of the parallel dynamics; it also highlights why accounting for the parallel dynamics is challening.The final form of the wave equation is presented in section 8.To allow exploring the potential of the Budé method prior to considering all finite Larmor radius corrections, the 2D equivalent of the operator defined in <cit.> is provided in Section 9. It retains up to second order finite Larmor radius corrections in the operator acting both on the electric field and the test function vector and hence yields up to 4th order partial derivatives. Section 10, finally, sums up the conclusions and comments on the next steps to take towards actual exploitation of the presented expression.§ STARTING EQUATION The original TOMCAT equations were formulated for Maxwellian plasmas. Ichimaru writes the dielectric tensor for an arbitrary distribution function in an elegant form <cit.>. In view of later generalisation intended, we will initially follow his approach.In its most general form, the wave equation for the electric field can be written combining Faraday's and Ampere's law,∇×∇×E⃗= iωμ_o J⃗ +k_o^2E⃗=k_o^2 ϵ.E⃗where E⃗ is the electric field,ϵ is the hot plasma dielectric tensor and k_o=ω/c, where ω is the driver frequency and c the speed of light. In the above J⃗ is the RF perturbed current density J⃗=∑_α q_α∫ dv⃗v⃗ f_RF,αin which f_RF,α is the RF perturbed distribution function, itself related to the non-perturbed distribution function F_o and to the RF electric and magnetic fields E⃗ and B⃗ via Vlasov's equation:f_RF,α=-q_α/m_α∫_-∞^tdt' [E⃗+v⃗×B⃗].∇_v⃗F_o.Intending later exploitation relying on finite elements, it is suitable to solve the wave equation in variational form. Multiplying the wave equation with the complex conjugate of the test function vector F⃗, integrating over a region of interest and performing a partial integration on the ∇×∇×E⃗ to bring out the Poynting flux explicitly, the resulting equation can be written ∫ dx⃗ [k_o^2 F⃗^*. ϵ.E⃗ -(∇×F⃗)^*.(∇×E⃗)]=-∫_S dS⃗.F⃗^*×∇×E⃗. Following Ichimaru and adopting cylindrical coordinates (R,Z,φ) - see Fig. <ref> - the term involving the dielectric tensor can be written asF⃗^*.ϵ.E⃗=[1-∑_αω_p^2/ω^2] F⃗^*.1.E⃗ - 2π∑_αω_p^2/ω^2∑_N=-∞^+∞∫_0^∞ dv_⊥∫_-∞^+∞ dv_//∫ dk'_R dk'_Z ∫ dk_R dk_Z∑_n,n' e^i([k_R-k'_R]R+[k_Z-k'_Z]Z+[n-n']φ)[ N Ω_α∂ F_o/ ∂ v_⊥ +k_//v_⊥∂ F_o/ ∂ v_// ]/ [ N Ω_α + k_//v_//-ω] L(F⃗_k⃗')^*L(E⃗_k⃗)where n and n' are the toroidal mode numbers of E⃗ and F⃗, respectively; F⃗ is the test function vector, E⃗ is the electric field and L is the Kennel-Engelmann operator <cit.> L(H⃗)=v_⊥/2 [H_-J_N+1exp[i(N+1)ψ]+H_+J_N-1exp[i(N-1)ψ]] +v_// H_//J_Nexp[iNψ]in which H±=H_⊥,1± iH_⊥,2 and ψ =tan^-1( k_⊥,2/k_⊥,1 ); the argument of the Bessel functions is ζ=k_⊥ρ_L=k_⊥ v_⊥/Ω_α and we will henceforth assume the tokamak is axisymmetrical so that the various toroidal modes can be treated one by one as finite contributions require n=n'. Note that formally the treatment can easily be extended to 3D by retaining the full sum on n and n' and hence treating all toroidal couplings when projecting the ultimately obtained equation on a full set of toroidal mode test functions rather than just one.The argument ζ contains k_⊥= (k_⊥,1^2+k_⊥,2^2)^1/2. Via the angle ψ the dielectric tensor accounts for directionality of the wave. It has been assumed that the slowly varying distribution function does not depend on the cyclotron gyro-phase. 
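As an aside, the Kennel-Engelmann projection defined above is straightforward to evaluate numerically; the minimal Python helper below (an illustration with ad hoc arguments, not part of any existing solver) implements L(H⃗) for a single harmonic N directly from the expression just given, using scipy's Bessel functions.

import numpy as np
from scipy.special import jv

def kennel_engelmann(H_perp1, H_perp2, H_par, k_perp1, k_perp2, v_perp, v_par, N, Omega):
    # L(H) = v_perp/2 [ H_- J_{N+1} e^{i(N+1)psi} + H_+ J_{N-1} e^{i(N-1)psi} ] + v_par H_par J_N e^{i N psi}
    H_plus, H_minus = H_perp1 + 1j * H_perp2, H_perp1 - 1j * H_perp2
    psi = np.arctan2(k_perp2, k_perp1)                     # psi = atan(k_perp2 / k_perp1)
    zeta = np.hypot(k_perp1, k_perp2) * v_perp / Omega     # zeta = k_perp v_perp / Omega = k_perp rho_L
    return (0.5 * v_perp * (H_minus * jv(N + 1, zeta) * np.exp(1j * (N + 1) * psi)
                            + H_plus * jv(N - 1, zeta) * np.exp(1j * (N - 1) * psi))
            + v_par * H_par * jv(N, zeta) * np.exp(1j * N * psi))

# example call with ad hoc field components, wave vector and velocities
print(kennel_engelmann(1.0, -1.0j, 0.2, 50.0, 20.0, 3.0e5, 2.0e5, 1, 2.0 * np.pi * 40.0e6))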
More generally and thinking beyond 1D application, the slowly varying distribution F_o does not vary on any of the 3 oscillatory fastly varying aspects of the motion: cyclotron gyro-motion, poloidal bounce motion or toroidal precession drift motion.We put the expression in matrix form for easy manipulation.F⃗^*.ϵ.E⃗=[1-∑_αω_p^2/ω^2] F⃗^*.1.E⃗ - 2π∑_αω_p^2/ω^2∑_N=-∞^+∞∫_0^∞ dv_⊥∫_-∞^+∞ dv_//∫ dk'_R dk'_Z ∫ dk_R dk_Z e^i([k_R-k'_R]R+[k_Z-k'_Z]Z+[n-n']φ)[ N Ω_α∂ F_o/ ∂ v_⊥ +k_//v_⊥∂ F_o/ ∂ v_// ]/ [ N Ω_α + k_//v_//-ω] ( [ F_⊥,1 F_⊥,2F_// ] )_k⃗'^* . M_diel. ( [ E_⊥,1; E_⊥,2;E_// ] )_k⃗HereM_diel= ( [G_N,1(k⃗) G_N,1^*(k⃗') G_N,2(k⃗)G_N,1^*(k⃗')G_N,3(k⃗) G_N,1^*(k⃗'); G_N,1 G_N,2^*(k⃗') (k⃗)G_N,2(k⃗) G_N,2^*(k⃗') G_N,3(k⃗)G_N,2^*(k⃗');G_N,1(k⃗) G_N,3^*(k⃗')G_N,2(k⃗) G_N,3^*(k⃗') G_N,3(k⃗)G_N,3^*(k⃗') ] ) andG_N,1(k⃗)=v_⊥/2[+J_N+1e^iψ+J_N-1e^-iψ]e^iNψ G_N,2(k⃗)=iv_⊥/2[-J_N+1e^iψ+J_N-1e^-iψ]e^iNψ G_N,3(k⃗)=v_//J_N e^iN ψ It is important to underline that the dielectric operator ϵ in Eq.<ref> and the above are actually not identical: In the symmetrical weak variational principle formulation, ϵ not only operates to the right on the electric field E⃗ but also to the left on the test function F⃗, while F⃗ does not even appear in Eq.<ref> so that the operator can only act on E⃗.For that reason and despite the fact that the Kennel-Engelman operator was derived for a homogeneous plasma, the above expression is not just a generalisation of the uniform plasma expression, the distinction lying in the fact that the present expression for the dielectric response has one Kennel-Engelman operator acting on the test function (so that k⃗' appears in it)while the other is acting on the electric field (where k⃗ appears). The uniform plasma expression would have both operators having k⃗ so that the test function can simply be moved in front (as in the first equation) while that is no longer possible in the above. Unlike what is expected from first principleswhen the populations are in thermal equilibrium and as noticed early on by various authors, assembling a wave equation by adopting the homogeneous plasma limit and merely substituting k⃗'s by -i∇'s does not guarantee that energy exchange between particles and waves always flows from the waves to the particles. The proper placing of the differential operators in the expression dates back from a discussion in the 80's on the meaning of what we understand under "heating" of a population that is confined by a strong magnetic field <cit.> and yields a dielectric tensor operator that contains derivatives w.r.t. background quantities. The adopted model retains leading order terms in ρ_L/L (Larmor radius on equilibrium scalelength) while ensuring positive definiteness of any wave mode the plasma admits but sidestepping deriving basic equations to account for supplementary - less crucial - corrections due to the inhomogeneity of the background. The choice of the grid is a matter of discussion. The traditional procedure is to use a grid aligned with the magnetic surfaces, which is also convenient for treating the wall (which can then be treated as a coordinate surface when details of the vessel's shape are neglected). This suggests to use the poloidal angle as one of the variables and would allow Fourier analysis in that direction, which has the major advantage that k_// remains a well defined algebraic quantity and allows to keep using the plasma dispersion function when introducing the poloidal field effects. 
It has the drawback that there is a singularity of the coordinate system at the magnetic axis, although physically the axis is a perfectly regular point. The other option is to use a "Cartesian" (R,Z) grid in a toroidal cut. This allows taking over almost all of the machinery developed for the 1D application - up to upgrading the algebra to permit 2D exploitation - but has the drawback that the wall is not a coordinate surface. It is - however - perfectly feasible to have a complicated wall structure when imposing the wall boundary conditions via Lagrange multipliers <cit.>. An intermediate procedure seems most appropriate: adopting a "Cartesian" (R,Z) grid in a toroidal cut but exploiting triangles as elementary finite elements in which the wave equation is solved exploiting a set of base functions of sufficiently high order for the equation at hand. Powerful finite element packages are available allowing exactly that; e.g. MFEM <cit.> supports a wide variety of finite element spaces in 2D and 3D, including arbitrary high-order representation. Exploiting them, wall or plasma edge surfaces that are not coordinate surfaces can be treated elegantly while all finite element algebra to construct and exploit the proper base functions is done internally in MFEM routines. When the poloidal magnetic field is accounted for, the mathematical and numerical effort needed is increased.A rotation matrix ℛ now connects the locally adopted (e⃗'⃗_⊥,1,e⃗'⃗_⊥,2,e⃗_//) and the global or geometrical (e⃗_R,e⃗_Z,e⃗_φ) frames. Although for the perpendicular direction the procedure is essentially unchanged, the parallel direction requires more attention since k_// now has a poloidal component. Labeling Θ to be the angle between the toroidal and the parallel direction one hask_//=cosΘn/R + sinΘ k_θ=cosΘ k_φ + sinΘ k_θin which the poloidal wave vector component k_θ itself is a function of k_R and k_Z in general. The 2 perpendicular directions defined via the unit vectorse⃗'⃗_⊥,1ande⃗'⃗_⊥,2can be defined to be "as close as possible" to e⃗_R and e⃗_Z; see Fig. <ref>.This requires d/dβ [e⃗'⃗_⊥,1 . e⃗_R]=d/dβ [cosβe⃗_⊥,1 . e⃗_R+sinβe⃗_⊥,2 . e⃗_R]=0where e⃗_⊥,1=∇ρ / |∇ρ| and e⃗_⊥,2=[∂θ/∂x⃗] / |∂θ/∂x⃗| (with ρ the magnetic surface labeling parameter and θ the poloidal angle) so sinβ =-cosΘsinα/[cos^2α +(cosΘsinα)^2 ]^1/2, cosβ =cosα/[cos^2α +(cosΘsinα)^2 ]^1/2.in which α is the angle between e⃗_R and e⃗_⊥,1. This elegant procedure was already exploited in AORSA <cit.>.There is a supplementary subtlety: k_// appears in the factor introducing the resonant denominator. Depending on the approximations used, k_//=k_//(k⃗) or k_//=k_//(k⃗,k⃗'). The former is most frequently used in modelling while the latter is aligned with the leading order contribution picked up when accounting for the bounce dynamics but subsequently omitting the sum on the bounce modes by only retaining the dominant contribution of the relevant discrete bounce sum transformed into the corresponding bounce integral (in that case the relevant k_// to use in the resonant denominator is k_//=cosΘ n/R + sinΘ (k_θ+k'_θ)/2; see e.g <cit.>). Aligned with more general principles (formulated in the visionary paper by Kaufman <cit.>), the latter approach guarantees a positive definite power absorption for any wave a plasma composed of species in thermal equilibrium(i.e. 
having Maxwellian distribution functions) admits.We will henceforth adopt the notation𝒫_0=[1-∑_αω_p^2/ω^2] 1and will label the remainder of the expression for the operator ϵ as 𝒫_1in which the latter - following the procedure proposed by Budé - is subsequently fitted using a multipolynomial fit 𝒫_1= ∑_I,J,K,L𝒫_1,I,J,K,L k_R^Ik_Z^J(k'_R)^K(k'_Z)^Lwhere all sums are truncated at a predetermined order so that the dielectric response term in the wave equation can be written as a sum of partial differential equation contributions F⃗^*.ϵ.E⃗=F⃗^*.𝒫_0.E⃗-∑_I,J,K,L(-i)^I+J-K-L∂^K+LF⃗^* /∂ R^K ∂ Z^L.𝒫_1,I,J,K,L.∂^I+JE⃗/∂ R^I ∂ Z^Jin which the vectors are expressed in terms of their cylindrical components. The Fourier integrations have been done relying on ∂^I+JH/∂ R^I ∂ Z^J(x⃗) = (+i)^I+J∫ dk_R dk_Z exp[i(k_R R + k_Z Z +nφ)] k_R^I k_Z^J H_k⃗.based on the Fourier representationH(x⃗) = ∫ dk_R dk_Z exp[i(k_R R + k_Z Z +nφ)] H_k⃗.In the particular case that the poloidal field is zero, the computation is much simplified: In that case k_// only depends on n, R and Z,yielding expressions of the form∑_N=-∞^∞[ N Ω_α∂ F_o/ ∂ v_⊥ +k_//v_⊥∂ F_o/ ∂ v_// ]/ [ N Ω_α + k_//v_//-ω]G_N,i(k⃗_⊥(k_R,k_Z))G_N,j^*(k⃗'_⊥(k'_R,k'_Z))in which G_N,i(k⃗_⊥) and G_N,j(k⃗_⊥') can be fitted separately i.e. in which only 2D fits in terms ofeither (k_R,k_Z)or (k_R',k_Z') - rather than 4D fits of products of G_N,... and including the resonant denominator in terms of both (k_R,k_Z) and (k_R',k_Z') - need to be made.§ DERIVATION OF PRACTICAL EXPRESSIONS FOR MAXWELLIAN F_O In case the distributions are Maxwellian i.e. F_o=1/[2π]^3/2v_t^3exp[-(v_⊥^2+v_//^2)/[2v_t^2]]then the expression for 𝒫_1 becomes𝒫_1= -2π∑_αω_p^2/ω^2∑_N=-∞^+∞∫_0^∞ dv_⊥∫_-∞^+∞ dv_//v_⊥ F_o/v_t^2 [ 1+ω/[ N Ω_α + k_//v_//-ω] ] M_dielThe following expression for evaluating the integrals of Bessel function products is handy <cit.>:ℐ_1(ν,a,b,p)=∫_0^∞ t J_ν(at)J_ν(bt)e^-p^2t^2dt=1/2p^2exp[-a^2+b^2/4p^2]I_ν(ab/2p^2)which holds for Re[ν]>-1 and |arg(p)|<π/4. For the latter, no action is required since p is real for our application. The former requires exploiting the relations of the Bessel functions to obtain the suitable expressions. The identity J_-N=(-)^NJ_N readily states that it suffices to use |N| when evaluating the expressions, as also follows from I_-N=I_N. Taking the derivative w.r.t. a yieldsℐ_2(ν,a,b,p)=∫_0^∞ t^2 J'_ν(at)J_ν(bt)e^-p^2t^2dt=1/4p^4exp[-a^2+b^2/4p^2][-a I_ν(ab/2p^2)+bI'_ν(ab/2p^2) ]and taking the derivative w.r.t. both a and b yieldsℐ_3(ν,a,b,p)=∫_0^∞ t^3 J'_ν(at)J'_ν(bt)e^-p^2t^2dt=1/4p^4exp[-a^2+b^2/4p^2] [ab/2p^2(I_N+I_N”)+(1-(a^2+b^2)/2p^2)I'_N]in which the quotes are the derivatives w.r.t. the argument of the respective functions. The following expressions are useful:I'_ν=1/2[I_ν+1+I_ν-1] I”_ν(z)=[z^2+ν^2]I_ν -z I'_ν/z^2It was checked both analytically and numerically that the latter does not diverge at z=0.The finite ν^2 I_ν term compensates z I'_ν term so that the denominator going to zero does not cause a problem. For evaluating these functions numerically it is needed to evaluate the expressions slightly away from z=0 to avoid overflows. We write the matrix in terms of the Bessel functions and their derivatives. which yields the specific values ν=N, t=v_⊥, a=k_⊥/Ω, b=k'_⊥/Ω and p=1/[2^1/2v_t]. The required parallel integrals are of the formI_//,j(ξ)=∫_-∞^+∞dt t^j/t-ξ e^-t^2where I_//,0(ξ)=π^1/2𝒵(ξ)with 𝒵 the plasma dispersion function <cit.>. 
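Both building blocks are easy to check numerically. The short sketch below uses ad hoc, order-unity arguments rather than the physical v_⊥ and k_⊥/Ω values: it verifies the closed form of ℐ_1 against direct quadrature, written with the scaled Bessel function ive(ν,z)=e^-z I_ν(z) so that the two exponentials combine into exp[-(a-b)^2/4p^2], and it evaluates 𝒵 through the standard identity 𝒵(ξ)=i√(π) w(ξ) with w the Faddeeva function, together with the Fried-Conte relation d𝒵/dξ=-2[1+ξ𝒵] used further on.

import numpy as np
from scipy.integrate import quad
from scipy.special import jv, ive, wofz

def I1_closed(nu, a, b, p):
    # (1/2p^2) exp[-(a^2+b^2)/4p^2] I_nu(ab/2p^2), written with ive to avoid overflow
    return np.exp(-(a - b)**2 / (4.0 * p**2)) * ive(nu, a * b / (2.0 * p**2)) / (2.0 * p**2)

def I1_quad(nu, a, b, p):
    return quad(lambda t: t * jv(nu, a * t) * jv(nu, b * t) * np.exp(-(p * t)**2), 0.0, np.inf)[0]

def Z(xi):
    # plasma dispersion function of Fried and Conte, Z(xi) = i sqrt(pi) w(xi)
    return 1j * np.sqrt(np.pi) * wofz(xi)

print(I1_closed(2, 1.3, 0.9, 0.8), I1_quad(2, 1.3, 0.9, 0.8))   # ad hoc order-unity arguments
xi, h = 1.2, 1.0e-6
print((Z(xi + h) - Z(xi - h)) / (2.0 * h), -2.0 * (1.0 + xi * Z(xi)))   # check dZ/dxi = -2[1 + xi Z]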
The recursionI_//,M=Ĩ_M-1+ξ I_//,M-1whereĨ_M=∫_-∞^+∞dt e^-t^2t^Mallows to find the needed expressions. The integrals Ĩ_M themselves are zero for odd M and can be found from the recursionĨ_M=[M-1]/2Ĩ_M-2for even M. Starting from Ĩ_0=π^1/2 we get Ĩ_2=π^1/2/2. Then I_//,1=Ĩ_0+ξ I_//,0=π^1/2[1+ξ𝒵(ξ)] and I_//,2=Ĩ_1+ξ I_//,1=π^1/2ξ (1+ξ𝒵(ξ)). We can now perform all velocity space integrals. The result for contributions to -𝒫_1 and hence directly to be summed to the [1-ω_p^2/ω^2] 1 term is 2π∑_αω_p^2/ω^2∑_N=-∞^+∞∫_0^∞ dv_⊥∫_-∞^+∞ dv_//v_⊥ F_o/v_t^2 [ 1+ω/[ N Ω_α + k_//v_//-ω] ] G_N,1(k⃗) G_N,1^*(k⃗')= ∑_αω_p^2/ω^2v_t^4∑_N=-∞^+∞ [1+ω/k_//2^1/2v_t𝒵_N] [(NΩ)^2/k_⊥ k'_⊥cosψcosψ' ℐ_1(ν,a,b,p)+iNΩ/k_⊥cosψsinψ' ℐ_2(ν,b,a,p) -iNΩ/k'_⊥sinψcosψ' ℐ_2(ν,a,b,p)+ sinψsinψ' ℐ_3(ν,a,b,p)]] e^iN[ψ-ψ']where ξ_N=[ω-NΩ_α]/[k_//2^1/2v_t].Introducing the notationR_α,β=2π∑_αω_p^2/ω^2∑_N=-∞^+∞∫_0^∞ dv_⊥∫_-∞^+∞ dv_//v_⊥ F_o/v_t^2 [ 1+ω/[ N Ω_α + k_//v_//-ω] ] G_N,α(k⃗) G_N,β^*(k⃗') so that R is the transpose of P_1 we similarly getR_1,2 = ∑_αω_p^2/ω^2v_t^4∑_N=-∞^+∞ [ 1+ω/k_//2^1/2v_t𝒵_N] [ -iΩ N /k_⊥cosψcosψ' ℐ_2(ν,b,a,p)+(NΩ)^2/k_⊥ k'_⊥cosψsinψ' ℐ_1(ν,a,b,p) -ℐ_3(ν,a,b,p)sinψcosψ' - i N Ω/k'_⊥sinψsinψ' ℐ_2(ν,a,b,p)] e^iN[ψ-ψ'] R_1,3= ∑_αω_p^2/ω^2v_t^4∑_N=-∞^+∞ [ ω/k_// [1+ξ_N 𝒵_N]] [NΩ/k_⊥cosψℐ_1(ν,a,b,p) -i ℐ_2(ν,a,b,p) sinψ] ] e^iN[ψ-ψ'] R_2,1= ∑_αω_p^2/ω^2v_t^4∑_N=-∞^+∞ [1+ω/k_//2^1/2v_t𝒵_N ] [ iNΩ/k'_⊥cosψcosψ' ℐ_2(ν,a,b,p)+(NΩ)^2/k_⊥ k'_⊥sinψcosψ' ℐ_1(ν,a,b,p) -cosψsinψ' ℐ_3(ν,a,b,p)+iNΩ/k_⊥sinψsinψ' ℐ_2(ν,b,a,p) ]e^iN[ψ-ψ'] R_2,2= ∑_αω_p^2/ω^2v_t^4∑_N=-∞^+∞ [ 1+ω/k_//2^1/2v_t𝒵_N] [ ℐ_3(ν,a,b,p) cosψcosψ' -iN Ω/k_⊥ℐ_2(ν,b,a,p)sinψcosψ' +iNΩ/k_⊥'cosψsinψ' ℐ_2(ν,a,b,p)+(NΩ)^2/k_⊥ k'_⊥sinψsinψ' ℐ_1(ν,a,b,p)] e^iN[ψ-ψ'] R_2,3= ∑_αω_p^2/ω^2v_t^4∑_N=-∞^+∞ [ ω/k_//[1+ξ_N 𝒵_N]][ iℐ_2(ν,a,b,p) cosψ+NΩ/k_⊥ℐ_1(ν,a,b,p)sinψ ] e^iN[ψ-ψ'] R_3,1 = ∑_αω_p^2/ω^2v_t^4∑_N=-∞^+∞ [ ω/k_// [1+ξ_N 𝒵_N]][ NΩ/k'_⊥cosψ' ℐ_1(ν,a,b,p)+iℐ_2(ν,b,a,p) sinψ']e^iN[ψ-ψ'] R_3,2 = ∑_αω_p^2/ω^2v_t^4∑_N=-∞^+∞ [ ω/k_// [1+ξ_N 𝒵_N]][ -iℐ_2(ν,b,a,p) cosψ' +NΩ/k'_⊥ℐ_1(ν,b,a,p) sinψ']e^iN[ψ-ψ'] R_3,3 = ∑_αω_p^2/ω^2v_t^2∑_N=-∞^+∞ [ 1+2ω/k_//2^1/2v_tξ_N [1+ξ_N 𝒵_N ]]ℐ_1(ν,b,a,p)e^iN[ψ-ψ'] Realising that the leading order term in the asymptotic expansion of both I_N and its derivative is exp[z]/[2π z]^1/2, the exponential terms in the integrals combine to yield exp[-(a - b)^2/(2p)^2] = exp[-(k_⊥ - k'_⊥ )^2ρ^2_L/2] showing that contributions are small when k_⊥ and k'_⊥ differ significantly.§ UPGRADE TO BI-MAXWELLIAN DISTRIBUTIONS WITH PARALLEL DRIFT In case the distribution is a bi-Maxwellian with perpendicular temperature T_⊥, parallel temperature T_// and parallel velocity drift v_//,d so that F_o=1/[2π]^3/2v_t,⊥^2v_t,//exp[-v_⊥^2/2v_t,⊥^2] exp[-(v_//-v_//,d)^2/2v_t,//^2]where v_t,⊥=[kT_⊥/m]^1/2 and v_t,//=[kT_///m]^1/2, the termv_⊥ F_o/v_t^2 [ 1+ω/[ N Ω_α + k_//v_//-ω] ]in Eq. <ref> needs to be upgraded tov_⊥ F_o/v_t,//^2 [ 1+ω+NΩ_α[(v_t,///v_t,⊥)^2-1]-k_//v_//,d/[ N Ω_α + k_//v_//-ω] ].Since - aside from the upgraded F_o - no new dependence on v_⊥ or v_// is introduced but only the coefficients in the above differ from the already obtained results, the upgrade to this more general type of distribution merely requires to replace v_t by v_t,⊥ in the perpendicular integrals (a factor 1/v_t,⊥^2 already appearing from the expression of F_o and further contributions related to the ℐ_... 
integrals), v_t by v_t,// in the parallel integrals, to upgrade the argument of theplasma dispersion function ξ_N=[ω-NΩ]/[k_//2^1/2v_t,//] to ξ_N,d=[ω-NΩ-k_//v_//,d]/[k_//2^1/2v_t,//] generalising the notation 𝒵_N to 𝒵_N,d and adjusting the elementary parallel integrals to account for the extra drift while upgrading the proper coefficients 1 for the nonresonant and ω for the resonant contributions in the straight brackets together with the 1/v_t^2 in front of it in Eq.<ref> to 1/v_t,//^2 and [ω+NΩ_α[(v_t,///v_t,⊥)^2-1]-k_//v_//,d]/v_t,//^2. While the perpendicular integrals are untouched except for introducing the proper thermal velocity, the parallel ingredients are mildly upgraded using the splitting v_//=[v_//-v_//,d]+v_//,d and relying on partial integration. LabelingI_NR //,d,m=∫ dv_// v_//^m exp[-(v_//-v_//,d)^2/2v_t,//^2]=(2^1/2v_t,//)^m+1∫ dq (q+q_d)^m exp[-q^2]=(2^1/2v_t,//)^m+1I_NR,mwe can easily find that I_NR,m=m-1/2I_NR,m-2+q_dI_NR,m-1so that we get the 3 needed nonresonant parallel integrals I_NR//,d,0=2^1/2v_t,//π^1/2, I_NR//,d,1=2v_t,//^2q_dπ^1/2 and I_NR//,d,2=2^3/2v_t,//^3π^1/2[q_d^2+1/2] where q_d=v_//,d/[2^1/2v_t,//]. For the resonant integrals and labeling I_R//,d,m=∫ dv_//v_//^m/[N Ω_α + k_//v_//-ω] exp[-(v_//-v_//,d)^2/2v_t,//^2] =(2^1/2v_t,//)^m/k_//∫ dq (q+q_d)^m/[q-ξ_N,d] exp[-q^2]=(2^1/2v_t,//)^m/k_//I_R,mfor which the recursion I_R//,d,m=I_NR//,d,m-1+(ξ_N,d+q_d)I_R//,d,m-1holds, we get I_R//,d,0=π^1/2𝒵_0,d/k_//,I_R//,d,1=[2^1/2v_t,///k_//]π^1/2[1+ξ_1,d,0𝒵_1,d] andI_R//,d,2=[2v_t,//^2/k_//]π^1/2[ξ_2,d,1+ξ_2,d,0^2𝒵_2,d] in which we introduced the extra convention ξ_N,d,j=[ω-NΩ +j k_//v_d,//]/[2^1/2v_t,//k_//]. Once the expressions for the dielectric response for a prescribed k⃗ are found, Budé's procedure requires to fit these functions. There are various ways of doing this. The brute-force method - exploiting (R,Z,φ) as independent variables and recalling that the relevant parallel wave number involves both k⃗ and k⃗' - is to perform a 4-dimensional fit in terms of k_R, k_Z, k'_R and k'_Z for a given set of toroidal mode numbers; fitting 1 direction at the time reveals itself to be more accurate than performing a single 4-D fit.§ UPGRADE TO ARBITRARY DISTRIBUTIONS In <cit.> or even more simply in <cit.>, the procedure was illustrated on how - at the price of supplementary computational time - arbitrary distribution functions F_o can be accounted for. Representing F_o using a piece-by-piece linear-by-linear representation for an F_o that is known on a sufficiently refined grid of points (either for an analytically known distribution or the numerical solution of a Fokker-Planck equation) the velocity space integral is broken up into a double sum of elementary integrals that can be solved by hand. The needed integrals are of the general form∫_v_⊥,i^v_⊥,i+1dv_⊥∫_v_//,j^v_//,j+1 dv_//[v_⊥-v_⊥,i]^m[v_//-v_//,j]^n/k_//v_//+NΩ-ω=1/k_//I_NR,m,Δ_⊥I_R,n,Δ_//with q_res=[ω-NΩ-k_//v_//,j]/k_//. The nonresonant integralsI_NR,m,Δ areI_NR,m,Δ=Δ^m+1/m+1while the resonant ones can be found from the recursionI_R,n,Δ=I_NR,n-1,Δ+q_resI_R,n-1,Δwith I_R,0,Δ=ln (q_res-Δ/q_res)=ln(v_//,j+1-v_//,res/v_//,j-v_//,res)=ln |v_//,j+1-v_//,res/v_//,j-v_//,res |+iπ [β̃_j+1-β̃_j].where β̃_j is the argument of v_//,j-v_//,res. 
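A minimal numerical check of these elementary cell integrals could look as follows; the cell parameters are ad hoc and the check is restricted to the real branch, i.e. to a cell that does not contain the resonance, the imaginary part picked up when the resonance is crossed being fixed by causality as discussed next.

import numpy as np
from scipy.integrate import quad

def cell_integrals(n_max, Delta, q_res):
    # I_NR,m = Delta^(m+1)/(m+1);  I_R,n = I_NR,n-1 + q_res I_R,n-1,  I_R,0 = ln[(q_res-Delta)/q_res]
    I_NR = [Delta**(m + 1) / (m + 1) for m in range(n_max + 1)]
    I_R = [np.log((q_res - Delta) / q_res)]
    for n in range(1, n_max + 1):
        I_R.append(I_NR[n - 1] + q_res * I_R[-1])
    return I_NR, I_R

Delta, q_res = 0.2, -0.35              # ad hoc cell width and pole position (pole outside the cell)
I_NR, I_R = cell_integrals(3, Delta, q_res)
for n in range(4):
    print(n, I_R[n], quad(lambda t: t**n / (t - q_res), 0.0, Delta)[0])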
When the resonance is crossed and accounting for causality (replacing ω by ω+iν where ν is infinitesimal but positive as ν is physically associated with weak but finite collisionality) the logarithm picks up an imaginary part iπ |k_//|/k_//: for positive k_// the pole needs to be encircled from below and the arguments of v_//,j+1-v_//,res and v_//,j-v_//,res are 0 and -π; for negative k_// it has to be encircled from above so the arguments are 0 and +π.§ FLUX TERMS It should be reminded that the here proposed variational method - suitable for finite element exploitation - does not express the wave equation in terms of the actual dielectric tensor but rather exploits an operator acting both on the electric field and the test function vector. Unless one wants to know what the wave fluxes are, the expression of the corresponding dielectric tensor (an operator solely acting on the electric field and not on the test function) thus strictly is not required. For a dielectric response operator truncated at low order finite Larmor radius corrections, the corresponding dielectric tensor expression can be readily derived by repeated partial integrations to remove all derivatives from the test function vector and transferring them to the electric field (see <cit.> for an example). In practice, the actual computation of this tensor quickly becomes cumbersome, in particular - as is the case for the Budé method - if high order derivatives are needed. The total flux and its derivatives are, however, continuous so a suitable choice of equally smooth base functions ensures no net flux terms appear when assembling the linear system corresponding to the projection of the wave equation on these functions (again, see the above cited example for explicit expressions) eliminating the need for an explicit expression of the kinetic flux. By reverse engineering, the radial evolution of the total flux S⃗ can numerically be inferred from the power balance even when no explicit expression of the kinetic flux is available: By definition, ∇.S⃗+P_abs=0 with P_abs=∫ dx⃗ k_o^2 E⃗^*.ϵ.E⃗/[iωμ_o] (with ϵ the here adopted dielectric operator acting both on E⃗ and F⃗, the latter being replaced by E⃗ when evaluating the absorption), which can locally be evaluated to determine the total flux S⃗ crossing a magnetic surface, starting from the known source at the antenna and the wall while the integral over a magnetic surface infinitesimally close to the magnetic axis needs to approach zero. The radial component of the kinetic flux S⃗_kinetic integrated over the magnetic surface can be found by subtracting the radial component of the magnetic surface integrated Poynting flux S⃗_Poynting=E⃗^*×∇×E⃗/[i ωμ_o] from the total flux. A 1D illustration of this procedure is provided in <cit.>. § BUDÉ'S METHOD AND PARALLEL DYNAMICSSince the angle Θ between the parallel and toroidal directions is small near the core where heating is typically taking place and because of the adopted choice of perpendicular unit vectors, adopting a truncated Taylor series expansion of the plasma dispersion function 𝒵(ξ) is sometimes justified. Colestock and Kashuba used this approximation to illustrate how the poloidal field affects the wave dynamics <cit.>. The validity of such an expansion needs to be carefully checked, though. 
If allowed, this permits to make the poloidal k-component k_θ (and hence the k_R and k_Z components it depends on) explicit, the Taylor expansion together with the fits of the Kennel-Engelmann operator then in turn allowing to adopt the simplified fitting of the perpendicular dynamics part of the dielectric response functions even in presence of a finite poloidal magnetic field: 𝒵(ξ) ≈𝒵(ξ_tor)+∂𝒵/∂ k_Rk_R+∂𝒵/∂ k_Zk_Z+∂^2 𝒵/∂ k_R^2k_R^2/2+∂^2 𝒵/∂ k_R∂ k_Zk_Rk_Z+∂^2 𝒵/∂ k_Z^2k_Z^2/2.Here all partial derivatives are to be evaluated at k_R=k_Z=0 for a given cyclotron harmonic N and ξ=[ω-NΩ]/[2^1/2v_t k_//] while ξ_tor=[ω-NΩ]/[2^1/2v_t k_tor].The partial derivatives in the above are determined by the chain rule, so e.g. ∂𝒵(ξ)/∂ k_γ≈d𝒵/dξ(ξ_tor) ∂ξ/∂ k_//∂ k_///∂ k_γin which the first term is evaluated using d𝒵(ξ)/dξ=-2[1+ξ𝒵 ], the second is simply ∂ξ/∂ k_//=-ξ/k_// while the last is either ∂ k_/// ∂ k_R =-sinΘsinα or ∂ k_/// ∂ k_Z =sinΘcosα; α is the angle between e⃗_ρ and e⃗_R. An obvious drawback of the above is that the parallel dynamics is not properly modeled: the k_//- up- or downshift due to the presence of the poloidal field is inadequately modeled since the k_θ-dependent terms have been removed from the argument of the plasma dispersion function, an effect that is particularly visible when the toroidal mode number is small and/or the poloidal component of the wave vector significant so that the poloidal corrections to k_// are important. The fact that the parallel gradient remains a differential operator - as opposed to an algebraic one - is a drawback of the Budé procedure since the above computed simple Taylor series expansion is thus not necessarily a good representation. Budé himself illustrated his elegant and powerful procedure in absence of the poloidal field and in 1D <cit.>. He stated that exploitation of his procedure beyond 1D would actually yield computational benefits rather than extra bottlenecks compared to solving the full integro-differential equation. In absence of poloidal field effects it is certainly true that his procedure is indeed much less time-consuming. In presence of a finite poloidal field this is no longer necessarily the case. In case a procedure could be found that decouples the parallel from the perpendicular dynamics (i.e. a procedure that directly reasons in terms of x_// to capture the isolated parallel dynamics in some way) a further speed-up could possibly be realised.Deriving the hot plasma conductivity tensor for a tokamak, Svidzinski <cit.> explored a route to include the parallel dynamics by expressing the electric field at the position (R',Z') in the orbit integral yielding the dielectric response in terms of its values at the grid points (R_i,Z_j) later adopted for the actual numerical solving of the wave equation. Adopting a sufficiently refined grid, this allows him to locally represent the E⃗(R',Z') using a low order polynomial (he uses second order Lagrange polynomials) and define a nonlocal conductivity tensor accounting for the poloidal field. 
He performs the needed integrals semi-analytically and labels the resulting procedure as computationally expensive, requiring parallel processing.Applying Svidzinski's philosophy to Budé's method as a means to seek to push down the computational requirements by performing some of the integrals by hand amounts to introducing a grid-defined local approximation of the integrand: Whereas Svidzinski's approach makes a Taylor series expansion of the electric field and expresses the local contribution in terms of the electric field values at the 4 corner points of a local finite element, the here adopted approach cannot make such a distinction since it dominantly operates in k⃗ space so the electric field (here actually its Fourier component E⃗_k⃗) is common to all 4 corner points; it is via the exponential factor exp[i (k⃗-k⃗').x⃗]that the location where E⃗ is evaluated is specified. Adopting a sufficiently refined (R,Z) grid and realising all functions are locally smooth (the least smooth function appearing being the causality smoothed logarithm when performing the parallel velocity integral in the case of an arbitrary distribution function and requiring the most tight gridding) we can locally perform a Taylor series expansion in x⃗ for fixed (k_R,k_Z), however: the reason why a Taylor series expansion in the discussion in this section so faris not always suitable is because of the wide range of k⃗, not the mild variation due to local changes of the equilibrium quantities or the rotation matrix R. For any given k⃗ a proper grid can be chosen to ensure the variation of the integrand is mild in the local finite element. If the shortest wavelength wave expected in the solution has a wave vector with magnitude k_max=|k⃗| then the grid spacings Δ R = R_i+1-R_i and Δ Z = Z_j+1-Z_j should be chosen such that both k_maxΔ R << 1 and k_maxΔ Z <<1. These conditions whatsoever need to be satisfied to ensure the finite element procedure is sufficiently accurate for the expected wavelengths.The type of integrals to perform for a given toroidal mode number is ∫_R_i^R_i+1∫_Z_j^Z_j+1 dRdZ ∫ dk_Rdk_Z ∫ dk'_Rdk'_Z e^[i(k⃗-k⃗').x⃗] F_k⃗',α^*𝒫(ζ,ζ',ψ,ψ',R,Z)𝒬(k_//,R,Z)E_k⃗,βin which the first 2 integrals are on a small finite element, ζ is the argument of the Bessel functions (ζ=k_⊥ρ_L) in which k_⊥(k_R,k_Z,R,Z) and similar for k_//, k'_⊥ and k'_// but not for ψ nor ψ'. Inside the finite element we can adopt the local variables ζ_R and ζ_Z varying from -0.5 to 0.5.For a sufficiently refined grid, we can then represent the various functions in each interval making use of a suitable set of low order polynomial (finite-element) base functions e.g. 
𝒫≈𝒫_i,j+∂𝒫/∂ R |_i,jΔ Rζ_R+∂𝒫/∂ Z |_i,jΔ Zζ_Z+∂^2 𝒫/∂ R ∂ Z |_i,jΔ RΔ Zζ_Rζ_Z+...and similar for 𝒬 as well as the exponential factor, the partial derivatives in which make extra powers of k_R and k_Z appear via partial derivatives of ζ=k_⊥ρ_L (Larmor radius corrections) and k_// (parallel gradient corrections) on R and Z for given k_R and k_Z.In particular we get expressions involving the plasma dispersion function𝒵 (ξ) ≈𝒵(ξ_i,j)+∂𝒵/∂ξ [∂ξ/∂ RΔ R ζ_R + ∂ξ/∂ ZΔ Z ζ_Z]in which the second and third term in the right hand side is - by construction - an as small correction as required.The final result after integrating over ζ_R, ζ_Z (which just requires integrating low order polynomials over the small intervals) and performing the inverse Fourier transform - after adopting Budé's procedure to fit the various functions to a high order polynomial - is an expression solely in terms of the electric field, the test function vector and their derivatives at a series of reference points (R_mid,Z_mid).The advantage of considering a volume rather than an individual point when evaluating the dielectric response in k⃗ space is that this has a smoothing effect and hence potentially allows reducing the order of the polynomials needed to fit the response. This can most easily be illustrated when considering the integration of the parallel integral for arbitrary F_o (see section 5), which also represents the toughest case where the resonance crossing - represented by the argument of the logarithm crossing zero - locally yields a discontinuous jump of the imaginary part of the logarithm, traditionally smoothed by introducing a finite collisionality. The relevant type of integrals is of the form∫ dζ_R dζ_Z ζ_R^m ζ_Z^n ln([ v_//,i+1-v_//,res ]/[ v_//,i-v_//,res]) which can be evaluated repeatedly using the elementary expression <cit.>∫ dz z^m ln z=z^m+1/m+1[ln z-1/(m+1)]. Mathematically, Budé's fitting-based method is on firm ground but it requires high-order derivatives, the accuracy of the numerical evaluation of which may be difficult. Note that because of this fitting - or (provided all degrees of freedom are exploited) equivalent differencing -procedure the road to 3D application is - mathematically speaking - immediately open, be it that extensive computational time needs to be devoted to compute the needed functions for a series of k⃗ values at all grid positions, allowing to find the required fits. As already mentioned, the bottlenecks deciding on the usefulness of the Budé procedure are expected to be of practical (computer time and memory requirements) rather than of mathematical nature. Rather than a single toroidal mode number (or a set of coupled toroidal mode numbers and retaining the coupling between them, as was mentioned earlier), a polynomial fit now also involving the toroidal mode number then appears and can - just like for the other k⃗'s - be transformed back to higher order derivatives in the toroidal direction. At that stage, not even the differential operators accounting for the curvature are needed: exploiting the rotation matrices, all can simply be expressed in terms of the basic cartesian (X,Y,Z) coordinates rather than the here adopted (R(X,Y),Z,φ(X,Y)) to account for the toroidal curvature and be written exploiting the basic Cartesian differential operators. For example, applying the Budé equivalent of Svidzinski's procedure now involves a set of given fixed (k_X,k_Y,k_Z) rather than the fixed cylindrical (k_R,k_Z) used in the present paper. 
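As a small aside before moving to the assembled wave equation, the elementary logarithmic primitive quoted above, and the fact that the ln singularity remains integrable across a cell containing the resonance (the smoothing effect invoked above), are easy to verify numerically; the following lines are a throwaway check with ad hoc numbers.

import numpy as np
from scipy.integrate import quad

def primitive(z, m):
    # antiderivative of z^m ln z :  z^(m+1)/(m+1) [ln z - 1/(m+1)]
    return z**(m + 1) / (m + 1) * (np.log(z) - 1.0 / (m + 1))

z1, z2, m = 0.3, 1.7, 2                # ad hoc interval and power
print(primitive(z2, m) - primitive(z1, m), quad(lambda z: z**m * np.log(z), z1, z2)[0])
# the logarithmic singularity at a resonance inside a cell is integrable (finite result, = ln(1/2) - 1 here)
print(quad(lambda z: np.log(abs(z - 0.5)), 0.0, 1.0, points=[0.5])[0])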
§ THE WAVE EQUATION: PRACTICAL FORM In variational form and assuming the base functions exploited by the finite element exploitation are properly chosen so that internal flux terms are unnecessary, the wave equation can be reduced to2π∫ dR dZ R[[k_o^2 F⃗^*. ϵ.E⃗ ]-( ∇×F⃗)^*.(∇×E⃗)]=0 where F⃗^*.ϵ.E⃗=F⃗^*.𝒫_0.E⃗-∑_I,J,K,L(-i)^I+J-K-L∂^K+LF⃗^* /∂ R^K ∂ Z^L.𝒫_1,I,J,K,L.∂^I+JE⃗/∂ R^I ∂ Z^J.which can be expressed in terms of (E_R,E_Z,E_φ) when including the rotation matrix ℛ and its inverse or in terms of (E_⊥,1,E_⊥,2,E_//)when omitting it.In matrix form, the expression for ( ∇×F⃗)^*.(∇×E⃗) in terms of the components parallel and perpendicular to B⃗_o is( ∇×F⃗)^*.(∇×E⃗)=([ F_⊥,1^* F_⊥,2^*F_//^* ] ) .[ C_oo + ∂_R C_Ro+ ∂_Z C_Zo+ C_oR∂_R + C_oZ∂_Z + ∂_R C_RR∂_R+ ∂_R C_RZ∂_Z + ∂_Z C_ZR∂_R + ∂_Z C_ZZ∂_Z ] . ( [ E_⊥,1; E_⊥,2;E_// ] )for which the explicit expression of the various matrices can be found in <cit.>. The arrows above the partial differential operators in the above indicate in which direction the operator acts, to the left on the test function vector or to the right on the electric field. Like the plasma term we write the volume terms as a sum of contributions referring to the various derivatives( ∇×F⃗)^*.(∇×E⃗)=∑_I,J,K,L∂^K+LF⃗^* /∂ R^K ∂ Z^L.𝒞_I,J,K,L.∂^I+JE⃗/∂ R^I ∂ Z^Jwhere all introduced notations can be identified by looking at the corresponding matrix expressions provided. The whole wave equation can now be written in matrix form as2π∫ dR dZ R[∑_I,J,K,L(-i)^I+J-K-L∂^K+LF⃗^* /∂ R^K ∂ Z^L. [ k_o^2[𝒫_0,I,J,K,L-𝒫_1,I,J,K,L] -𝒞_I,J,K,L ] . ∂^I+JE⃗/∂ R^I ∂ Z^J ] =0. When exploiting nonlinear regression to define a proper fit in k⃗-space allowing to recast the integro-differential wave equation into a higher order partial differential equation following the technique proposed by Budé, the choice of the order of the polynomial is a balance between ensuring a suitably correct fit (which suggests choosing M as large as possible) and avoiding the reigning partial differential equation has unpractically high partial derivatives (which suggests choosing M as modest as possible). Nonlinear regression minimises the summed "distance" between a set of prescribed values of a functionf(ζ) to a polynomial approximation f_fit=∑_m=0^M f_m (ζ-ζ_ref)^m. This philosophy can easily be extended to multiple dimensions, finding the fits for 1 dimension at the time. Practical exploitation will require optimisation to push down the required computational effort while guaranteeing sufficient accuracy. § REDUCED FINITE LARMOR RADIUS EXPANSION FOR MAXWELLIAN PLASMAS The expressions provided in this paper allow to treat the full integro-differential wave equation. To cut down the amount of algebra in the preparatory phase preceding the actual solving of the wave equationbut still being able to treat the key physics, the expressions of the dielectric response have often been simplified by exploiting truncated Taylor series expansions of the various Bessel functions at terms of second order in the Larmor radius. Thinking about applying Budé's method beyond the so far existing 1D explorations, one may wonder whether it makes sense to include an intermediate step and adopt truncated finite Larmor radius expressions rather than accounting for the fully kinetic description valid at any temperature and for any wavelength. Whereas traditionally (see e.g. 
<cit.>) the dielectric tensor itself was expanded in terms of k_⊥ρ_L (with k_⊥ is the perpendicular wave vector component and ρ_L is the Larmor radius), the variational approach adopted here leans on the Kennel-Engelmann operator which appears twice in the expression of the dielectric response, once acting on the electric field E⃗ and once acting on the test function vector F⃗. In <cit.>, the 1D version of the wave equation in absence of poloidal field effects was presented leaning on this approach. Since it retained up to second order finite Larmor radius terms of the Taylor series expansion of the Kennel-Engelmann operator, it yielded a 12th order differential equation rather than the traditional 6th order system. In the present paper 2D application is prepared and the poloidal field effects are no longer neglected. As a consequence, the (2D) expressions presented in the Appendix of <cit.> can not directly be exploited but the corresponding Taylor series expansion in the perpendicular directions still holds. Conform with the here adopted notation (in which the sign of the cyclotron harmonic index N has been flipped w.r.t. the expressions in the original paper) the dielectric response in terms of the (+,-,//) components of the wave equation and the electric field read P_–≈ [-_+k^'2] ^*3/8Ã̃_1 [-k_+]^2+ [-i_+k']^*Ã_0 [-ik_+] + A_-1 + [-_⊥ k^'2]^* Ã_-1 + Ã_-1[ -k^2_⊥] + [-_⊥ k^'2]^*3/2Ã̃_-1 [-k^2_⊥] + [-i_-k']^*Ã_-2 [-ik_-] + [-_-k^'2]^*3/8Ã̃_-3 [-k_-^2]P_-+≈ - [[-_+k^'2]^* 1/2Ã_1 +[-_+k^'2]^*3/4Ã̃_1 [-k^2_⊥] + [-i_+k']^* Ã_0 [-ik_-] + 1/2Ã_-1 [-k_-^2] + [-_⊥ k^'2]^*3/4Ã̃_-1 [-k_-^2]]P_-//≈ i[ [-_+k^'2]^* 1/2B̃̃̃_1[-ik_+] + [-i_+k']^* B̃_0 + [-i_+k']^*B̃̃̃_̃̃̃0̃̃̃ [-k^2_⊥]+B̃_-1[-ik_-] +[ -_⊥ k^'2 ]^* B̃̃̃_-1 [-ik_-] +[-i_-k']^* 1/2B̃̃̃_-2 [-k_-^2] ] P_+-≈ - [1/2Ã_1[-k_+^2] +[-_⊥ k^'2]^*3/4Ã̃_1 [-k_+^2] + [-i_-k']^* Ã_0 [-ik_+] + [-_-k^'2] 1/2Ã_-1+ [_-k ^'2]^* 3/4Ã̃_-1 [-k^2_⊥]]P_++≈[-_+k^'2]^*3/8Ã̃_3 [-k_+^2] + [-i_+k']^*Ã_2 [-ik_+] + A_1 + [-_⊥ k^'2]^* Ã_1 + Ã_1[ -k^2_⊥] + [-_⊥ k^'2]^*3/2Ã̃_1 [-k^2_⊥] + [-i_-k']^*Ã_0 [-ik_-]+[-_-k^'2]^*3/8Ã̃_-1 [-k_-]^2P_+//≈ - i[+[-i_+k']^* 1/2B̃̃̃_2 [-k_+^2] ] +B̃_1[-ik_+] +[ -_⊥ k^'2 ]^* B̃̃̃_1 [-ik_+] + [-i_-k']^* B̃_0 + [-i_-k']^*B̃̃̃_̃̃̃0̃̃̃ [-k^2_⊥] + [-_-k^'2]^*1/2B̃̃̃_-1[-ik_-] P_//-≈ -i[ [-i_+k']^* 1/2B̃̃̃_1[-k_+^2] +B̃_0 [-ik_+]+ [-_⊥ k^'2]^*B̃̃̃_̃̃̃0̃̃̃ [-ik_+]+[-i_-k']^* B̃_-1+[ -i_-k']^* B̃̃̃_-1 [-k^2_⊥] +[-_-k^'2]^* 1/2B̃̃̃_-2 [-ik_-] ] P_//+≈ i[ +[-_+k^'2]^* 1/2B̃̃̃_2 [-ik_+] +[-i_+k']^*B̃_1 +[ -i_+k' ]^* B̃̃̃_1 [-k^2_⊥] + [-_⊥ k^'2]^*B̃̃̃_̃̃̃0̃̃̃ [-ik_-] +B̃_0[-ik_-] + [-i_-k']^* 1/2B̃̃̃_-1[-k_-^2]] P_// //≈ [-_+k^'2]^* 1/4C̃̃̃_2[-k_+^2] + [-i_+k']^* C̃_1 [-ik_+] + 2 C_0 +[-_⊥ k^'2]^* C̃_0 +C̃_0 [-k^2_⊥] + [-_⊥ k^'2]^* C̃̃̃_0 [-k^2_⊥] + [-i_-k']^* C̃_-1 [-ik_-] + [-_-k^'2]^* 1/4C̃̃̃_-2[-k_-^2] in k⃗-space and where _...k' refers to wave vector components of the test function and k_... to those of the RF electric field, and similar for _⊥ k^'2 and k^2_⊥; the plasma contribution to the wave equation is then F⃗^*.P.E⃗.This can immediately be recast in terms of the (⊥_1,⊥_2,//) componentsvia the transformation( [w_+;w_-; w_// ] )=( [1i0;1 -i0;001 ] ).( [ w_⊥,1; w_⊥,2;w_// ] ) and then further via the already discussed ℛ to the cylindrical (R,Z,φ). 
The coefficients in the above are A_N=k_0^2 ω_p^2 𝒵_N/2^3/2v_tk_//ω B_N=k_0^2 ω_p^2 (1+ξ_N 𝒵_N)/2v_tk_//ω C_N=k_0^2 ω_p^2 ξ_N(1+ξ_N 𝒵_N)/2^1/2v_tk_//ω Ã_N=ρ_L^2A_N B̃_N=ρ_LB_N C̃_N=ρ_L^2C_N Ã̃_N=ρ_L^4A_N B̃̃̃_N=ρ_L^3B_N C̃̃̃_N=ρ_L^4C_N in which 𝒵_N is the plasma dispersion function with argument ξ_N. In case the impact of the poloidal field is omitted, k_⊥,1=k_R and k_⊥,2=k_Z so it suffices to replace the above k_+ by -i ∇_+ and k_- by -i ∇_- acting to the right on E⃗ and similarly to substitute _+k' by -i _+∇ and _-k' by -i _-∇ acting to the left on F⃗ to obtain the expressions of the 2D version of the dielectric response operator adopted in <cit.>. But when the poloidal field is accounted for there is also k_R and k_Z dependence in k_// so this simple procedure no longer holds.Avoiding the complication of retaining the full expression of the Kennel-Engelmann operators in the dielectric tensor by adopting the here presented truncated Taylor series expansion constitutes an intermediate way to check if the Budé method - or its upgrade including the Budé variant of the Svidzinski approach - has potential in the here presented form accounting for the finite poloidal magnetic field, which was neglected in <cit.>. If this test proves unsuccessful, one possible alternative is to return to the (ρ,θ,φ) representation, which makes the parallel gradient an algebraic operator rather than a differential one (∂ ... /∂θ=im ... where m is the poloidal mode number of the electric field or the test function vector) but forces one to cope with the mathematical singularity at the magnetic axis. § FUTURE PLANS AND CONCLUSIONS In the present paper, the semi-analytical expressions required to solve the 2D all-FLR integro-differential wave equation reigning the wave dynamics in the ICRH domain while adopting the Budé method have been derived. Including all finite Larmor radius effects normally requires solving an integro-differential equation. Whereas the usual procedure is to rely on a Taylor series expansion of the Bessel functions appearing in the expression of the dielectric response in k⃗-space - a procedure strictly speaking limiting the application to wave modes that do not violate the smallness assumption of k_⊥ρ_L (with k_⊥ the perpendicular wave vector component and ρ_L the Larmor radius) - Budé proposed to solve that equation as a high order partial differential equation by invoking a fitting procedure allowing to catch the dependence in k⃗ space for both long and short wavelength modes but still keeping the order of the fitting polynomial (and hence the order of the differential operators when transforming back to x⃗ space) modest. Future work involves implementing the presented procedure to actually solve the 2D wave equation. Although the proposed partial differential equation is mathematically well defined, a critical assessment will be needed to make sure the fitting and solving procedure is sufficiently fast and accurate to make exploitation practical.The fact that the poloidal magnetic field needs to be accounted for makes that not only the dynamics perpendicular but also that parallel to the static magnetic field needs to be accounted for in the fitting procedure, making the parallel gradient an actual differential operator. 
This may require polynomial fits of too high order to be practical, in particular when the argument of the plasma dispersion functions passes zero or infinity.Provided the polynomial order of the fits in k⃗ space can be kept sufficiently modest to stay practical, the Budé method allows to use off-the-shelf finite element solvers such as MFEM <cit.>, exploiting grid refinement techniques and exploring the benefits of higher order polynomial representations of the base functions exploited in finite element equation solving schemes.30 BudeR. Budé, Accelerating Simulations of Electromagnetic Waves in Hot, Magnetized Fusion Plasmas, Master Thesis Technische Universiteit Eindhoven, 2019; Budé et al, Plasma Phys. Control. Fusion 63 (2021) 035014 AllFLRBudeDVE D. Van Eester & E. Lerche, Nucl. Fusion 61 (2021) 016024 Fukuyama A. Fukuyama et al., Global waves in hot plasmas, Computer Physics Reports 4 (1986) 137-181, https://doi.org/10.1016/0167-7977(86)90028-6. LamalleThesis P.U. Lamalle, Nonlocal theoretical generalization and tridimensional numerical study of the coupling of an ICRH antenna to a tokamak plasma, PhD Thesis, LPP-ERM/KMS Report 101 Université de Mons (1994) BrambillaM. Brambilla, Plasma Phys. Contr. Fusion 41 (1999) 1 DumontEVE R.J. Dumont, Variational approach to radiofrequency waves in magnetic fusion devices, Nucl. Fusion 49(2009) 075033AORSA E.F. Jaeger et al., Phys. Plasmas 8 (2001) 1573 TOMCAT D. Van Eester and R. Koch, Plasma Phys. Contr. Fusion 40 (1998) 1949; extended version: D. Van Eester and R. Koch, LPP-KMS/ERM Report 109 (1997) NET contract 95-397 TOMCAT-U1 D. Van Eester and E.A. Lerche, Plasma Phys. Control. Fusion 55 (2013) 055008 COMSOLCOMSOL software package, https://www.comsol.com FENICSFENICS project, https://fenicsproject.org MFEMMFEM software package, https://mfem.org FreeFEMFreeFEM software package, https://freefem.org Swanson D.G. Swanson, Plasma Waves , Academic Press, San Diego (2012) Fuchs-et-al V. Fuchs et al., Physics of Fluids (1958-1988) 24, 1251 (1981); doi: 10.1063/1.863528 Stix T.H. Stix, Waves in Plasmas, AIP, New-York, (1992) KochPHD R. Koch, Etudes des descriptions des plasmas linéaires, inhomogènes et Maxwelliens issues de l'équation de Vlasov. Application à la propagation transverse dans une colonne de plasma non-uniforme, magnétisé, décrit à l'ordre un des températures électronique et ionique, PhD thesis presented at Université de l'Etat de Mons (1977) VanEester_raytracing_allFLRD. Van Eester, Y. Louis & R. Koch, Plasma Phys. Control. Fusion 35 (1993) 1189-1206. KennelEngelmann1966 C.F. Kennel & F. Engelmann Phys. Fluids 9 (1966) 2377 RomeroScharer H. Romero and J. Scharer, Nucl. Fusion 27 (1987) 363McVey B.D. McVey et al., Phys. Rev. Lett. 55 (1985) 507 Kaufman A.N. Kaufman, Phys. Fluids 15 (1972) 1093 Ichimaru1973S. Ichimaru, Basic Principles of Plasma Physics. A statistical approach, W.A. Benjamin Inc., Reading Massachusetts (1973) LagrangeMultipliers I. Babuska, Numerische Mathematik 20 (1973) 179-192 LamalleBIGpaper P.U. Lamalle, Plasma Phys. Control. Fusion 39 (1997) 1409-1460 VanEesterPolBounceD. Van Eester, J. Plasma Physics 60 (1998) 627-671 VanEesterPolBounce2 D. Van Eester, J. Plasma Physics 65 (2001) 407-452 FriedConte B.D. Fried and S.D Conte, The Plasma Dispersion Function, Academic Press, New York and London (1961)DVE_arbitraryFo D. Van Eester, Plasma Phys. Control. Fusion 35 (1993) 1309-1319 Colestock P.L. Colestock and R.J. Kashuba, Nucl. Fusion 23 (1983) 763Svidzinski V.A. Svidzinski et al., Phys. 
Plasmas 23 (2016) 112101; doi: 10.1063/1.4966638 VanEester_2DBude_LabReport D. Van Eester & E.A. Lerche (2023) Semi-analytical derivation of the 2D all-FLR ICRH wave equation as a high-order partial differential equation: extended version, LPP-ERM/KMS Laboratory report # 148, BrusselsAbramowitz M. Abramowitz and A. Stegun, Handbook of Mathematical Functions, Mc Graw Hill, USA (1960)
http://arxiv.org/abs/2310.18214v1
{ "authors": [ "Dirk Van Eester", "Ernesto Lerche" ], "categories": [ "physics.plasm-ph" ], "primary_category": "physics.plasm-ph", "published": "20231027154128", "title": "Semi-analytical derivation of the 2D all-FLR ICRH wave equation as a high-order partial differential equation" }
plain theoremTheorem lemma[theorem]Lemma corollary[theorem]Corollary proposition[theorem]η_roposition conjecture[theorem]Conjecturedefinition definition[theorem]remark *remarkRemark exampleExample [email protected]@iitj.ac.in^1Indian Institute of Technology Jodhpur, Rajasthan-342037, India ^2Central University of Punjab Bathinda, Punjab-151001, IndiaA classical decision tree is completely based on splitting measures, which utilize the occurrence of random events in correspondence to its class labels in order to optimally segregate datasets. However, the splitting measures are based on greedy strategy, which leads to construction of an imbalanced tree and hence decreases the prediction accuracy of the classical decision tree algorithm. An intriguing approach is to utilize the foundational aspects of quantum computing for enhancing decision tree algorithm. Therefore, in this work, we propose to use fidelity as a quantum splitting criterion to construct an efficient and balanced quantum decision tree. For this, we construct a quantum state using the occurrence of random events in a feature and its corresponding class. The quantum state is further utilized to compute fidelity for determining the splitting attribute among all features. Using numerical analysis, our results clearly demonstrate that the proposed algorithm cooperatively ensures the construction of a balanced tree. We further compared the efficiency of our proposed quantum splitting criterion to different classical splitting criteria on balanced and imbalanced datasets. Our simulation results show that the proposed splitting criterion exceeds all classical splitting criteria for all possible evaluation metrics.Quantum-inspired attribute selection algorithm: A Fidelity-based Quantum Decision Tree Atul Kumar^1 January 14, 2024 ======================================================================================§ INTRODUCTIONThe decision tree algorithm emulates human behavior to make a conclusion out of the provided information. It constructs an upside-down tree-like structure by placing the crucial attribute at the root node and recursively reaches at leaf node to draw a conclusive remark out of the provided dataset. The pivotal point for constructing a decision tree is its splitting criterion, which optimally segregates the whole dataset. Conventionally, there are two main splitting criteria- information gain and gini index. Both of these splitting criteria select a feature using their respective measures to divide the whole dataset into sub-datasets for developing a tree-like structure. Information gain uses Shannon's entropy to extract the stored information between a feature and the class attributes and recursively divides the dataset on a feature corresponding to the highest gain value <cit.>. Whereas, gini index uses conditional probability of a misclassified instance and therefore chooses the lowest value to split a dataset <cit.>. Recently, the applications of quantum computing are showing its importance in diverse domains including machine learning. The fundamental laws of quantum computation, entanglement, and quantum parallelism are observed to be evolving sub-routines of machine learning algorithms. The fusion of classical machine learning and quantum computing has shown significant benefits in supervised and unsupervised algorithms such as quantum support vector machines <cit.>, quantum k-nearest algorithms <cit.>, k-means clustering algorithms <cit.>, and also for quantum neural networks <cit.>. 
In a nutshell, quantum computation assists machine learning by refining space and time complexities. In the context of quantum decision trees, Farhi and Gutmann <cit.> proposed formulating computational problems as a decision tree that can be penetrated by evolving a quantum state under a time-independent Hamiltonian through the nodes of the tree until the n-th node, i.e., the solution, is reached. They further claimed that decision trees can be penetrated exponentially faster by quantum evolution than by a classical random walk. Along similar lines, Buhrman and de Wolf <cit.> proposed that a complex problem can be subdivided into multiple sub-problems, where the i-th query depends on the previous queries. This process can be thought of as a decision tree in which the complexity of a problem is determined by the depth of the tree. They computed a Boolean function by making queries such that the computational complexity of the function is given by the minimum number of queries required on any input. They also evaluated this complexity for deterministic, randomized, and quantum computation, however, finding no benefit of one approach over the other. These two studies therefore used quantum computation to solve complex problems that can be subdivided into the form of a decision tree. Later, Lu and Braunstein proposed the use of von Neumann entropy as a splitting criterion for a quantum decision tree <cit.>. For this, they constructed a quantum state whose amplitude values correspond to the feature values and similarly generated a quantum state for the class attribute; however, the attribute selection criterion requires a much more involved analysis <cit.>. Heese et al. <cit.> proposed using a classical binary decision tree to construct a quantum circuit with the help of a genetic algorithm. The constructed quantum circuit, also called a Q-tree, determines the class label of query data by measurement of the circuit. However, it requires a number of qubits growing with the depth, the number of features, and the number of labels of the classical decision tree, as well as numerous measurements. Khadiev et al. <cit.> proposed a quantum version of the classical C5.0 decision tree algorithm. As per their proposal, searching for a maximum or minimum value among the features is the only step whose time complexity can be further improved, and they therefore replaced the search step with Dürr and Høyer's minimum-finding algorithm <cit.> to refine the time complexity. Further, Khadiev and Safina <cit.> suggested using an amplitude amplification algorithm based on Grover's search algorithm <cit.> to predict the class label of an unknown data point from an ensemble of machine learning classifiers. Although these algorithms propose different approaches, there is no formulation of a quantum decision tree that considers the evolution of a balanced tree starting from a classical dataset. In this work, we propose a quantum splitting criterion based on fidelity and compare its efficacy with the classical information gain and gini index criteria.§ MOTIVATION Compared to other classical supervised algorithms, the decision tree is widely used for classification problems because of its ability to work with different measurement scales, so that it can classify datasets containing both discrete and continuous features <cit.>. It efficiently captures the correlation between a feature and the class without any prior statistical calculations. Moreover, the decision tree algorithm is interpretable and greedy in nature.
Considering these advantages, it has applications in anomaly detection <cit.>, feature engineering <cit.>, and priority prediction, and it is used as a basic building block for ensembling techniques <cit.>. However, the greedy nature of the decision tree algorithm also leads to selecting a feature that ends up creating an imbalanced tree. An intuitive alternative is to select the attribute with the maximum correlation between the occurrence of random events of a feature and the corresponding class labels. In this scenario, fundamentals of quantum computing, such as the ability to examine correlations among states and its probabilistic nature, can assist in achieving a balanced and efficient decision tree algorithm. Although there are a few algorithms in the context of quantum decision trees, the authors either proposed approaching a complex problem by formulating it as a classical decision tree and traversing it using quantum computing, or refined the time complexity of classical decision tree algorithms using quantum algorithms. Here, we propose an efficient algorithm to construct and analyze a quantum decision tree for balanced as well as imbalanced datasets. For this, we propose to use fidelity as a measure for choosing a feature that optimally splits a dataset. In order to analyze our algorithm, we first define a two-qubit quantum state from the classical dataset using correlations between the occurrence of class labels and the corresponding random events of a feature. The constructed quantum state is further utilized to compute the fidelity between the feature and class subsystems. This procedure for choosing a splitting feature benefits from amplitude embedding, which we find assists in constructing a comparatively balanced tree.

§ QUANTUM FIDELITY DECISION TREE

In this section, we discuss the generation of a quantum state from a classical dataset and examine the quantum splitting criterion for constructing a decision tree. In order to facilitate the discussion, we first briefly go through the required terminology. In general, a machine-learning dataset 𝔻 consists of a set of features X and corresponding class labels 𝕐 such that 𝔻 = {X,𝕐}, where the cardinality of the feature set defines the dimension of the dataset. Further, a dataset contains n instances represented as {(X_1,𝕐_1),(X_2,𝕐_2), ..., (X_i,𝕐_j), ..., (X_n,𝕐_n)}, where X_i is the i-th instance of the features and 𝕐_j is the corresponding class label. In the following subsections, we use these notations to describe the quantum splitting criterion and the construction of a quantum decision tree.

§.§ Feature Embedding

In general, all splitting criteria of a classical decision tree are functions of the probability distribution of random events for a feature X_i. The features in a dataset are segregated into two or more classes, leading to a probability distribution over the different classes, which can be further used to obtain the information gain <cit.>. Although there are different ways to embed classical data into a quantum state, here we emphasize amplitude embedding as the preferred embedding technique. In general, amplitude embedding is used with respect to feature values; however, we use a modified amplitude embedding method that encodes the classical probability distribution into the amplitudes of a quantum state. For this, we construct a quantum state from the combined probability distribution of a feature and the associated class labels.
Algebraically, such a state can be represented as

|ψ_i,j⟩ = ∑_i,j p_j|i |X_i 𝕐_j⟩,

where p_j|i is the probability of class 𝕐 given the occurrence of a random event for feature X.

§.§ Quantum Splitting Criterion

For a classical decision tree, classical information gain and the Gini index are used extensively as splitting criteria <cit.>. In order to represent a composite quantum system and to understand the properties and dynamics of its subsystems, the density operator can be used as an efficient mathematical tool instead of a wave-function representation. For example, if the state of a system is specified by |ψ_i⟩, where i indexes the data point occurring with probability p_i in an ensemble {p_i, |ψ_i⟩}, then the corresponding density operator can be expressed as

ρ = ∑_i p_i |ψ_i⟩⟨ψ_i|,

where ρ represents the density operator. Therefore, the density operator of the composite system for feature X_i and corresponding class label 𝕐_j can be further expressed using Eq. <ref> as

ρ_X_i𝕐_j = ∑_i,j p_i,j |ψ_i,j⟩⟨ψ_i,j|.

For the splitting criterion, we use the concept of fidelity with the reduced density operators ρ_X and ρ_𝕐 of the individual subsystems, i.e., the feature X and the class label 𝕐, respectively. The proposed splitting criterion can therefore be written as

F(ρ_X,ρ_𝕐) = tr(√(√(ρ_X) ρ_𝕐 √(ρ_X))).

The fidelity in Eq. <ref> is bounded by 0 ≤ F(ρ_X,ρ_𝕐) ≤ 1, where zero fidelity corresponds to the maximum distance between the two states, i.e., dissimilarity between a feature and the associated class label. Similarly, if the fidelity is unity, it represents perfect overlap between the two states. Therefore, we calculate the fidelity for each feature in a dataset 𝔻 and then select the feature with the maximum fidelity value for splitting the dataset. Algorithm <ref> below depicts the quantum node splitting criterion. As per Algorithm <ref>, we first compute the probability distribution of the class label conditioned on each random event in a feature. Using this probability distribution for each event, we construct a quantum state to compute ρ_X𝕐, ρ_X, and ρ_𝕐. For the splitting criterion, our algorithm uses the reduced density operators ρ_X and ρ_𝕐 to evaluate the fidelity for each feature. Finally, we utilize Grover's search algorithm to extract the maximum fidelity value among all evaluated values. The feature corresponding to the maximum fidelity is used for splitting the dataset.

§.§ Constructing the Quantum-Classical Decision Tree

We now proceed to construct the full decision tree, which is based on the quantum splitting criterion. For a dataset 𝔻 containing features X_i^k and labels 𝕐_j, we embed the amplitude values into the quantum Hilbert space using the feature mapping. Further, using Algorithm <ref>, we create a node by analysing the following three conditions:

* if the attributes in a dataset are empty, then label the root node (R) with the majority class label in the dataset;
* if all instances belong to the same class label, then assign that class label to the node;
* if the instances are empty, then the label of the node is decided by the label of previously seen examples.

If none of the above conditions is satisfied, we compute the fidelity for each feature using Algorithm <ref> and choose the attribute with the highest value. In the above algorithm, l is the leaf node to which we assign a class label 𝕐 as per the mentioned conditions, and R_I is a descendant node of R.
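To make the criterion concrete, the short sketch below computes the fidelity score of each binary feature of a toy dataset: it builds the two-qubit state from the empirical (feature value, class label) occurrences, traces out each subsystem, and evaluates Eq. <ref>. The NumPy/SciPy implementation, the function names, and the toy data (chosen to be consistent with the worked example in the next section) are illustrative assumptions rather than the authors' code.

import numpy as np
from scipy.linalg import sqrtm

def joint_state(x, y):
    # Two-qubit state with amplitudes proportional to the occurrence counts of
    # (feature value, class label); basis ordering |00>, |01>, |10>, |11>.
    counts = np.zeros(4)
    for xi, yi in zip(x, y):
        counts[2 * xi + yi] += 1
    return counts / np.linalg.norm(counts)

def reduced_density_matrices(psi):
    rho = np.outer(psi, psi).reshape(2, 2, 2, 2)   # indices (x, y, x', y')
    rho_x = np.trace(rho, axis1=1, axis2=3)        # trace out the class qubit
    rho_y = np.trace(rho, axis1=0, axis2=2)        # trace out the feature qubit
    return rho_x, rho_y

def fidelity(rho_x, rho_y):
    # F = tr sqrt( sqrt(rho_x) rho_y sqrt(rho_x) ); sqrtm may warn for
    # (near-)singular density matrices, which is harmless here.
    s = sqrtm(rho_x)
    return float(np.real(np.trace(sqrtm(s @ rho_y @ s))))

def best_split(X, y):
    scores = [fidelity(*reduced_density_matrices(joint_state(X[:, f], y)))
              for f in range(X.shape[1])]
    return int(np.argmax(scores)), scores

X = np.array([[0, 0, 0], [0, 1, 1], [1, 1, 1], [1, 0, 1]])  # toy features X1, X2, X3
y = np.array([0, 1, 0, 1])                                  # toy class labels
print(best_split(X, y))  # features X1/X2 attain the maximal fidelity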
§ NUMERICAL ANALYSIS

In order to demonstrate the effectiveness of the proposed fidelity-based quantum decision tree algorithm, we first analyse our numerical results for the dataset shown in the table below. For the dataset in Table <ref>, we first evaluate the three conditions specified in Algorithm <ref>. Since none of the three conditions is satisfied, we proceed to select an attribute for splitting. Table <ref> clearly shows that the random events for all features are either 0 or 1, and similarly the class labels are either 0 or 1. Therefore, using Table <ref>, we express a two-qubit quantum state for each feature and the corresponding class label as

|ψ_X1⟩ = 1/2|00⟩ + 1/2|01⟩ + 1/2|10⟩ + 1/2|11⟩
|ψ_X2⟩ = 1/2|00⟩ + 1/2|01⟩ + 1/2|10⟩ + 1/2|11⟩
|ψ_X3⟩ = 1/√(6){|00⟩ + |10⟩ + 2 |11⟩}

Eq. <ref> shows that the occurrences of the events in states |ψ_X1⟩ and |ψ_X2⟩ are equally probable. Alternately, one can visualize that splitting with respect to feature X1 or X2 will result in a balanced decision tree. For constructing the decision tree, we now compute the two-qubit density operators using Eq. <ref> and then evaluate the fidelity with ρ_X and ρ_𝕐, the reduced density operators, as described in Algorithm <ref>. Table <ref> reports the fidelity and the classical information gain corresponding to each feature. For the splitting criterion, the fidelities corresponding to features X1 and X2 are maximal; therefore either feature can be used as the splitting node, yielding a quantum decision tree. For the classical information gain, the highest value is achieved by X3, and therefore feature X3 is used as the splitting node of the classical decision tree. Figs. <ref> and <ref> represent the constructed quantum and classical decision trees for the selected root nodes, respectively. The process repeats recursively until it meets any of the three conditions specified in Algorithm <ref>. Hence, Figs. <ref> and <ref> demonstrate the full-grown quantum and classical decision trees, respectively. As discussed earlier, the quantum decision tree split at X1 or X2 is a completely balanced tree, whereas the classical decision tree split at X3, due to its greedy nature, is an imbalanced tree. To analyze the importance and effectiveness of the proposed algorithm, we further evaluate our criterion using different publicly available datasets in the next section.

§ SIMULATION RESULTS

In this section, we proceed to analyze the effectiveness of fidelity as a quantum splitting criterion. For this, we use the datasets indicated in Table <ref> <cit.>. Table <ref> describes the datasets in terms of dimension (number of features) and size. Haberman's cancer survival dataset <cit.> analyzes whether a patient will survive for 5 years or die within 5 years based on age, number of nodes, and year of operation performed. Similarly, the Wisconsin breast cancer dataset <cit.> predicts whether breast tissue is benign or malignant based on nine different features, i.e., clump thickness, uniformity of cell size, uniformity of cell shape, marginal adhesion, single epithelial cell size, bare nuclei, bland chromatin, normal nucleoli, and mitoses. The Wheat seed dataset <cit.> classifies Kama and Canadian seeds based on area, perimeter, compactness, length of kernel, width of kernel, asymmetry coefficient, and length of the kernel groove.
Out of the three datasets, the Wisconsin breast cancer and Haberman's cancer survival datasets are highly imbalanced between the two classes, with ratios of 65.52 vs 34.48 and 73.53 vs 26.47, respectively. On the other hand, the Wheat seed dataset is completely balanced.

Accuracy = (TP+TN)/(TP+FP+FN+TN)
Precision (macro avg) = (TP_class0 + TP_class1)/(TP_class0 + TP_class1 + FP_class0 + FP_class1)
Recall (macro avg) = (TP_class0 + TP_class1)/(TP_class0 + TP_class1 + FN_class0 + FN_class1)
F1-Score = 2 × (Precision × Recall)/(Precision + Recall)

In order to evaluate the performance of the proposed quantum splitting criteria (fidelity and quantum information gain) against the classical splitting criteria (classical information gain and Gini index), we train on 90% of the overall dataset and test on the remaining 10%. The algorithms are assessed in terms of four important metrics, namely accuracy, precision, recall, and F1-score, as represented in Eqs. <ref>, <ref>, <ref>, and <ref>, respectively, where TP stands for True Positive, TN for True Negative, FP for False Positive, and FN for False Negative. Since the Wisconsin breast cancer and Haberman's cancer survival datasets are imbalanced, the precision, recall, and F1-score metrics are preferred over accuracy for these two datasets <cit.>. The efficiency of the proposed fidelity-based quantum splitting criterion is evaluated by comparing it to the classical information gain, Gini index, and quantum information gain splitting criteria. Fig. <ref> demonstrates the advantages of using the proposed quantum splitting criteria based on fidelity and quantum information gain (QIG). Between the Gini index and classical information gain (CIG), the Gini index is found to be more efficient for Haberman's cancer survival dataset in terms of accuracy; however, for the precision and recall metrics, classical information gain performs better than the Gini index. For the Wisconsin and Wheat seed datasets, the efficiency of the classical splitting criteria is the same. For the quantum splitting criteria, the efficiency of the fidelity and quantum information gain based criteria is exactly the same for the Wisconsin breast cancer and Haberman's cancer survival datasets. Interestingly, the performance of the quantum splitting criteria is significantly better than that of the classical splitting criteria in terms of all evaluation metrics for all datasets. For the Wheat seed dataset, our results show the fidelity-based splitting criterion to be significantly better than the quantum information gain splitting criterion for all metrics.

Specificity = TN/(TN+FP)
Positive Predictive Value (PPV) = TP/(TP+FP)
Negative Predictive Value (NPV) = TN/(TN+FN)

Considering the importance of true positive and true negative results in medical datasets, these are also analyzed using additional metrics, namely specificity, positive predictive value, and negative predictive value, as represented in Eqs. <ref>, <ref>, and <ref>, respectively. Here, specificity represents the fraction of true negative results out of all negative instances; PPV represents true positive results out of all predicted positives; and NPV represents true negative outcomes out of all predicted negatives. Table <ref> shows the effect of different splitting criteria on the imbalanced datasets. Clearly, for the Wisconsin breast cancer dataset, the quantum splitting criteria show the highest true negative predictions compared to the classical splitting criteria, suggesting that the model can classify unknown data with a high degree of accuracy.
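For completeness, the following sketch shows how these quantities can be computed from per-class confusion-matrix counts; the function name and the example counts are hypothetical and not taken from the reported experiments.

def evaluate(tp0, fp0, fn0, tn0, tp1, fp1, fn1):
    accuracy = (tp0 + tn0) / (tp0 + fp0 + fn0 + tn0)     # accuracy as defined above
    precision = (tp0 + tp1) / (tp0 + tp1 + fp0 + fp1)    # macro-averaged precision as defined above
    recall = (tp0 + tp1) / (tp0 + tp1 + fn0 + fn1)       # macro-averaged recall as defined above
    f1 = 2 * precision * recall / (precision + recall)
    specificity = tn0 / (tn0 + fp0)                      # true negatives among actual negatives
    ppv = tp0 / (tp0 + fp0)                              # positive predictive value
    npv = tn0 / (tn0 + fn0)                              # negative predictive value
    return dict(accuracy=accuracy, precision=precision, recall=recall,
                f1=f1, specificity=specificity, ppv=ppv, npv=npv)

# Hypothetical counts for a binary, imbalanced test split (class 0 taken as positive);
# with two classes, class 1's counts mirror class 0's (tp1 = tn0, fp1 = fn0, fn1 = fp0).
print(evaluate(tp0=18, fp0=4, fn0=3, tn0=45, tp1=45, fp1=3, fn1=4))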
Surprisingly, for the Haberman's cancer survival dataset, the fidelity and quantum information gain splitting criteria significantly exceed the classical splitting criteria in specificity, PPV, and NPV, which shows that our models can efficiently and accurately classify the data into the two classes. For the classical splitting criteria, the Gini index leads to a better result in specificity in comparison to classical information gain.

§ CONCLUSION

In this work, we have proposed and analyzed a quantum splitting criterion, namely fidelity, to construct a decision tree. For the proposed criterion, we have efficiently utilized the probability distribution from the classical dataset to generate a quantum state, which is further used to compute the fidelity for each feature. The numerical analysis showed that the fidelity splitting criterion selects the feature with a uniform probability distribution. This further assists in obtaining a balanced and more accurate decision tree. Our results demonstrate that the proposed fidelity-based criterion provides a significant improvement in terms of all evaluation metrics even for imbalanced datasets. Furthermore, we also used quantum information gain as a criterion and achieved significantly better results in comparison to the classical algorithms for all datasets. For a comprehensive analysis, we have examined the efficiency of all quantum and classical splitting methods on precision and recall values, which play a crucial role in medical datasets. The obtained results clearly demonstrate the advantages of quantum splitting criteria over classical splitting criteria.
http://arxiv.org/abs/2310.18243v1
{ "authors": [ "Diksha Sharma", "Parvinder Singh", "Atul Kumar" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20231027162942", "title": "Quantum-inspired attribute selection algorithm: A Fidelity-based Quantum Decision Tree" }
Learning Optimal Classification Trees Robust to Distribution Shifts

Nathan Justin, Center for Artificial Intelligence in Society, University of Southern California, Los Angeles, CA 90089, USA
Sina Aghaei, Center for Artificial Intelligence in Society, University of Southern California, Los Angeles, CA 90089, USA
Andrés Gómez, Department of Industrial and Systems Engineering, University of Southern California, Los Angeles, CA 90089, USA
Phebe Vayanos, Center for Artificial Intelligence in Society, University of Southern California, Los Angeles, CA 90089, USA

We consider the problem of learning classification trees that are robust to distribution shifts between training and testing/deployment data. This problem arises frequently in high-stakes settings such as public health and social work, where data is often collected using self-reported surveys which are highly sensitive to, e.g., the framing of the questions, the time when and place where the survey is conducted, and the level of comfort the interviewee has in sharing information with the interviewer. We propose a method for learning optimal robust classification trees based on mixed-integer robust optimization technology. In particular, we demonstrate that the problem of learning an optimal robust tree can be cast as a single-stage mixed-integer robust optimization problem with a highly nonlinear and discontinuous objective. We reformulate this problem equivalently as a two-stage linear robust optimization problem for which we devise a tailored solution procedure based on constraint generation. We evaluate the performance of our approach on numerous publicly available datasets, and compare the performance to a regularized, non-robust optimal tree. We show an increase of up to 12.48% in worst-case accuracy and of up to 4.85% in average-case accuracy across several datasets and distribution shifts from using our robust solution in comparison to the non-robust one.

Key words: robust optimization, distribution shift, robust machine learning, mixed-integer optimization, decision trees.

§ INTRODUCTION

Machine learning techniques are increasingly being used to support decision-making in high-stakes domains, with potentially significant societal impacts. In such settings, the ability to gauge a machine learning model's trustworthiness is necessary to obtain stakeholder buy-in for deployment. To this end, simple and interpretable models are often preferred over black-box ones <cit.>. Some of the most interpretable machine learning models are classification trees, which are easily visualized and simple to deploy. A classification tree is a model which takes the form of a binary tree. At each branching node, a test is performed, which asks if a feature exceeds a specified threshold value. Given a data sample, if the answer is positive (resp. negative) the sample is directed to the right (resp. left) descendant. Thus, each data sample, based on its features, follows a path from the root of the tree to a leaf node. At each leaf, a label is predicted. A data sample is correctly classified if and only if its label matches the label predicted at the leaf where it lands <cit.>. Training a decision tree consists in deciding which tests to perform at each branching node and which labels to predict at each leaf node, based on some training dataset.
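To fix ideas, the snippet below sketches this prediction process on a hypothetical trained tree: at each branching node one integer feature is compared against a threshold, the sample moves right when the test passes and left otherwise, and the label of the reached leaf is returned. The dictionary encoding and names are ours, purely for illustration.

def classify(tree, x):
    # Follow a sample from the root to a leaf and return the predicted label.
    node = tree
    while "label" not in node:                    # descend until a prediction node
        f, theta = node["feature"], node["threshold"]
        node = node["right"] if x[f] >= theta + 1 else node["left"]
    return node["label"]

# Hypothetical depth-2 tree: branch on feature 0 at the root, then on feature 1 on the left.
toy_tree = {"feature": 0, "threshold": 2,
            "left": {"feature": 1, "threshold": 0,
                     "left": {"label": 0}, "right": {"label": 1}},
            "right": {"label": 1}}
print(classify(toy_tree, [1, 3]))  # root test fails (1 <= 2), second test passes -> label 1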
A common criterion to evaluate the performance of a decision tree is accuracy, defined as the percentage of correctly classified datapoints (although other metrics are also often employed). In high-stakes settings (e.g., when training a tree to identify those at risk of suicidal ideation, or the most vulnerable among those experiencing homelessness), training to optimality is often critical to ensure that the best model is learned. As long as the data used to train the decision tree comes from the same distribution as that in deployment, then as the size of the training data grows, a tree trained to optimality in this fashion will likely perform as expected in deployment. Unfortunately, distribution shifts, where the distribution of the training and testing/deployment data are different, are common in real-world applications, falling in two broad categories. The first kind of shift corresponds to a change in the likelihood of each sample from the population and occurs when there is sampling bias in the training and/or testing population <cit.>, e.g., when using data from Amazon Mechanical Turk[https://www.mturk.com] to train a model that is then deployed on the entire population. The second kind of shift, which we focus on in the present paper, corresponds to a change in individual entries of the data, often manifesting as a shift in the covariates. This kind of shift occurs when the data collection mechanism changes between training and deployment, or when the environment changes the distribution of data over time, among other reasons <cit.>. Shifts that change the data entries are increasingly prevalent in modern machine learning applications, as full control of data collection in both training and deployment is rare. For example, in domains where data is collected from self-reported surveys, changing the way a question is phrased, the location where the survey is conducted or even the person collecting the information, may result in a shift in responses. Classification trees, similarly to most machine learning models, are susceptible to distribution shifts, implying that their performance may deteriorate in deployment compared to what was expected from training <cit.>. Thus, methods accounting for potential shifts during the training phase are needed. In this paper, we propose to learn classification trees that are robust to distribution shifts, in the sense that they will achieve the best performance under worst-case/adversarial shifts of the training data, resulting in controlled, reliable performance in deployment. In social sciences applications where interpretability, robustness, and optimality are often required, the available data frequently come from surveys and manually recorded observations, manifesting as integer and/or categorical data (e.g., gender and race). We thus focus our methodology of learning robust classification trees on the case of covariate shifts on integer and categorical data, a setting of special importance in high-stakes domains that also poses significant modeling and computational challenges.

§.§ Problem Statement

We now formally define the problem of training a classification tree robust to distribution shifts, which is the focus of this paper. Let {𝐱^i,y^i}_i ∈ℐ be the training set, where ℐ is the index set for the training samples and any categorical features are one-hot encoded in each 𝐱^i. The vector 𝐱^i ∈ℤ^|ℱ| collects the covariates of datapoint i, where the elements of ℱ index the features, and y^i ∈𝒦 is a label drawn from a finite set 𝒦.
We let x^i_f denote the value of covariate f ∈ℱ for sample i ∈ℐ and, with a slight abuse of notation, denote by 𝐱 the vector concatenation of 𝐱^i over all samples i ∈ℐ. Accordingly, we let 𝐲 := (y^1,y^2,…,y^|ℐ|). In the presence of distribution shifts, the training set does not reflect the data at deployment. In most instances, the exact shift in the data at deployment is unknown. Thus, it is necessary to account for the performance on several perturbed training sets that reflect some potential distributions of the testing set. For a sample i ∈ℐ, let ξ^i ∈ℤ^|ℱ| represent a possible perturbation of the covariates 𝐱^i, such that {𝐱^i + ξ^i, y^i }_i ∈ℐ is a potential perturbed dataset. With a slight abuse of notation, let ξ be the vector concatenation of ξ^i over all samples i ∈ℐ. Since ξ is unknown in training, we say that ξ lies in a set of all possible perturbations Ξ – often termed an uncertainty set in the robust optimization literature <cit.>. Let Π_d be the set of all decision trees with depth at most d, where d is a nonnegative integer. The depth d is usually chosen to be small, e.g., less than four or so. Each element π∈Π_d is a binary decision tree classifier π : ℤ^|ℱ|→𝒦. The problem of training an optimal robust classification tree consists in finding a binary tree that correctly classifies the most samples under worst-case realizations of the perturbed data. Mathematically, it is expressible as

(𝒫_Ξ)   max_π∈Π_d min_ξ∈Ξ ∑_i ∈ℐ 𝕀[ π(𝐱^i + ξ^i) = y^i ].

Note that since the features are integer-valued, Ξ is a discrete set. Also, the summation of indicator functions in (<ref>) is nonlinear and discontinuous. These characteristics make problem (<ref>) difficult to solve. In this paper, we work to solve problem (<ref>) and address these difficulties.

§.§ Background and Related Work

Our paper relates closely to several literature streams, which we review in the following.

§.§.§ MIO-Based Decision Trees

Traditionally, classification trees are built using heuristic approaches since the problem of building optimal classification trees is 𝒩𝒫-hard <cit.>. Recently, motivated by the fact that heuristic approaches often return suboptimal solutions, mathematical optimization techniques such as mixed-integer optimization (MIO) have been developed for training optimal trees. The first such method was proposed by <cit.>. To address the long run times required for learning optimal decision trees on large datasets, <cit.> propose a binary linear programming method whose size scales with the logarithm of the number of training samples. <cit.> propose a different MIO method with better relaxation quality than the aforementioned formulations, resulting in improved computational performance. <cit.> present a logic-based Benders decomposition method for building optimal trees with multivariate splits. MIO approaches for constructing decision trees allow for several extensions. Building on the model by <cit.>, <cit.> propose an MIO formulation to build optimal decision trees that minimize a “predict-then-optimize” loss rather than a classification loss to decrease model complexity while improving decision quality. <cit.> examines the setting where the covariates to a given tree ensemble are decision variables, formulating an MIO problem to find the covariates that maximize the ensemble's predicted value. <cit.> and <cit.> use MIO to create optimal and fair decision trees.
<cit.> and <cit.> solve the problem of learning prescriptive trees via MIO, with the latter approach applying also to the case of observational data.§.§.§ Machine Learning Robust to Distribution Shifts Under the setting where there is uncertainty in the parameters of an optimization problem, robust optimization has been used to generate solutions immunized to uncertainty by hedging against adversarial realizations of the parameters <cit.>. Using robust optimization, several researchers have proposed models and algorithms for machine learning that are robust to distribution shifts in the data entries.<cit.> provide a learning framework under distributional perturbations defined by the Wasserstein metric, applying their method to distributionally robust linear regression, semi-supervised learning, and reinforcement learning. More closely related to this paper, there are works on building robust models for a variety of classification tasks. <cit.> provide a distributionally robust framework with proven convergence guarantees for smooth loss functions, and apply their framework to neural networks. <cit.> create a robust optimization framework for non-parametric models that is trained through a stochastic gradient method, and also apply their method to create artificial neural networks that are robust against adversarial perturbations of the data. Both robust and distributionally robust optimization approaches have been used to train robust support vector machines <cit.> and robust logistic regression models <cit.>.§.§.§ Robust Classification Trees Several authors have proposed methods for learning robust classification trees. <cit.> and <cit.> propose a modification of the standard greedy methods to train decision trees that incorporate robustness. More closely related to our work, <cit.> utilize MIO to learn robust classification trees. However, their approach does not solve problem (<ref>), but instead tackles a simpler (conservative) proxy. We defer to section <ref> for comparisons among these methods from the literature and the method proposed in this paper.§.§.§ Robust Optimization with Discrete Uncertainty SetsThe approaches discussed in sections <ref> and <ref> assume that the covariates, and therefore the data perturbations, are real-valued. However, real-valued perturbations are usually unrealistic in the high-stakes settings that motivate our work (e.g., in the social sciences and applications using administrative datasets). Additionally, there are very few works in the literature that study problems affected by discrete perturbations. In the case of real-valued perturbations, problem (<ref>) is typically handled by dualizing the inner minimization problem. With some exceptions <cit.>, such an approach is usually not possible with non-convex uncertainty sets (such as ours) due to the lack of strong duality. A possible approach is to approximate the discrete set with a convex relaxation <cit.> but such methods typically yield poor, conservative solutions <cit.>. In general, solving robust optimization problems with discrete uncertainty sets often requires iterative calls to a pessimization oracle and refinement of a candidate solution. In the context of problems with discrete uncertainty sets, the pessimization oracle typically requires the solution of an expensive mixed-integer optimization problem <cit.>.§.§ Contribution and Proposed Approach We develop the first method for learning optimal robust classification trees, i.e., for solving problem (<ref>). 
A noteworthy characteristic of our method is that, unlike most approaches in the machine learning literature, it allows uncertainty on discrete and/or categorical covariates. We now summarize the key features of our proposed approach. * We show that the single-stage mixed-integer nonlinear robust optimization problem (<ref>) can be formulated equivalently as a two-stage mixed-integer linear robust problem, where the second-stage (recourse) decisions decide on the path followed by each datapoint in the tree and are allowed to adjust to the realization of the data perturbations. We also detail the connections between this two-stage formulation and previous methods for learning robust trees. * We study the unique setting where the data is discrete, a setting that is of particular relevance in high-stakes applications. To address this problem, we devise a discrete uncertainty set through a cost-and-budget framework. We detail a connection between this proposed uncertainty set and hypothesis testing, giving an informed method for calibrating the uncertainty set.* We present a cutting plane approach that solves the two-stage formulation to optimality with the proposed uncertainty set, allowing the formulation to be implemented in existing off-the-shelf MIO solvers. * We evaluate the performance of the formulation on publicly available datasets for several problem instances and show the effectiveness of our proposed method in mitigating the adverse effects of distribution shifts both on average and in the worst-case. More specifically, in our computations we observe an increase of up to 14.16% in worst-case and 4.72% in average-case accuracy when using a robust tree compared to a non-robust tree in scenarios where the distribution shift is known. The computations also indicate that similar improvements are obtained even if the parameters used to calibrate the uncertainty set are not perfectly known. The remainder of this paper is organized as follows. Section <ref> provides the equivalent reformulation of (<ref>) as a two-stage robust linear MIO and describes the model of uncertainty. In section <ref>, we describe an algorithm to solve the two-stage formulation to optimality. Section <ref> provides a way to calibrate the uncertainty set based on hypothesis testing. We then extend our method to different distribution shifts and datasets in section <ref>. We also compare the proposed approach to other robust tree formulations in the literature in section <ref>. Lastly, we present experimental results in section <ref>.§ ROBUST TREE FORMULATION In problem (<ref>), the objective function is highly nonlinear and discontinuous, causing significant difficulties in devising computational solution approaches. We thus propose to reformulate (<ref>) equivalently as a two-stage problem with linear objective, which will be possible to solve using a conjunction of MIO solvers and delayed constraint generation, see section <ref>. §.§ Defining a Classification Tree To reformulate problem (<ref>) as a two-stage problem, we first express Π_d as a set of points satisfying a finite number of linear constraints over a discrete set. For a tree of maximum depth d, let 𝒩 be the set of internal nodes and ℒ be the set of leaf nodes. There are 2^d - 1 nodes in 𝒩 and 2^d nodes in ℒ and each node is numbered from 1 to 2^d+1 - 1 in a breadth-first search pattern. For any internal node n ∈ N, let the left and right children of n be l(n) and r(n), respectively. 
For any node n ∈𝒩 ∪ ℒ, let a(n) be the parent of node n and 𝒜(n) be the set of all ancestor nodes of n. Figure <ref> illustrates this notation on a depth 3 classification tree. Each node in ℒ can be a prediction node where the classification of a sample is made; each node in 𝒩 can be a prediction node or a branching node where a binary test is performed. Any node that is not the root node is pruned (i.e., it is neither a branching nor a prediction node) if any of its ancestors is a prediction node. At each node n ∈𝒩 where the tree branches, a test is performed on a chosen feature f ∈ℱ with selected threshold θ, asking whether the perturbed value of the feature of sample i is greater than θ, i.e., if x^i_f + ξ^i_f ≥θ + 1. If the test passes, then the data sample is directed right to r(n); otherwise the sample is directed left to l(n). We let c_f and d_f be the lower and upper bounds of values for feature f within the training data, respectively. The set of possible threshold values we can choose from if branching on feature f is given by Θ(f) := {θ∈ℤ | c_f ≤θ < d_f }. We now encode elements of Π_d as a discrete set of decision variables for the outer maximization of problem (<ref>). Let b_nfθ indicate whether node n ∈𝒩 is a branching node and the binary test is on feature f ∈ℱ with threshold θ∈Θ(f). Binary decision variable v_n indicates whether node n ∈𝒩 ∪ ℒ is a prediction node. Also, we let w_nk be a binary variable that equals one when node n ∈𝒩 ∪ ℒ is a prediction node with assigned label k ∈𝒦. Let 𝐛, 𝐯, and 𝐰 collect the b_nfθ, v_n, and w_nk decision variables, respectively. With these variables in hand, we can define a set 𝒮 whose elements have a one-to-one mapping to the trees in Π_d:

𝒮 := {(𝐛, 𝐯, 𝐰) :
∑_f ∈ℱ∑_θ∈Θ(f) b_nfθ + v_n + ∑_m ∈𝒜(n) v_m = 1  ∀ n ∈𝒩
v_n + ∑_m ∈𝒜(n) v_m = 1  ∀ n ∈ℒ
v_n = ∑_k ∈𝒦 w_nk  ∀ n ∈𝒩 ∪ ℒ
b_nfθ∈{0,1}  ∀ n ∈𝒩, f ∈ℱ, θ∈Θ(f)
v_n ∈{0,1}  ∀ n ∈𝒩 ∪ ℒ
w_nk∈{0,1}  ∀ n ∈𝒩 ∪ ℒ, k ∈𝒦 }.

Constraints (<ref>) state that at each internal node, either a prediction is made at the node, a prediction is made at an ancestor, or a binary test is performed at the node. Constraints (<ref>) affirm that at each leaf node, a prediction is made at either the node or one of its ancestors. Constraints (<ref>) ensure that exactly one label in 𝒦 is predicted at each prediction node.

§.§ The Uncertainty Set

For the time being, we assume that the covariates of all features are integer-valued. We also assume that all uncertain data may shift in either direction by increasing or decreasing in value. We defer to section <ref> for handling categorical features and one-sided shifts that only allow increases (or only decreases) in uncertain data values. With these assumptions in place for the inner minimization problem in (<ref>), the uncertainty set Ξ is defined as

Ξ := {ξ∈ℤ^|ℐ| × |ℱ| : ∑_i ∈ℐ∑_f ∈ℱγ^i_f |ξ^i_f| ≤ϵ},

where γ^i_f ∈ℝ_+ is the cost of perturbing x^i_f by one (in either direction) and ϵ is the total allowable budget of uncertainty across data samples. Note that γ^i_f can take on different values for different samples i ∈ℐ and features f ∈ℱ, which is useful in domains where the likelihood of the covariate shift varies between different groups of samples and/or different features. As we show in section <ref>, there exists a connection between (<ref>) and hypothesis testing, which allows for calibration of the values of γ^i_f and ϵ based on domain knowledge about the possible distribution shifts.
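As an illustration of this bookkeeping, the helpers below generate the breadth-first node numbering, the child/parent/ancestor relations, and the candidate threshold sets Θ(f); the heap-style indexing (children 2n and 2n+1, parent ⌊n/2⌋) is an assumption consistent with the stated numbering, and the code is a sketch rather than part of the formulation.

def nodes(d):
    internal = list(range(1, 2 ** d))            # N: nodes 1, ..., 2^d - 1
    leaves = list(range(2 ** d, 2 ** (d + 1)))   # L: nodes 2^d, ..., 2^(d+1) - 1
    return internal, leaves

def left(n): return 2 * n
def right(n): return 2 * n + 1
def parent(n): return n // 2

def ancestors(n):
    anc = []
    while n > 1:
        n = parent(n)
        anc.append(n)
    return anc

def thresholds(c_f, d_f):
    return list(range(c_f, d_f))                 # integer theta with c_f <= theta < d_f

internal, leaves = nodes(d=3)
print(internal, leaves)          # [1, ..., 7] and [8, ..., 15]
print(ancestors(11))             # [5, 2, 1]
print(thresholds(c_f=0, d_f=4))  # [0, 1, 2, 3]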
§.§ Counting the Number of Correctly Classified Data SamplesWe wish to reformulate the expression for the count of correctly classified data samples in (<ref>) to create a more computationally tractable formulation. To represent the objective, we use the idea outlined by <cit.> that the number of correctly classified samples can be represented as the optimal value of a sum of maximum flow problems, where we adapt this approach to account for perturbations in the data.§.§.§ Capacitated Flow Graph for Sample i To determine whether a data sample i is correctly classified for a given tree (𝐛, 𝐯, 𝐰) ∈ S and perturbation ξ∈Ξ, we define a capacitated flow graph for sample i based on (𝐛, 𝐯, 𝐰), 𝐱^i, and ξ^i:Let V := N ∪ L∪{s, t} for s a source node and t a sink node. Given tree (𝐛, 𝐯, 𝐰) ∈ S, the capacitated flow graph for sample i∈I is a graph (V, E) such that for n ∈V\{t} and m ∈V\{s}, edge (n,m) is in E and has capacity 1 if and only if one of the following is true: * n = s and m = 1;* n ∈N, m = l(n), and there exists an f ∈F and θ∈Θ(f) such that b_nfθ = 1 and x^i_f + ξ^i_f ≤θ;* n ∈N, m = r(n), and there exists an f ∈F and θ∈Θ(f) such that b_nfθ = 1 and x^i_f + ξ^i_f ≥θ + 1;* n ∈N ∪ L, m = t, and w_ny^i = 1.Furthermore, edge (n,m) is in E and has capacity 0 if and only if one of the following is true: * n ∈N, m = l(n), and there exists an f ∈F and θ∈Θ(f) such that b_nfθ = 1 and x^i_f + ξ^i_f ≥θ + 1;* n ∈N, m = r(n), and there exists an f ∈F and θ∈Θ(f) such that b_nfθ = 1 and x^i_f + ξ^i_f ≤θ;* n ∈N ∪ L, m = t, and w_ny^i = 0.For every branching node n with test on threshold θ of feature f, the capacitated flow graph has either capacity 0 or 1 for the edge leading to the sink node t and the edge that is traversed. Specifically, if the test x^i_f + ξ^i_f ≥θ+1 passes (resp. fails), edge (n,r(n)) (resp. (n,l(n))) has capacity 1 and edges (n,t) and (n,l(n)) (resp. (n, r(n))) have capacity 0. For every prediction node, the edge leading to t has capacity 1 only if the assigned class of that node is y^i, and all other edges leaving the prediction node have capacity 0. Lastly, as an entry point for a data sample, the edge from the source s to the root node 1 has capacity 1. This constructed capacitated flow graph for sample i has a maximum flow from s to t of 1 if and only if sample i, perturbed by ξ^i, is correctly classified. Figure <ref> illustrates the construction of the capacitated flow graph for sample i.§.§.§ Maximum Flow ProblemWe now introduce decision variables that represent the flow of point i from source s to t in the capacitated flow graph. Let the binary decision variable z^i_n,m indicate whether datapoint i flows down the edge between n and m and is correctly classified by the tree for m ∈𝒩 ∪ ℒ∪{t} and n := a(m) ∈𝒩 ∪ ℒ∪{s} under perturbation ξ.Also let 𝐳 collect all the z^i_a(m),m for all i ∈ I and all graph edges (a(m),m) for m ∈𝒩 ∪ ℒ∪{s}. Note that z^i_n,m are the decision variables of a maximum flow problem, where in the capacitated flow graph for sample i, z^i_n,m is 1 if and only if the maximum flow is 1 and the flow traverses arc (n,m). 
For fixed (𝐛,𝐯,𝐰) and ξ, maximizing the sum of z^i_n,t over all samples i ∈ I and all nodes n ∈𝒩 ∪ ℒ yields the count of correctly classified samplesmax_𝐳∈𝒵(𝐛, 𝐰, ξ) ∑_i ∈ℐ∑_n ∈𝒩∪ℒ z_n,t^i,where the set 𝒵 defines the maximum flow constraints for each sample's capacitated flow graph: 𝒵(𝐛, 𝐰, ξ) :={𝐳∈{0,1}^|ℐ| × (2^d+2 - 2) :z_n, l(n)^i ≤∑_f ∈ℱ∑_θ∈Θ(f)𝕀[x^i_f + ξ^i_f ≤θ]b_nfθ ∀ i ∈ℐ, n ∈𝒩,z_n, r(n)^i ≤∑_f ∈ℱ∑_θ∈Θ(f)𝕀[x^i_f + ξ^i_f ≥θ+1]b_nfθ ∀ i ∈ℐ, n ∈𝒩,z^i_a(n),n = z^i_n, l(n) + z^i_n, r(n) + z^i_n,t ∀ i ∈ℐ, n ∈𝒩,z^i_a(n),n = z^i_n,t ∀ i ∈ℐ, n ∈ℒ,z^i_n,t≤ w_n, y^i ∀ i ∈ℐ, n ∈𝒩 ∪ ℒ }. Problem (<ref>) maximizes the sum of flows over the capacitated flow graphs for all data samples, counting the number of correctly classified samples after perturbation. Constraints (<ref>) and (<ref>) are capacity constraints that control the flow of samples based on 𝐱 + ξ and the tree structure. Constraints (<ref>) and (<ref>) are flow conservation constraints. Lastly, constraint (<ref>) blocks any flow to the sink if the node is either not a prediction node or the classification at that node is incorrect. §.§ Two-Stage ReformulationWith the definition of 𝒮 in (<ref>), definition of Ξ in (<ref>), and reformulation of the number of correctly classified datapoints as the optimal value of the maximum flow problem (<ref>) in hand, we can rewrite problem (<ref>) equivalently as a two-stage linear robust optimization problem. In the first-stage, the variables (𝐛, 𝐯, 𝐰) ∈ S that encode the tree are selected, corresponding to the outer maximization in problem (<ref>). Once the tree is selected, an adversarial perturbation of the covariates ξ from the set Ξ is chosen, corresponding to the inner minimization in (<ref>). In the second-stage problem, given (𝐛, 𝐯, 𝐰) and ξ, the number of correctly classified perturbed samples in the data {𝐱^i + ξ^i, y^i}_i ∈ I is calculated as in (<ref>). This idea leads to the following equivalent reformulation of problem (<ref>) for learning optimal robust classification trees:max_(𝐛, 𝐯, 𝐰) ∈ S min_ξ∈Ξ max_𝐳∈𝒵(𝐛, 𝐰, ξ) ∑_i ∈ℐ∑_n ∈𝒩∪ℒ z_n,t^i.Problem (<ref>) is equivalent to problem (<ref>), but unlike formulation (<ref>), it has a linear objective with a linear set of constraints for each stage of the formulation. However, (<ref>) complicates the problem in that it introduces a second stage maximization. Yet in spite of problem (<ref>) being a two-stage problem, this equivalent reformulation of (<ref>) can be solved to optimality with the help of MIO solvers, as we show in section <ref>.§ SOLUTION METHOD We now present a method for solving problem (<ref>) (and therefore problem (<ref>)) through a reformulation that can leverage existing off-the-shelf MIO solvers.§.§ Reformulating the Two-Stage Problem To solve the two-stage optimization problem (<ref>), we reformulate it equivalently as a single-stage robust MIO. The first part of the reformulation is the dualization of the inner maximization problem (<ref>). Recall that the inner maximization problem is a maximum flow problem; therefore, the dual of the inner maximization problem is a minimum cut problem. Note that strong duality holds, thus replacing the inner maximization problem with its dual results in an equivalent reformulation with the same optimal objective value and same set of optimal trees.To write the dual, we define the dual variables corresponding to the minimum cut problem. 
Let q^i_n,m be the binary dual variable that equals 1 if and only if the edge that connects nodes m ∈𝒩 ∪ ℒ∪{t} and n = a(m) in the capacitated flow graph for sample i is in the minimum cut-set. We let 𝐪^i be the collection of q^i_n,m over tree edges (n,m) so that 𝐪^i represents a cut-set on the capacitated flow graph for data sample i. We also define p^i_n to be a binary variable that equals 1 if and only if node n ∈𝒩 ∪ ℒ∪{s} is in the source set corresponding to the capacitated flow graph of data sample i ∈ℐ. Let 𝒬 be the set of all possible cut-sets in a classification tree described in section <ref>, described as𝒬:={𝐪∈{0,1}^(2^d+2 - 2) :∃ p_n ∈{0,1},∀ n ∈𝒩 ∪ ℒ∪{s} s.t.q_n, l(n) - p_n + p_l(n)≥ 0∀ n ∈𝒩q_n, r(n) - p_n + p_r(n)≥ 0∀ n ∈𝒩 q_s,1 + p_1 ≥ 1 -p_n + q_n, t≥ 0n ∈𝒩 ∪ ℒ }. Constraints (<ref>)-(<ref>) ensure that if a given node n is in the source set and one of its children is in the sink set, then the arc connecting the nodes is in the cut-set. Moreover, let 𝐪∈𝒬^| I| be a collection of cut-sets 𝐪^i ∈𝒬 across data samples i ∈ℐ. Then, taking the dual of the inner maximization problem in (<ref>) gives the following single-stage formulation:max_(𝐛, 𝐯 ,𝐰)∈ Smin_𝐪∈𝒬^|ℐ|, ξ∈Ξ ∑_i ∈ℐ∑_n ∈𝒩∪ℒq^i_n, t w_n,y^i + ∑_i ∈ℐq^i_s, 1 + ∑_i ∈ℐ∑_n ∈𝒩∑_f ∈ℱ∑_θ∈Θ(f)𝕀[x^i_f + ξ^i_f ≤θ]q^i_n, l(n) b_nfθ + ∑_i ∈ℐ∑_n ∈𝒩∑_f ∈ℱ∑_θ∈Θ(f)𝕀[x^i_f + ξ^i_f ≥θ+1 ]q^i_n, r(n)b_nfθ.Equivalently, we can formulate (<ref>) using the hypograph reformulation max_𝐛, 𝐯, 𝐰,t ts.t. t ≤∑_i ∈ℐ∑_n ∈𝒩∪ℒq^i_n, t w_n,y^i + ∑_i ∈ℐq^i_s, 1 + ∑_i ∈ℐ∑_n ∈𝒩∑_f ∈ℱ∑_θ∈Θ(f)𝕀[x^i_f + ξ^i_f ≤θ]q^i_n, l(n) b_nfθ + ∑_i ∈ℐ∑_n ∈𝒩∑_f ∈ℱ∑_θ∈Θ(f)𝕀[x^i_f + ξ^i_f ≥θ+1 ]q^i_n, r(n)b_nfθ ∀𝐪∈ Q^| I|, ξ∈Ξ (𝐛, 𝐯, 𝐰) ∈ St ∈ℝ, where decision variable t ∈ℝ and constraints (<ref>) represent the hypograph of the objective function of (<ref>). Note that formulation (<ref>) is a linear MIO formulation for solving problem (<ref>), where the optimal value of t represents the optimal objective value of (<ref>) – that is, the number of correctly classified datapoints in training under a worst-case realization of the perturbed data. However, (<ref>) introduces an extremely large number of constraints (<ref>), one for each combination of cut-set and perturbation in Q^| I|×Ξ, and is impractical to solve directly using MIO solvers.§.§ Solving the Single-Stage Reformulation A common method for solving problems with a large number of constraints is to use a cutting plane approach: solving a simpler relaxation of the original problem as an initial main problem, then iteratively adding constraints as needed. This method typically avoids solving a problem with a prohibitive number of constraints. We propose such a delayed constraint generation approach to solve formulation (<ref>).§.§.§ The Main ProblemThe approach begins with a main problem that initially relaxes all constraints (<ref>) within formulation (<ref>): max_𝐛, 𝐯, 𝐰,t ts.t.t ≤ | I| (𝐛, 𝐯, 𝐰) ∈ St ∈ℝ, where constraint (<ref>) bounds t by its maximum value, i.e., the size of the training set.Formulation (<ref>) can be solved with a linear programming-based branch-and-bound algorithm: a standard approach used to solve MIOs in most solvers. At any integer solution in the branch-and-bound tree, we find and add a violated constraint of the form (<ref>) to (<ref>) if one exists. We describe such a process to find a violated constraint in the following section. 
By adding violated constraints to the main problem along the branch-and-bound process, we converge to the optimal solution of (<ref>), which occurs when no violated constraints can be found for the candidate solution and no better integer solutions exist in the branch-and-bound tree. Note that most MIO solvers allow adding constraints as indicated via callbacks.§.§.§ The Subproblem We now describe how to check whether a given solution to the relaxed main problem is feasible for (<ref>), and how to find a violated constraint (<ref>) when infeasible. Given an integer solution of the main problem (𝐛̅,𝐯̅,𝐰̅,t̅), we consider the following subproblem:min_𝐪∈ Q^| I|, ξ∈Ξ { ∑_i ∈ℐ∑_n ∈𝒩∪ℒq^i_n, tw̅_n,y^i + ∑_i ∈ℐq^i_s, 1 + ∑_i ∈ℐ∑_n ∈𝒩∑_f ∈ℱ∑_θ∈Θ(f)𝕀[x^i_f + ξ^i_f ≤θ]q^i_n, l(n)b̅_nfθ + ∑_i ∈ℐ∑_n ∈𝒩∑_f ∈ℱ∑_θ∈Θ(f)𝕀[x^i_f + ξ^i_f ≥θ+1 ]q^i_n, r(n)b̅_nfθ}.Note that the objective function of problem (<ref>) is the right-hand side of constraints (<ref>). Hence, if t̅ is greater than the optimal value of (<ref>), then the cut (<ref>) where (𝐪,ξ) are fixed to their values in an optimal solution (𝐪̃,ξ̃) of (<ref>) yields a violated constraint that can be added to the main problem.Note that problem (<ref>) corresponds to the inner minimization problem of the dual problem (<ref>), and therefore the solution of problem (<ref>) corresponds to the number of correctly classified samples under a worst-case perturbation given the tree (𝐛̅,𝐯̅,𝐰̅). So to solve (<ref>), we need to find the minimum cost perturbation that misclassifies each sample. To find how sample i can be misclassified in a given tree (𝐛̅,𝐯̅,𝐰̅), we define a decision path for sample i.Given a tree (𝐛̅,𝐯̅,𝐰̅) and sample i ∈ I, a decision path is a sequence of nodes (s, n_1, n_2, …, n_k, t) ⊆𝒩 ∪ ℒ∪{s,t} for k ∈{1, 2, …, d+1} such that in the capacitated flow graph for sample i, n_j = a(n_j+1) for all j ∈{2, …, k-1}, n_1 is the root node 1, and n_k is a prediction node (i.e., v_n_k = 1).We encode a decision path for sample i in our subproblem (<ref>) through a decision cut-set 𝐪^i, defined as follows.Given a decision path (s, n_1, n_2, …, n_k, t) ⊆𝒩 ∪ ℒ∪{s,t} and sample i, a decision cut-set 𝐪^i is the element of 𝒬 such that q^i_n,m = 1 if and only if n ∈{n_1, …, n_k} and m ∉{n_2, …, n_k}.Accordingly, the value of a decision cut-set for a sample i is defined as the sum of edge capacities of the cut-set in the capacitated flow graph for sample i. So, sample i is misclassified on a decision path if and only if the associated decision cut-set has a value of 0. Thus, we enumerate over all decision paths that misclassify sample i and identify the one with the lowest cost perturbation. Then, for each decision path with associated decision cut-set 𝐪̅^i, we identify the minimum cost perturbation that ensures the sample flows through the decision path by solving the following problem:min_ξ^i∈ℤ^| F|∑_f ∈ℱγ^i_f |ξ^i_f| s.t.∑_n ∈𝒩∑_f∈ F∑_θ∈Θ(f)𝕀[x^i_f + ξ^i_f ≤θ]b̅_nfθq̅^i_n, l(n) + ∑_n ∈𝒩∑_f∈ F∑_θ∈Θ(f)𝕀[x^i_f + ξ^i_f ≥θ+1 ]b̅_nfθq̅^i_n, r(n)=0.In this formulation, the objective minimizes the cost of perturbation. The left-hand side of the constraint corresponds to the objective of (<ref>). The requirement that it equals zero ensures that the choice of ξ^i follows a decision path associated with decision cut-set 𝐪̅^i that misclassifies i (i.e., a path ending at prediction node n such that w̅_n, y^i = 0). For a formal justification of why it is sufficient to solve (<ref>) at only decision cut-sets 𝐪̅^i, we refer to the Electronic Companion <ref>. 
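A sketch of this separation step is given below: for a fixed tree it enumerates the decision paths whose predicted label differs from y^i, computes by inspection the cheapest integer perturbation that routes the sample down each such path, and keeps the smallest cost ψ^i; a second routine then spends the budget ϵ greedily across samples, mirroring the procedure detailed in the next paragraphs. The data structures and function names are illustrative assumptions, not the authors' implementation.

import math

def path_cost(x, gamma, tests):
    # Minimum cost of an integer perturbation satisfying the branching tests
    # (f, theta, go_right) along one decision path, starting from sample x.
    lo, hi = {}, {}
    for f, theta, go_right in tests:
        if go_right:
            lo[f] = max(lo.get(f, -math.inf), theta + 1)   # need x_f + xi_f >= theta + 1
        else:
            hi[f] = min(hi.get(f, math.inf), theta)        # need x_f + xi_f <= theta
    cost = 0.0
    for f in set(lo) | set(hi):
        l, h = lo.get(f, -math.inf), hi.get(f, math.inf)
        if l > h:
            return math.inf                                # contradictory tests: path unreachable
        cost += gamma[f] * max(l - x[f], x[f] - h, 0)
    return cost

def psi(x, y, gamma, paths):
    # paths: list of (tests, predicted_label), one entry per decision path.
    wrong = [path_cost(x, gamma, tests) for tests, label in paths if label != y]
    return min(wrong) if wrong else math.inf

def worst_case_correct(psis, epsilon):
    # Perturb samples in order of increasing psi until the budget is exhausted.
    n_correct = sum(1 for p in psis if p > 0)
    for p in sorted(p for p in psis if 0 < p < math.inf):
        if epsilon < p:
            break
        epsilon -= p
        n_correct -= 1
    return n_correct

# Tiny usage: a depth-1 tree branching on feature 0 at threshold 2, labels 0 (left) and 1 (right).
paths = [([(0, 2, False)], 0), ([(0, 2, True)], 1)]
print(psi(x=[1], y=0, gamma=[1.0], paths=paths))              # cheapest misclassification costs 2.0
print(worst_case_correct([2.0, 0.5, math.inf], epsilon=2.0))  # two samples remain correctly classified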
Problem (<ref>) can be decomposed and solved easily. Indeed, the constraint in (<ref>) implies that if b̅_nfθq̅_n,l(n)^i>0, then 𝕀[x^i_f + ξ^i_f ≤θ]=0. Similarly, if b̅_nfθq̅_n,r(n)^i>0, then 𝕀[x^i_f + ξ^i_f ≥θ+1]=0. Thus, we can reformulate (<ref>) asmin_ξ^i∈ℤ^| F|∑_f ∈ℱγ^i_f |ξ^i_f| s.t.ξ_f^i≥θ+1-x_f^i ∀ n ∈𝒩,f∈ F,θ∈Θ(f):b̅_nfθq̅_n,l(n)^i>0ξ_f^i≤θ-x_f^i ∀ n ∈𝒩,f∈ F,θ∈Θ(f):b̅_nfθq̅_n,r(n)^i>0.This problem fully decomposes for each variable ξ^i_f, and can be solved by inspection.Let ψ^i be the minimum cost of (<ref>) across all possible decision cut-sets. Once {ψ^i}_i ∈ I has been obtained, we can impose the constraint in Ξ, see (<ref>), which caps uncertainty to a budget ϵ. Recall that ψ^i denotes the smallest cost of perturbation to misclassify sample i. Thus, the worst-case set of samples to perturb can be obtained by sorting all non-zero ψ^i in non-decreasing order, performing the perturbations in this order until the budget ϵ is saturated. These perturbations define (𝐪̃,ξ̃), where 𝐪̃ is the collection of decision cut-sets for the decision paths of every sample after perturbation, and ξ̃ is the collection of perturbations made (with any unperturbed samples i after the budget is saturated having perturbation ξ̃^i = 0). The solution (𝐪̃,ξ̃) then defines the constraintt≤∑_i ∈ℐ∑_n ∈𝒩∪ℒq̃^i_n, t w_n,y^i + ∑_i ∈ℐq̃^i_s, 1 + ∑_i ∈ℐ∑_n ∈𝒩∑_f ∈ℱ∑_θ∈Θ(f)𝕀[x^i_f + ξ̃^i_f ≤θ]q̃^i_n, l(n) b_nfθ + ∑_i ∈ℐ∑_n ∈𝒩∑_f ∈ℱ∑_θ∈Θ(f)𝕀[x^i_f + ξ̃^i_f ≥θ+1 ]q̃^i_n, r(n) b_nfθwhich is of the form (<ref>). The violated constraint (<ref>) is then added to the main problem. We summarize the procedure for the subproblem in algorithm <ref>. §.§ Improving the AlgorithmA direct implementation of the aforementioned delayed constraint generation algorithm may be slow in practice: a prohibitive number of constraints (<ref>) may need to be added before convergence. To improve the performance of the method, we represent the number of correctly classified points, corresponding to the inner mimization in (<ref>), in an extended formulation. Such representations of nonlinear functions using additional variables often yield stronger relaxations <cit.>, since each linear cut in this lifted space translates to a nonlinear and more powerful cut in the original space. In particular, let t^i ∈{0,1} be a decision variable that indicates whether sample i is correctly classified for a given tree (𝐛,𝐯,𝐰) and worst-case perturbation ξ̃ with corresponding minimum cut 𝐪̃, and let 𝐭 be the collection of t^i over i ∈ℐ.Since t is the total number of correctly classified points, t is equal to ∑_i ∈ℐ t_i. So, we reformulate problem (<ref>) equivalently as max_𝐛, 𝐯, 𝐰,𝐭∑_i ∈ℐt^is.t.∑_i ∈ℐt^i ≤∑_i ∈ℐ∑_n ∈𝒩∪ℒq^i_n, t w_n,y^i + ∑_i ∈ℐq^i_s, 1 + ∑_i ∈ℐ∑_n ∈𝒩∑_f ∈ℱ∑_θ∈Θ(f)𝕀[x^i_f + ξ^i_f ≤θ]q^i_n, l(n) b_nfθ + ∑_i ∈ℐ∑_n ∈𝒩∑_f ∈ℱ∑_θ∈Θ(f)𝕀[x^i_f + ξ^i_f ≥θ+1 ]q^i_n, r(n)b_nfθ ∀𝐪∈ Q^| I|, ξ∈Ξ (𝐛, 𝐯, 𝐰) ∈ St^i ∈ℝ ∀ i ∈ℐ. Since formulation (<ref>) is obtained from (<ref>) through the substitution t = ∑_i ∈ I t^i, both formulations have identical continuous relaxations. Our approach to further strengthen formulation (<ref>) relies on one observation: for any datapoint i ∈ℐ and fixed tree π∈Π_d such that π(𝐱^i) ≠ y^i, there exists an optimal solution to the inner minimization problem in (<ref>) such that π(𝐱^i + ξ^i) ≠ y^i. 
This is because the inner minimization problem aims to misclassify the most points, so no perturbation is needed on i to misclassify it, i.e., ξ^i = 0 is part of the optimal solution.It follows that we can enforce the condition that any missclassified point i in the nominal case cannot be correctly classified under a worst-case perturbation via the constraintt_i≤∑_n ∈𝒩∪ℒq̃^i_n, t w_n,y^i + q̃^i_s, 1 + ∑_n ∈𝒩∑_f ∈ℱ∑_θ∈Θ(f)𝕀[x^i_f ≤θ]q̃^i_n, l(n) b_nfθ + ∑_n ∈𝒩∑_f ∈ℱ∑_θ∈Θ(f)𝕀[x^i_f ≥θ+1 ]q̃^i_n, r(n)b_nfθ,where decision cut-set 𝐪̃^i is associated with the decision path of i without perturbation. A derivation of constraints (<ref>) can be found in <cit.>. Intuitively, the constraintsare analogous to constraints (<ref>), but without summing over all datapoints in I and fixing the perturbation ξ̃ to zero. Moreover, separation of inequalities (<ref>) is similar to the separation of (<ref>).This improves the solving times by an order-of-magnitude, which we detail further in section <ref>.§ CALIBRATION OF UNCERTAINTY SET PARAMETERSIn this section, we propose a method for calibrating the parameters of the uncertainty set (<ref>). We examine the cases where all features are either unbounded or bounded integers. Other types of features and datasets with mixed feature types are discussed in section <ref>. §.§ Unbounded Integer Data We first show how to calibrate the uncertainty set in the case where all features in the data are unbounded integers (or where no bound is known). We calibrate the parameters of (<ref>) with an assumed probability of certainty in the nominal value of for feature f of training sample i, denoted ρ^i_f ∈ (0,1]. Without any other knowledge of the distribution shift, we follow the principle of maximum entropy <cit.>, which chooses the distribution of the perturbations with greatest entropy and thus highest uncertainty subject to our assumption of the probability of certainty ρ^i_f. To this end, we select the geometric distribution with parameter ρ^i_f as the distribution of the magnitude of perturbation |ξ^i_f|, and select the direction of perturbation (ξ^i_f) uniformly. That is, for ζ^i_f the realization of the perturbation of x^i_f, the probability that x^i_f is perturbed by ξ^i_f is given by the symmetric geometric distributionℙ(ξ^i_f = ζ^i_f) := (0.5)^𝕀[|ζ^i_f|>0]ρ^i_f(1-ρ^i_f)^|ζ^i_f|and the magnitude of perturbation follows a geometric distribution such thatℙ(|ξ^i_f| = |ζ^i_f|) = ρ^i_f(1-ρ^i_f)^|ζ^i_f|.We now follow the idea of building uncertainty sets using hypothesis testing as in <cit.>. We set up a likelihood ratio test on the magnitude of the perturbation with threshold λ^|ℐ| for λ∈ (0,1], where we add the exponent |ℐ| to normalize across different datasets with varying number of training samples. Our null hypothesis is that the magnitude of a given perturbation ξ of our dataset comes from the distribution described by (<ref>). If this null hypothesis fails to be rejected, then ξ lies within our uncertainty set. Hence, ξ lies within our uncertainty set if it satisfies the constraint∏_i ∈ℐ∏_f ∈ℱρ^i_f(1-ρ^i_f)^|ξ^i_f|/∏_i ∈ℐ∏_f ∈ℱρ^i_f≥λ^|ℐ|.The numerator of the left hand side of (<ref>) is the likelihood under distribution (<ref>) of a perturbation of magnitude |ξ^i_f|, and thus the likelihood under the null hypothesis. The denominator of the left hand side is the likelihood of the most probable realization under (<ref>) (i.e., the likelihood of no perturbation). 
The test (<ref>) can be equivalently represented as∑_i ∈ℐ∑_f ∈ℱ|ξ^i_f| log(1/1-ρ^i_f) ≤ -|ℐ|logλ.Note that (<ref>) is of the form of the constraint in uncertainty set (<ref>) with parameters γ^i_f = log(1/1-ρ^i_f) and ϵ = -|ℐ|logλ.Therefore, a method of tuning the parameters of (<ref>) is to use the probabilities of certainty for each feature of each sample, which can be derived from domain knowledge. The value of λ is used to tune the size of the uncertainty set, and hence the level of robustness. For λ = 1, the budget of uncertainty ϵ is 0, meaning that no distribution shift occurs and our formulation is equivalent to the formulation from <cit.>. As λ decreases, more and more perturbations of the data are considered, and thus the model is robust to more scenarios. Hence, λ is a parameter that can be tuned to adjust the level of robustness, which can be chosen by methods such as cross validation. §.§ Bounded Integer Data In many applications, there exist known bounds on the values of the integer features. There may be a bound on only one end of an integer feature (e.g., age, income) or on both ends (e.g., integer rating on a bounded scale, binary features). Here, we discuss how to tune the uncertainty set parameters for datasets involving bounded integer features with known bounds.§.§.§ Integer Features with One-Sided BoundsWe first consider the case where all covariates admit only a one-sided bound. Without loss of generality, we assume that all bounds are lower bounds, and we denote the lower bound on feature f by L_f (a symmetric argument can be made if all features are upper bounded instead).To tune the hyperparameters of (<ref>), we assume that the probability of certainty in the nominal value of feature f for sample i, ρ^i_f, satisfies 0 < ρ^i_f < 1. Under this assumption, the perturbation ξ^i_f is distributed as the truncated symmetric geometric distribution conditioned on x^i_f + ξ^i_f ≥ L_f. That is, for ζ^i_f the realization of the perturbation of x^i_f, the probability that x^i_f is perturbed by ξ^i_f isℙ(ξ^i_f = ζ^i_f) := ρ^i_f (r^i_f)^|ζ^i_f|for 0 < r^i_f ≤ 1 some constant that makes ℙ a distribution over the support of all ξ^i_f that satisfy x^i_f + ξ^i_f ≥ L_f. To find the value of r^i_f for (<ref>), we utilize the following lemma.For 0 < ρ^i_f < 1, there exists a real-valued solution r^i_f to ρ^i_f(r^i_f)^x-L+1 - (ρ^i_f + 1)r^i_f + 1 - ρ^i_f = 0such that 0 < r^i_f < 1. Furthermore, when used in (<ref>), r^i_f defines a valid distribution over the support of all ξ^i_f∈ℤ that satisfy x^i_f + ξ^i_f ≥ L_f.Standard root-finding approaches (e.g., the bisection method) can be used to solve (<ref>) in order to find r^i_f. The proof of Lemma <ref> can be found in the Electronic Companion <ref>.Once r^i_f is found, we set up a hypothesis test with threshold λ of the form in a similar fashion as done in section <ref>, which yields the following condition on ξ_f^i:∏_i ∈ℐ∏_f ∈ℱρ^i_f(r^i_f)^|ξ^i_f|/∏_i ∈ℐ∏_f ∈ℱρ^i_f≥λ^|ℐ|,reducing down to∑_i ∈ℐ∑_f ∈ℱ |ξ^i_f|log(1/r^i_f)≤ - |ℐ| logλ.Therefore, the tuned hyperparameters for this case are γ^i_f = log1/r^i_f and ϵ = - |ℐ| logλ.§.§.§ Integer Features with Both Upper and Lower Bounds We now assume that there is both a lower bound L_f and an upper bound U_f on all features f. As is the case with the one-sided bounded features, we set up a hypothesis testing framework to tune the hyperparameters. 
We assume that 1/U_f - L_f + 1≤ρ^i_f ≤ 1 for all i ∈I and f ∈F, and utilize the same truncated symmetric geometric distribution characterized by (<ref>) with support [L_f - x^i_f, U_f - x^i_f]. Note that we place the lower bound 1/U_f - L_f + 1 on ρ^i_f as assuming ρ^i_f = 1/U_f - L_f + 1 under maximal entropy makes the perturbation of feature f at sample i uniformly distributed, meaning that there is maximal uncertainty on the value of x^i_f + ξ^i_f. For ρ^i_f > 1/U_f - L_f + 1, we find an 0 < r^i_f < 1 for each sample i by the following lemma:For 1/U_f - L_f + 1 < ρ^i_f < 1, there exists a real-valued solution r^i_f toρ^i_f(r^i_f)^max{U_f - x^i_f, x^i_f - L_f}+1 + ρ^i_f(r^i_f)^min{U_f - x^i_f, x^i_f - L_f}+1 - (ρ^i_f + 1)r^i_f + 1 - ρ^i_f = 0.such that 0 < r^i_f < 1. Furthermore,when used in (<ref>), r^i_f defines a valid distribution over the support of all ξ^i_f ∈ℤ that satisfy L_f ≤ x^i_f + ξ^i_f ≤ U_f.The proof of Lemma <ref> can be found in the Electronic Companion <ref>. Similar to section <ref>, the hypothesis test is set up with the found r^i_f values through Lemma <ref>. The hypothesis test is (<ref>), and the tuned parameters of uncertainty set (<ref>) are γ^i_f = log1/r^i_f and ϵ = - |ℐ| logλ.Note that binary features are a special case of integer features with lower bound 0 and upper bound 1. Using the truncated symmetric geometric distribution (<ref>) with 1/2≤ρ^i_f ≤ 1, a perturbation ξ^i_f is distributed asℙ(ξ^i_f = ζ^i_f) := ρ^i_f(1-ρ^i_f/ρ^i_f)^|ζ^i_f| = ρ^i_fif|ζ^i_f| = 0 1-ρ^i_fotherwise.Thus, it follows that the tuned hyperparameters for an uncertainty set (<ref>) with binary features (L_f = 0 and U_f = 1) is γ^i_f = logρ^i_f/1 - ρ^i_f and ϵ = - |ℐ| logλ.§ VARIANTS AND EXTENSIONS In this section, we describe how our model and solution approach can be adapted to handle one-sided distribution shifts, distribution shifts on categorical features, and distribution shifts on mixed datasets. §.§ One-Sided Distribution Shifts In some applications, the direction of the distribution shift may be known. For instance, say that only nonnegative shifts in the values of the covariates are possible between training and deployment phases. Such a scenario can occur when, for example, there is a change in the framing of a survey question between training and deployment that reduces the sensitivity of a question, skewing the distribution of answers in the positive direction. In such settings, a model that uses set (<ref>) hedges against infeasible shifts and the uncertainty setΞ_+ := {ξ∈ℤ_+^|ℐ| × |ℱ| :∑_i ∈ℐ∑_f ∈ℱγ^i_f ξ^i_f ≤ϵ}that only allows for nonnegative values of ξ is more appropriate. To solve [eq:single_setup](𝒫_Ξ_+), we can use the same approach as described in section <ref>, only changing the subproblem slightly by restricting the perturbations ξ^i considered to be in ℤ_+^|ℱ| for each i ∈ I (namely, in minimization problem (<ref>)). A similar uncertainty set can likewise be defined to allow only nonpositive shifts or to allow a mixture of different one-sided shifts, where (<ref>) with such an uncertainty set can be solved with a similar adaptation to the subproblem.For unbounded features, the uncertainty set calibration method in section <ref> can be applied in the same way as the two-sided perturbation method, as the hypothesis test is on the magnitude of perturbation regardless of direction of perturbation. For bounded features, the uncertainty set calibration method in section <ref> can also be used by setting L_f to the value of x^i_f. 
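In practice, the calibration rules above reduce to closed-form expressions plus a one-dimensional root-finding step. The following is a minimal sketch (not part of the paper's implementation; the function names are placeholders and NumPy/SciPy are assumed) of how the cost coefficients γ^i_f and the budget ϵ could be computed from the probabilities of certainty ρ^i_f and the threshold λ; the one-sided bounded case uses bisection on the polynomial of Lemma <ref>, as suggested above.

import numpy as np
from scipy.optimize import brentq

def gamma_unbounded(rho):
    # Unbounded integer feature: gamma = log(1 / (1 - rho))
    return np.log(1.0 / (1.0 - rho))

def gamma_binary(rho):
    # Binary feature (L_f = 0, U_f = 1), assuming rho >= 1/2: gamma = log(rho / (1 - rho))
    return np.log(rho / (1.0 - rho))

def gamma_one_sided(rho, x, L):
    # Integer feature with lower bound L and nominal value x: solve
    # rho*r^(x-L+1) - (rho+1)*r + 1 - rho = 0 for r in (0,1) by bisection,
    # then gamma = log(1 / r).
    f = lambda r: rho * r ** (x - L + 1) - (rho + 1.0) * r + 1.0 - rho
    r = brentq(f, 1e-12, 1.0 - 1e-12)  # sign change guaranteed for 0 < rho < 1
    return np.log(1.0 / r)

def epsilon_budget(n_samples, lam):
    # Budget of uncertainty from the likelihood-ratio threshold lambda
    return -n_samples * np.log(lam)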
§.§ Distribution Shifts on Categorical FeaturesWith some modifications, the modeling and solution approaches in this paper can be applied to handle general (not necessarily binary) categorical features. Assuming the entire original dataset consists of categorical features, we index categorical features in the original data in the set 𝒞 and one-hot encode all features to obtain the new dataset features ℱ, wherein features indexed in the set ℱ_c ⊆ℱ are used in the one-hot encoding of categorical feature c (i.e., the sets {ℱ_c}_c ∈𝒞 constitute a partition of ℱ). Then only one of two scenarios can materialize for each sample i ∈ I and categorical feature c ∈ C: either the categorical feature c is not perturbed (in which case ξ^i_f = 0 for all f ∈ F_c) or it is, in which case the category value is changed from f' ∈ℱ_c to f̅∈ℱ_c \{f'}, i.e., ξ^i_f' = -1 for f' ∈ F_c such that x^i_f' = 1, ξ^i_f̅ = 1 for some f̅∈ F_c \{f'} and ξ^i_f = 0 ∀ f ∈ℱ_c \{f', f̅}. We characterize the values of ξ by the following uncertainty set:Ξ_cat := {ξ∈{-1, 0 , 1}^|I| × |F| : ∑_i ∈ℐ∑_f ∈Fγ^i_f|ξ^i_f| ≤ϵ, 0 ≤ x^i_f + ξ^i_f ∀i ∈ I, f ∈ F_k, c ∈𝒞, ∑_f ∈ F_c x^i_f + ξ^i_f = 1∀i ∈ I, c ∈𝒞 },where the first constraint of (<ref>) is the same as the other previously mentioned uncertainty sets, which penalize any change in feature f ∈ F with γ^i_f with a total budget of ϵ and the other constraints define values of ξ that either perturb or do not perturb each sample and categorical feature. The second and third constraints of (<ref>) ensure that 𝐱 + ξ maintains a one-hot encoding of each categorical feature.When solving problem (<ref>) in algorithm <ref> for each i ∈ I, the perturbations that lead down each path must satisfy the constraints on ξ in uncertainty set (<ref>) to ensure that the perturbation on each categorical feature corresponds to a change of value in the one-hot encoding. In other words, for each categorical feature c ∈ C that is perturbed in sample i, we must have ξ^i_f' = -1 and ξ^i_f̅ = 1 for two distinct f',f̅∈ F_c. So by the left-hand side of the first constraint of (<ref>), perturbing one categorical feature incurs a cost of perturbation of γ^i_f' + γ^i_f̅. All else in our solution method remains the same.To calibrate the uncertain parameters of (<ref>), we again note that due to the second and third constraints in (<ref>), any perturbation of one feature f' ∈ F_c will cause an equal and opposite perturbation in some other feature f̅∈ F_c \{f'}.We assume that the probability of no perturbation in categorical feature c ∈ C of sample i is 1/| F_c|≤ρ^i_c ≤ 1, and that the realizations of all other values are all equally likely. Similar to the bounded integer case (see section <ref>), we place the lower bound of 1/| F_c| on ρ^i_c – this allows for maximal uncertainty on the value of c. 
Then, the probability of the perturbation of categorical feature c ∈ C encoded by the collection of perturbations {ξ^i_f}_f ∈ F_c on the one-hot encoding of c is
ℙ({ξ^i_f}_f ∈ F_c = {ζ^i_f}_f ∈ F_c) := 𝕀[∑_f ∈ F_c |ζ^i_f| = 0]ρ^i_c + 𝕀[∑_f ∈ F_c |ζ^i_f| = 2](1 - ρ^i_c/| F_c| - 1).
As done before, we set up our hypothesis test as a likelihood ratio test with threshold λ^|ℐ|:
∏_i ∈I∏_c ∈Cℙ({ξ^i_f}_f ∈ F_c)/∏_i ∈I∏_c ∈Cℙ({0}_f ∈ F_c)≥λ^|ℐ|.
Plugging (<ref>) into (<ref>) yields
∑_i ∈I∑_c ∈C -log( 𝕀[∑_f ∈ F_c |ξ^i_f| = 0] + 𝕀[∑_f ∈ F_c |ξ^i_f| = 2](1 - ρ^i_c/ρ^i_c(| F_c| - 1))) ≤ -|ℐ| logλ.
Thus, if ∑_f ∈ F_c |ξ^i_f| = 0, i.e., no perturbation occurs, no cost is incurred; if ∑_f ∈ F_c |ξ^i_f| = 2, i.e., a perturbation occurs, then a cost of -log(1 - ρ^i_c/ρ^i_c(| F_c| - 1)) is incurred. We can represent (<ref>) more compactly as
∑_i ∈I∑_f ∈ℱ1/2log(ρ^i_c(| F_c| - 1)/1 - ρ^i_c)|ξ^i_f| ≤ -|ℐ| logλ.
It follows from the above that the tuned hyperparameters in the uncertainty set (<ref>) are γ^i_f = 1/2log(ρ^i_c(| F_c| - 1)/1 - ρ^i_c) and ϵ = - |ℐ| logλ. We note that if c is binary and one-hot encoded into two features following the process described in this section, the tuned uncertainty set parameter values are equivalent to those described in section <ref>.
§.§ Mixed Datasets and Distribution Shifts
Often, datasets have a mixture of unbounded integer, bounded integer, binary, and categorical data, and known directions of distribution shifts vary across features. In such scenarios, we can adapt the models and calibration methods presented in sections <ref>, <ref>, <ref>, and <ref> to create uncertainty sets that capture this information. Consider the following uncertainty set
Ξ_mixed := {ξ∈ℳ :∑_i ∈ℐ∑_f ∈ℱγ^i_f |ξ^i_f |≤ϵ},
where ℳ⊆ℤ^|I| × |F| can be used to place restrictions on the kinds of shifts allowed for each datapoint (e.g., to capture one-sided perturbations, bounded shifts, or categorical features). Letting ρ^i_f denote the probability of certainty in the nominal value of the data, the tuned hyperparameters are ϵ = -|ℐ| logλ and
γ^i_f := log(1/1-ρ^i_f) for f an unbounded integer, log(1/r^i_f) for f a bounded integer, log(ρ^i_f/1-ρ^i_f) for f binary, and 1/2log(ρ^i_f(| F_c| - 1)/1 - ρ^i_f) for f part of the one-hot encoding of a categorical feature c,
where 0 < r^i_f < 1 is found by Lemma <ref> for one-sided bounds or Lemma <ref> for two-sided bounds. The solution method described in section <ref> remains the same, only needing to adapt the subproblem algorithm <ref> to only allow perturbations from the set ℳ.
§ COMPARISON TO STATE-OF-THE-ART METHODS
In this section, we compare the uncertainty sets and notions of robustness used in state-of-the-art models from the literature and in our model for learning robust classification trees. In particular, we examine the approach of <cit.>, which also utilizes an MIO-based model but with a different uncertainty set and concept of robustness. We also compare and contrast to <cit.> and <cit.>, who also use a different uncertainty set and employ a heuristic approach, but who use a similar notion of robustness to ours.
§.§ Uncertainty Sets
The model of uncertainty (<ref>) that we employ differs in several regards from that in previous works on robust decision trees. Indeed, <cit.>, <cit.>, and <cit.> employ a row-wise uncertainty set based on a p-norm, of the form
Ξ_p = {ξ∈ℝ^| I| × | F| :‖ξ^i ‖_p≤ϵ∀ i ∈ I },
where ϵ≥ 0 is a user-selected budget of uncertainty parameter. <cit.> and <cit.> model uncertainty with Ξ_∞ specifically.
On the other hand, <cit.> uses the uncertainty set (<ref>) with any choice of p-norm. We note that in <cit.>, due to the notion of robustness employed, the robust counterpart ends up taking the same form independently of the choice of p, resulting in the same sets of optimal trees independently of the norm used in the uncertainty set.We now argue that the uncertainty set in (<ref>) does not model distribution shifts in the datasets and applications that motivate us. First, the perturbation ξ is not integral, and thus the realization of the covariates may not be realistic if the covariates are integer or categorical. In addition, the set Ξ_p imposes the strong assumption that the distribution shift is rectangular across samples, resulting in an overly conservative model where the perturbations associated with all datapoints can all simultaneously take on their worst-case values. Lastly, the set (<ref>) assumes the same cost of shift (represented by γ^i_f in our model) across all datapoints i ∈ I and features f ∈ F, implying that the magnitude and direction of distribution shifts is constant for all samples and features.In contrast, our proposed uncertainty set (<ref>) fixes the aforementioned issues of uncertainty set (<ref>) by restricting the perturbations to be integer, having a single budget of uncertainty shared among the data samples, and introducing costs of perturbation that can differ for each feature and sample. Thus, our proposed model of uncertainty is more flexible, being able to capture shifts in integer covariates and, as discussed in section <ref>, extending to the cases of one-sided shifts and categorical features. §.§ Notions of Robustness In our problem, the tree structure must be decided before the perturbation of covariates is realized, and only after this realization is observed can we decide if a given datapoint is correctly classified or not.This is similar to the frameworks of <cit.> and <cit.>, who calculate their objective based on an adversarial perturbation of the data. But unlike our approach of optimizing accuracy over the whole tree, these methods use either information gain or Gini impurity as an objective at each node where a test is performed. These approaches, therefore, cannot guarantee optimal worst-case accuracy.On the other hand, <cit.> does have accuracy as an objective, but uses a different notion of robustness. Their model postulates that the robust trees created must maintain the same predictions for each data sample across all possible perturbations of the data. Mathematically, the problem solved by <cit.> is equivalent to equation<ref>_Ξmax_π∈Π_d ∑_i ∈ I𝕀[ π(x^i ) =y^i ] s.t.π(𝐱^i)=π(𝐱^i + ξ^i) ∀ i∈ I, ξ∈Ξ.It follows from the constraints of (<ref>) that it is equivalent to equation<ref>_Ξmax_π∈Π_d min_ξ∈Ξ ∑_i ∈ I𝕀[ π( x^i +ξ^i) =y^i ] s.t.π(𝐱^i)=π(𝐱^i + ξ^i) ∀ i∈ I, ξ∈Ξ.Note that (<ref>) has the same objective function as (<ref>). We formalize the relationship between problems (<ref>) and (<ref>) in the following proposition.Given an uncertainty set Ξ, all optimal solutions of (<ref>) are feasible in (<ref>), but all optimal solutions to (<ref>) may be infeasible in (<ref>).Proof. Note that problem (<ref>) is problem (<ref>) with the additional constraints in (<ref>). Therefore, problem (<ref>) has a feasible region that is a subset of the feasible region of problem (<ref>). Hence, any optimal solution of (<ref>) is a feasible solution of (<ref>).We now present an example where all optimal solutions to (<ref>) are infeasible in (<ref>). 
Consider the dataset with a single feature and nine datapoints given in table <ref>, and assume that the uncertainty set is Ξ = {ξ∈ℝ^| I| × | F| :‖ξ^i ‖_∞≤ 1 ∀ i ∈ I }. The only optimal solution of (<ref>) is given by the function π^⋆(x)=𝕀(x≥ 6), which in the worst case misclassifies the two datapoints with feature values x=5 and x=6. However, π^⋆(x) is not a feasible solution to (<ref>), as the fifth sample has π^⋆(5) = 0, but with perturbation ξ^5 = 1, π^⋆(5 + 1) = 1. This violates the constraints in (<ref>), and thus π^⋆(x)=𝕀(x≥ 6) is not a feasible solution to (<ref>). Thus, this example shows that all optimal solutions to (<ref>) may be infeasible in (<ref>). In the example from the proof of proposition <ref>, the optimal solution of (<ref>), and thus (<ref>), is given by π̅(x)=𝕀(x≥ 2), resulting in the misclassification of the three datapoints with x∈{3,4,5}. We see that the two problems yield different optimal solutions, and that π̅(x) is a more conservative solution that misclassifies more samples than π^⋆(x). Because problems (<ref>) and (<ref>) are equivalent, by proposition <ref>, problem (<ref>) considers a set of robust solutions no smaller than the one considered by (<ref>). Thus, problem (<ref>) may lead to potentially less conservative solutions in comparison to (<ref>), resulting in better performance, as the example in the proof of proposition <ref> shows: (<ref>) considers additional trees with more branching decisions and fewer misclassifications in both the nominal and worst cases. Indeed, as the uncertainty budget ϵ grows, the only feasible tree for (<ref>) that can ensure that data samples always land at the same leaf no matter how the data is perturbed is a tree of depth zero (with no branching node).
§ EXPERIMENTS
In this section, we evaluate our method on various datasets and across uncertainty set parameters. We assess the effect of robustness on accuracy under shifts, accuracy under no shifts (a.k.a. price of robustness) <cit.>, sparsity, and computation times of our approach. We defer to the Electronic Companion <ref> for an empirical comparison of our method to the method of <cit.> for learning robust trees, where we show that our method overall outperforms the method of <cit.> in terms of worst-case and average-case accuracy and confirm empirically the theoretical observations made in section <ref>.
§.§ Setup and Instances
We conduct all our experiments in Python 3.6 using Gurobi 9.0.1 as our MIO solver. All problems are solved on a single core of an Intel Xeon Processor 2640v4 and 4GB of memory with a time limit of 7200 seconds. For instances that do not solve to optimality within the time limit, we return the best feasible tree found within the time limit and record the corresponding optimality gap reported by the solver.
§.§.§ Instances
Each instance of our experiments consists of a choice of uncertainty set parameters, a maximum depth of tree, and a dataset. For each instance's uncertainty set, we utilize the hypothesis testing framework described in section <ref>: for each f ∈ℱ, we choose the probability of certainty ρ^i_f by sampling from a normal distribution with a standard deviation of 0.2, keeping ρ^i_f the same across all i ∈ℐ for each f, and assuming no bounds on any non-categorical features. Note that if the number sampled from the normal distribution is greater than 1 (resp. less than 0), then the value of ρ^i_f is set to 1 (resp. 0). For the means of this normal distribution, we create instances with means of 0.6, 0.7, 0.8, and 0.9.
An instance is also created for different budgets of uncertainty ϵ by setting λ to be 0.5, 0.75, 0.85, 0.9, 0.95, 0.97, and 0.99. For every dataset and uncertainty set, we test with tree depths of 2, 3, 4, and 5. We evaluate on 12 publicly available datasets as listed in Table <ref>. Each dataset contains either integer-valued data, categorical data, or a mixture of both, as detailed in the table. We preprocess datasets with categorical data by one-hot encoding each categorical feature.The number of samples in the datasets range from 124 to 3196 and the number of features from 4 to 36. For each dataset, we randomly split it into 80% training data and 20% testing data once. In total, we experiment on 1536 instances (128 per dataset) each with different uncertainty sets and maximum depths of tree.§.§.§ Generating Shifted Test SetsTo test our method's robustness against distribution shifts, we generate 5,000 different perturbed test sets for each instance. To create each perturbed test set, we independently perturb the original test data based on expected perturbations. That is, for the collection of q^i_f values for every f ∈ℱ used to construct an uncertainty set based off of (<ref>), we perturb each test set based on the symmetric geometric distribution described in (<ref>).In order to measure the robustness of our method against unexpected shifts of the data, we repeat the same process of generating shifted test sets for each instance but with unexpected perturbations: perturbing the test data using values of ρ^i_f different than what we gave our model. We shift each ρ^i_f value down 0.2, then perturb our test data in 5,000 different ways based on these new values of ρ^i_f. We do the same procedure but with ρ^i_f shifted down by 0.1 and up by 0.1. In a similar fashion, we also uniformly sample a new ρ^i_f value for each feature in a neighborhood of radius 0.05 of the original expected ρ^i_f value, and perturb the test data in 5,000 different ways with the new ρ^i_f values. We do the same procedure for the radii of 0.1, 0.15, and 0.2 for the neighborhoods.§.§.§ Learning Non-Robust TreesFor comparison, we use our method to create a non-robust optimal tree by setting the budget of uncertainty to 0 (i.e., λ=1), and tune a regularization parameter for the non-robust tree using a validation set from the training set. The regularization term penalizes the number of branching nodes to yield the modified objective in problem (<ref>):max_𝐛, 𝐰,𝐭R∑_i∈ℐt_i - (1-R)∑_i∈ℐ∑_n∈𝒩∑_Θ(f)b_nfθ,where R ∈ [0,1] is the tuned regularization parameter. To tune the value R, we first randomly hold out 20% of the training set into a validation set. Then, we select various values of R from the set {0.6,0.7,0.8,0.9,0.95,1} and learn a tree using our method with the uncertainty set { 0} and the same specifications as learning our robust tree but instead with a time limit of 1 hour. We find the accuracy of the learned tree on the held-out validation set, and select R with the best accuracy. We then create the non-robust optimal tree using the entire training set and tuned R value.Note that we do not add a regularization parameter to our robust model, as we will empirically show that adding robustness itself has a regularizing (sparsity-promoting) effect. 
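Returning to the generation of shifted test sets described above, the sketch below (assuming NumPy; it is not the code used for the experiments) draws one perturbed copy of an integer-valued test set, with magnitudes following the symmetric geometric distribution (<ref>) and signs chosen uniformly.

import numpy as np

def perturb_test_set(X, rho, rng):
    # X: integer test data of shape (n_samples, n_features); rho: probability of
    # certainty per feature.  Magnitudes follow P(|xi| = k) = rho*(1-rho)^k.
    n, m = X.shape
    # numpy's geometric distribution has support {1, 2, ...}; shift by one so the
    # magnitude starts at 0 with probability rho
    mag = rng.geometric(p=np.broadcast_to(rho, (n, m))) - 1
    sign = rng.choice([-1, 1], size=(n, m))
    return X + sign * mag

rng = np.random.default_rng(0)
# shifted_sets = [perturb_test_set(X_test, rho, rng) for _ in range(5000)]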
§.§ Effect of RobustnessWe now examine our method's robustness to distribution shifts across different uncertainty sets in comparison to a non-robust model.§.§.§ Accuracy Under Distribution Shifts For each instance, we measure the worst-case (lowest) and average accuracy across all perturbed test sets. Our results are summarized in Figure <ref>, which shows across all instances the distributions of the difference in worst-case and average accuracies between the learned robust and non-robust trees. From the figure, we see that the robust model in general has higher worst-case and average-case accuracies than the non-robust model when there are distribution shifts in the data. We also see that there is a range of values of λ (namely between 0.75 and 0.9) that perform well over other values of λ in terms of both worst-case and average case accuracy. This shows us that if the budget of uncertainty is too small, then we do not allow enough room to hedge against distribution shifts in our uncertainty set. But if the budget of uncertainty is too large, then we become over-conservative and lose some accuracy in both expected and unexpected perturbations of our test data. We also see that there is little difference between the gains in accuracy in instances where the perturbation of our data is as we expect versus when the perturbation is not as we expect. This indicates that even if we misspecify our model, we still obtain a classification tree robust to distribution shifts within a reasonable range of the expected shift. Overall, we see that an important factor in determining the performance of our model is the budget of uncertainty, which can be conveniently tuned to create an effective robust tree. §.§.§ Price of Robustness We also measure the decrease in accuracy from using a robust tree versus a non-robust tree under the case of no distribution shift in the test set, i.e., the price of robustness <cit.>, and summarize this metric in Figure <ref>. From the figure, we observe that in the range of λ values that perform well in terms of accuracy (i.e., between 0.75 and 0.9, see section <ref>), we have an average price of robustness between 0.05 and 0.1. As the level of robustness increases (with a decreasing λ), the higher the price of robustness. This is expected with our model because a larger budget of uncertainty leads to larger deviations away from the nominal test set considered, and thus the worst-case distribution shift may look very different from the nominal test set if more drastic shifts are considered. §.§.§ The Effect of Robustness on Sparsity To evaluate the sparsity of our model in relation to the robustness of the tree, we measure the number of branching nodes in each instance. Our results are summarized in Figure <ref> Overall, as the size of the uncertainty set increases with a smaller λ, the number of branching nodes decreases. Namely, the median number of branching nodes is 4 for instances with λ∈{0.95, 0.97, 0.99}, 3 for instances with λ = 0.9, 2 for instances with λ∈{0.75, 0.85}, and 0 for instances with λ =0.5 This regularizing behavior of robustness is expected with our model: with more branching nodes in the learned model, the more opportunities there are for a low-cost perturbation of each sample in a given tree, yielding a lower worst-case accuracy. 
Thus, as the number of perturbations allowed expand, the number of branching nodes in the learned model decreases to yield a more favorable worst-case accuracy.§.§.§ Computational Times We summarize the computation times across all instances in Figure <ref>. For any fixed λ, the variations in computation times and optimality gaps across instances are due to differences in the maximum depth of tree, the number of data samples, the number of features, and the range of values for each feature within each instance. In general, a larger uncertainty set (smaller λ) leads to a larger optimality gap for a fixed time limit, as there are more constraints to add to the master problem to reach optimality. In particular, the number of instances that have an optimality gap larger than 50% are 72, 32, 7, and 0 for λ = 0.5, 0.85, 0.95, and 1, respectively. We also observe that there is a large gap in number of instances solved fully to optimality within the time limit between the non-robust instances and the robust instances, with 48 more instances solved to optimality in the non-robust case in comparison to the robust case with λ = 0.95. In addition, the number of robust instances able to be solved to optimality within the time limit are fairly close, with only 7 instances between the λ = 0.95 case and the λ = 0.85 and λ = 0.5 cases.As mentioned in section <ref>, we add additional constraints of the form (<ref>) to our model in order to improve on computation times. We test the magnitude of this improvement by comparing our algorithm that adds constraints (<ref>) against the algorithm that does not add constraints (<ref>) at each subproblem. We run such algorithms on the instances with the , , anddatasets, and summarize the computation times and optimality gaps in Figure <ref>. We observe that 96 out of the 384 instances are solved within the time limit on the algorithm without added constraints (<ref>). Our algorithm with additional constraints (<ref>) is able to solve the same number of instances within 32 seconds, corresponding to a speedup of 7200/32 = 225 times.§ ACKNOWLEDGMENTSN. Justin is funded in part by the National Science Foundation Graduate Research Fellowship Program. P. Vayanos and N. Justin are funded in part by the National Science Foundation under CAREER grant 2046230. A. Gómez is funded in part by the National Science Foundation under grants 1930582 and 2152777. They are grateful for the support. informs2014E-Companion§ ITERATING OVER PATHS IN PROBLEM (<REF>) We will now show why the assumption that the cut-set 𝐪^i incident to the path of the data sample after perturbation minimizes (<ref>) for given ξ through the following proposition.For optimal solutions of (<ref>), one of the following two statements holds for each sample i ∈ℐ * The source set of a minimum cut is {s}, that is, q^i_s, 1=1 and q^i_n, l(n)=q^i_n, r(n)=0 for all n∈ N. In this case t^i=1.* The source set of a minimum cut is a path from s to a prediction node with a label other than y^i, that is, w̅_n,y^i=0. In this case t^i=0.Proof of Proposition <ref> First observe that setting q^i_s, 1=1 and all remaining variables to zero indeed satisfies constraints (<ref>)-(<ref>), and the associated objective value is t^i=1. Suppose the first statement in the proposition does not hold and this solution is not optimal. 
Since, for all values of ξ^i, the arc capacities are either 0 or 1, it follows that the objective values corresponding to optimal solutions is t^i=0.Note that if a prediction node n with label y^i (w̅_n,y^i=1) is in the cut (q_n,t^i=1), then t^i≥ 1, and no such solutions can be optimal. If a branching node n(b̅_nfθ=1 for some feature f and level θ) is in the cut but none of its descendants are (q_n,l(n)^i=q_n,r(n)^i=1), then 𝕀[x^i_f + ξ^i_f ≤θ]q^i_n, l(n)+𝕀[x^i_f + ξ^i_f ≥ 1+θ]q^i_n, r(n)=1 for all values of ξ_f^i. Thus in this case t^i≥ 1 and no such solutions can be optimal. If a branching node is in the cut and both of its descendants are as well (q_n,l(n)^i=q_n,r(n)^i=0), then depending on the value of ξ^i, either 𝕀[x^i_f + ξ^i_f ≤θ]=0 or 𝕀[x^i_f + ξ^i_f ≥ 1+θ]=0. In the first case, one may set q^i_n, l(n)=1 (and set additional cutset variables on the left subtree to zero), recovering a solution with the same (or less) cost; the second case is analogous. Therefore, we find that if t^i=0, then there exists an optimal solution that is a path. Due to proposition <ref>, we only need to solve (<ref>) over all possible paths 𝒪(| L|) for each sample to solve the subproblem and obtain a violated constraint (<ref>) to add back to the main problem.§ PROOFS OF LEMMAS <REF> AND <REF> In this section, we prove Lemmas <ref> and <ref>, which provide a method of finding an r^i_f that is used to tune the hyperparameters of the uncertainty set when there are known bounds to the integer features.Proof of Lemma <ref> To find an r^i_f that makes (<ref>) a probability mass function, we solve the infinite polynomial equation∑_ξ^i_f = L_f-x^i_f^∞ρ^i_f (r^i_f)^|ξ^i_f| = 1,and choose the solutions such that 0 < r^i_f < 1. We can regroup terms in the left hand side of the above sum to getρ^i_f + 2∑_ξ^i_f = 1^x^i_f - L_fρ^i_f (r^i_f)^ξ^i_f +∑_ξ^i_f = x^i_f - L_f + 1^∞ρ^i_f (r^i_f)^ξ^i_f = 1.Then, using the closed form of both a finite and infinite geometric series, an equivalent finite polynomial equation to solve problem (<ref>) isρ^i_f + 2(ρ^i_fr^i_f(1 - (r^i_f)^x^i_f - L_f)/1 - r^i_f) +(ρ^i_f(r^i_f)^x^i_f - L_f/1 - r^i_f)= 1⇔ ρ^i_f - ρ^i_fr^i_f + 2ρ^i_fr^i_f - 2ρ^i_f(r^i_f)^x^i_f - L_f+1 + ρ^i_f(r^i_f)^x^i_f - L_f + 1 = 1 - r^i_f⇔ ρ^i_f(r^i_f)^x-L+1 - (ρ^i_f + 1)r^i_f + 1 - ρ^i_f= 0which is exactly (<ref>).Now we show that a solution 0 < r^i_f < 1 exists in the above equation. Letf(r^i_f) := ρ^i_f(r^i_f)^x-L+1 - (ρ^i_f + 1)r^i_f + 1 - ρ^i_f,i.e. the function on the left-hand side of (<ref>) that we wish to find real roots of. Thus, f(0) = 1 - ρ^i_f and f(1) = -ρ^i_f. Since 0 < ρ^i_f < 1, f(0) > 0, and f(1) < 0, by Bolzano's Theorem, there exists a solution r^i_f to (<ref>) such that 0 < r^i_f < 1. 
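As a quick numerical check of the lemma (a sketch with arbitrary illustrative values of ρ^i_f, x^i_f and L_f, assuming NumPy/SciPy), one can recover r^i_f by bisection and verify that the truncated symmetric geometric distribution sums to one over its support:

import numpy as np
from scipy.optimize import brentq

rho, x, L = 0.7, 3, 0  # illustrative values only
f = lambda r: rho * r ** (x - L + 1) - (rho + 1.0) * r + 1.0 - rho
r = brentq(f, 1e-12, 1.0 - 1e-12)      # root in (0,1), guaranteed by the proof above
xi = np.arange(L - x, 2000)            # truncate the infinite upper tail
total = np.sum(rho * r ** np.abs(xi))
print(r, total)                        # total is approximately 1 up to truncation error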
Proof of Lemma <ref> To find an r^i_f that makes (<ref>) a probability mass function, we solve the polynomial equation
∑_ξ^i_f=L_f - x^i_f^U_f - x^i_fρ^i_f(r^i_f)^|ξ^i_f| = 1.
Let M = max{U_f - x^i_f, x^i_f - L_f} and m = min{U_f - x^i_f, x^i_f - L_f}. We can regroup terms in the left-hand side of the above sum to get
ρ^i_f + 2∑_ξ^i_f = 1^mρ^i_f(r^i_f)^ξ^i_f + ∑_ξ^i_f = m + 1^Mρ^i_f(r^i_f)^ξ^i_f = 1.
Then, using the closed form of a finite geometric series, (<ref>) takes the simplified form of
ρ^i_f + 2(ρ^i_fr^i_f(1 - (r^i_f)^m)/1 - r^i_f) + (ρ^i_f(r^i_f)^m+1(1 - (r^i_f)^M - m)/1 - r^i_f)= 1⇔ ρ^i_f - ρ^i_fr^i_f + 2ρ^i_fr^i_f - 2ρ^i_f(r^i_f)^m+1+ ρ^i_f(r^i_f)^m+1 - ρ^i_f(r^i_f)^M +1 = 1 - r^i_f⇔ ρ^i_f(r^i_f)^M+1 + ρ^i_f(r^i_f)^m+1 - (ρ^i_f + 1)r^i_f + 1 - ρ^i_f= 0,
which is exactly (<ref>). We now show that a real-valued r^i_f ∈ (0,1) exists. Let f(r^i_f) be the function on the left-hand side of (<ref>),
f(r^i_f) := ρ^i_f(r^i_f)^M+1 + ρ^i_f(r^i_f)^m+1 - (ρ^i_f + 1)r^i_f + 1 - ρ^i_f,
that we wish to find real roots of. Thus, f(0) = 1 - ρ^i_f and f(1) = 0. The first derivative of f is
f'(r^i_f) = (M+1)ρ^i_f(r^i_f)^M + (m + 1)ρ^i_f(r^i_f)^m - (ρ^i_f + 1).
Thus, f'(1) = (M+m+1)ρ^i_f - 1 = (U_f - L_f + 1) ρ^i_f - 1. For ρ^i_f > 1/U_f - L_f + 1, we see that f'(1) > 0. Thus, there must exist an r̅∈ (0,1) such that f(r̅) < 0. And since f(0) > 0, by Bolzano's Theorem, there must exist an r^i_f ∈ (0,1) that satisfies f(r^i_f) = 0.
§ COMPARISON TO ALTERNATIVE ROBUST METHOD
We empirically analyze the performance of our method against the method of <cit.>; see section <ref> for comparisons between each model. For simplicity, we will refer to our method as and the method of <cit.> as in this section.
§.§ Setup
For fair comparison to the method of <cit.>, we only evaluate on instances corresponding to datasets in Table <ref> with only binary features (i.e., and ). For datasets with non-binary features, cannot tackle problems with the uncertainty sets we consider in this paper. To adapt our uncertainty set to a comparable uncertainty set in the formulation, we first transformed the uncertainty set used in the above experiment (i.e., the uncertainty set (<ref>) tuned using (<ref>)) into the uncertainty set (<ref>) for p = ∞. For each instance of our original problem, let ρ be the average probability of certainty used to generate the ρ^i_f values in the experiments on our methods. Then, (<ref>) becomes for each i ∈ℐ
∑_f ∈ F |ξ^i_f| ≤ -logλ/log(ρ/1-ρ).
This matches the form of (<ref>) for p = 1, and therefore can be used to define the parameters of (<ref>) and have a more similar comparison to our methodology. Additionally, we apply no regularization term for all experiments and set a time limit of 7200 seconds, which is the same as in our formulation.
§.§ Accuracy Under Distribution Shifts
We analyze the worst-case and average-case accuracies on the same 5000 perturbed datasets for each instance of and its analogous instance of . The results for the worst-case and average case on expected perturbations are shown in Figure <ref>. We see that on binary datasets, there is on average a decrease in worst-case performance in the method across all values of λ in comparison to the non-robust tree, whereas has on average a favorable gain in worst-case performance on the same instances. Specifically, returns a higher gain in worst-case performance than that of by up to about 15% on average in the worst case.
In addition, the average-case performance of suffers in comparison to the non-robust tree on average, whereas on average remains on par with the average-case performance of a non-robust tree in the instances where λ is 0.75 or above. These results match the conclusions from Proposition <ref>, where is more conservative than , potentially causing poor performance in testing.
§.§.§ Price of Robustness
For both and , we measure the decrease in accuracy from using a robust tree versus a non-robust tree under the case of no distribution shift in the test set, i.e., the price of robustness <cit.>, and summarize this metric in Figure <ref>. From the figure, we observe that for each value of λ, has an average price of robustness of less than 0.25. In contrast, has an average price of robustness of about 0.4 for all λ.
§.§ The Effect of Robustness on Sparsity
For each value of λ in Figure <ref>, we compare the number of branching nodes of against the method of , aggregating across each comparable instance between the two methods. We first note that the number of branching nodes of remains relatively consistent for values of λ above 0.75, up until the budget of uncertainty becomes too large (i.e., λ = 0.5), where many instances yield a tree with no branching nodes. As previously mentioned, the method of <cit.> constrains every sample to have the same predictions across all realizations of the covariates. So with a large enough uncertainty set, all samples are given the majority label regardless of the covariates, which is the same as yielding a tree with no branching nodes. In comparison, we see that does not have the same behavior as , as the number of branching nodes drops smoothly from a smaller uncertainty set to a larger uncertainty set. The tree structure changes through variations in the parameters of uncertainty more consistently in in comparison to , suggesting that the trees generated by are closely tailored to the specific distribution shifts they hedge against.
§.§ Computational Times
Figure <ref> shows the computation times and optimality gaps of all instances of in comparison to the corresponding instances of . We see that about two times more instances of can be solved within the time limit in comparison to . However, for instances that could not be solved to optimality in the time limit, the optimality gap overall is much smaller in instances of than in instances of . There are several reasons for this. In , instances that resulted in no branching nodes were solved very quickly, as the only feasible trees were zero-depth ones. For other instances of where the uncertainty set did allow for several feasible solutions, the convergence to optimality is noticeably slower than that of , where nearly all instances of are within fifty percent of the optimality gap. This suggests that in instances where a nonzero-depth tree is the optimal tree, there is a greater computational cost of learning trees with than with .
http://arxiv.org/abs/2310.17772v1
{ "authors": [ "Nathan Justin", "Sina Aghaei", "Andrés Gómez", "Phebe Vayanos" ], "categories": [ "cs.LG", "math.OC", "stat.ML" ], "primary_category": "cs.LG", "published": "20231026203729", "title": "Learning Optimal Classification Trees Robust to Distribution Shifts" }
[email protected] International Research Centre Magtop, Institute of Physics, Polish Academy of Sciences, Aleja Lotników 32/46, PL-02668 Warsaw, [email protected] International Research Centre Magtop, Institute of Physics, Polish Academy of Sciences, Aleja Lotników 32/46, PL-02668 Warsaw, PolandInternational Research Centre Magtop, Institute of Physics, Polish Academy of Sciences, Aleja Lotników 32/46, PL-02668 Warsaw, Poland College of Science, Guilin University of Technology, Guilin 541004, People’s Republic of China. Dipartimento di Fisica ’E.R. Caianiello’, Universitá degli Studi di Salerno, via Giovanni Paolo II 132, I-84084 Fisciano (SA), Italy Dipartimento di Fisica ’E.R. Caianiello’, Universitá degli Studi di Salerno, via Giovanni Paolo II 132, I-84084 Fisciano (SA), Italy [email protected] Consiglio Nazionale delle Ricerche CNR-SPIN, UOS Salerno, I-84084 Fisciano (Salerno), Italy Dipartimento di Fisica ’E.R. Caianiello’, Universitá degli Studi di Salerno, via Giovanni Paolo II 132, I-84084 Fisciano (SA), Italy [email protected] International Research Centre Magtop, Institute of Physics, Polish Academy of Sciences, Aleja Lotników 32/46, PL-02668 Warsaw, Poland Using first-principle calculations, we investigate the electronic, topological and superconducting properties of Nb_3X (X = Ge, Sn, Sb) and Ta_3Y (Y = As, Sb, Bi) A15 compounds. We demonstrate that these compounds host Dirac surface states which are related to a nontrivial ℤ_2 topological value. The spin-orbit coupling (SOC) splits the eightfold degenerate R point close to the Fermi level enhancing the amplitude of the spin Hall conductance. Indeed, despite the moderate spin-orbit of the Nb-compounds, a large spin Hall effect is also obtained in Nb_3Ge and Nb_3Sn compounds. We show that the Coulomb interaction opens the gap at the R point thus making more evident the occurrence of Dirac surface states. We then investigate the superconducting properties by determining the strength of the electron-phonon BCS coupling. The evolution of the critical temperature is tracked down to the 2D limit indicating a reduction of the transition temperature which mainly arises from the suppression of the density of states at the Fermi level. Finally, we propose a minimal tight-binding model based onthree coupled Su-Schrieffer-Heeger chains with t_2g Ta- and Nb-orbitals reproducing the spin-orbit splittings at the R point among the π-bond bands in this class of compounds. We separate the kinetic parameters in π and δ-bonds, in intradimer and interdimer hoppings and discuss their relevance for the topological electronic structure. We point out that Nb_3Ge might represent a ℤ_2 topological metal with the highest superconducting temperature ever recorded.71.15.-m, 71.15.Mb, 75.50.Cc, 74.40.Kb, 74.62.FjDirac surface states, multiorbital dimerization and superconductivity in Nb- and Ta-based A15 compounds Carmine Autieri January 14, 2024 =========================================================================================================§ INTRODUCTIONTopological superconductivity is a captivating phase of condensed matter physics characterized by unique electronic properties such as the Majorana fermions<cit.>. These particles, distinct for their non-Abelian statistics, hold promise for fault-tolerant quantum computation. This enigmatic phase of matter has ignited a surge of research, with potential applications spanning quantum computing to quantum information storage. 
Understanding and harnessing topological superconductivity holds the key to obtaining new quantum technologies. Therefore, the interest in topological superconductivity has been growing over the last decade <cit.>. Topological superconductivity can arise from the coexistence of Bardeen–Cooper–Schrieffer (BCS) superconductivity and Dirac states, which can lead to mixed pairing order parameters or topological superconductivity; therefore, the Dirac surface states of superconductors are platforms for investigating the interplay between superconductivity and topologically nontrivial Fermi surfaces<cit.>. The most common k-space topological phase is characterized by a non-zero ℤ_2 topological invariant. ℤ_2 topological insulators have a gapped bulk band structure and gapless surface states. These surface states are protected by time-reversal symmetry. ℤ_2 topological metals are conducting materials with gapless bulk band structures and gapless surface states<cit.>. In recent years, different families of materials have been proposed to be ℤ_2 topological metals with a superconducting ground state. Some of the most representative members of these families are the kagome compound CsV_3Sb_5<cit.>, the non-symmorphic ZrOSSi<cit.> and KHgAs compounds<cit.>, and the van der Waals material Ta_2Pd_3Te_5<cit.>. Dirac surface states were also found in several undoped iron-based systems such as BaFe_2As_2 and LiFeAs<cit.>. The Nb-based A15 compounds were widely studied in the past due to their superconductivity with a high critical temperature (T_c). The superconductivity in Nb-based A15 compounds was found to be BCS-like <cit.>, namely the pairing of the superconducting electrons is via electron–phonon coupling. The Fermi level of these compounds is close to a peak in the density of states deriving from dimerized one-dimensional Nb chains. In silicides and germanides of transition metals, the highest T_c was found in V_3Si among all the known binary compounds <cit.>. The A15 compounds claimed the title of the highest-T_c superconductors in 1954, when T_c = 18 K was first observed in Nb_3Sn<cit.>. Other Nb-based superconductors were then found, for example Nb_3Al with T_c=18.8 K<cit.>, Nb_3Ga with T_c=20.3 K<cit.> and Nb_3Ge with T_c=22.3 K<cit.>. Recently, several Nb-based compounds were investigated for their exotic superconducting properties, such as the van der Waals NbX_2 (X=S, Se)<cit.> and the non-centrosymmetric NbRe<cit.>. Both theoretical and experimental investigations have extensively explored the properties of A15 compounds <cit.> based on niobium and tantalum, due to their high critical temperature and high critical magnetic fields. After the discovery of the unconventional high-T_c superconductivity in heavy fermions and cuprates, the superconducting phase of the A15 compounds received less scientific attention. Recently, the A15 compounds regained attention due to the large degeneracy at the R point: these compounds are multifold fermion metals<cit.> with a notable spin-Hall effect<cit.> and non-trivial band structure topology<cit.>. Dirac points are emergent along the R–M path due to the C_4 rotational symmetry.<cit.> The large spin-Hall conductivity in these compounds is due to the fact that they have bands close to the Fermi level that present crossings unprotected under the action of the spin-orbit coupling interaction (SOC) <cit.>. The Ta-based A15 compounds have the same filling as the Nb-based A15 compounds, with Ta having a larger SOC.
The Ta_3Sb compound with A15 crystal structure was proposed to be a topological superconductor<cit.>.Regarding the realization of devices, Nb_3Sn superconductors have significant applications in constructing high-field magnets<cit.>. Nb_3Sn can be used as a coating for producing superconducting surfaces<cit.> and for particle accelerators<cit.>.Nb_3Sn thin films are promising candidates for future applications in superconducting radio frequency cavities<cit.>.In this paper, we study the electronic, topological and superconductive properties of Nb_3X (X = Ge, Sn, Sb) and Ta_3Y (Y = As, Sb, Bi) A15 compounds and we demonstrate that all these compounds are ℤ_2 topological metals hosting Dirac surface states. We study the interplay between the electronic and topological properties with the spin-Hall and BCS superconductivity. These topological properties can be explained by a tight-binding model with three coupledSu–Schrieffer–Heeger (SSH) chains. Nb_3Sb and Ta_3Y (Y = As, Sb, Bi) have half-filling p- and d-orbitals, while Nb_3Ge and Nb_3Sn have one electron less. The paper is organized as follows. In the next Section, the results of our ab initio calculations are reported. In more detail, this Section is divided into many Subsections: in Subsection A the structural and electronic properties of Nb_3X and Ta_3Y are investigated, in Subsection B we study the Spin Hall conductivity, while in Subsections C and D we discuss the topological properties for the Ta-based and Nb-based compounds, respectively. Subsection E is devoted to the superconductivity, while Subsection G is dedicated to the thickness-dependent density of states. In Section III, we report our tight-binding model composed of the three coupled chains of the SSH model with t_2g orbital basis. Finally, Section IV is devoted to the discussion, conclusions and outlook. § RESULTS§.§ Structural and Electronic properties of Nb_3X and Ta_3Y A15 compounds are governed by the Pm3n (No. 223) space group which exhibits an intermetallic nature arising from a chemical composition of A_3B, where site A is occupied by a transition metal/d-block element and site B is occupied by the p-block element. The crystal structure presented in Fig. <ref>(a) is a typical unit cell of an A15 compound with inversion symmetry containing eight atoms with site A forming one-dimensional chains along the edges which are orthogonal to neighboring faces and the B site forming a body-centered cubic lattice. The presence of spatial inversion symmetry is due to the non-symmorphic space group governing the system which involves a screw axis in the [001] crystal direction. The A sites in A_3B composition occupy 6c Wyckoff positions (0.25,0.00,0.50) and the B sites occupy 2a Wyckoff positions (0.00,0.00,0.00). The crystal structure presents three dimers of the A atoms, along the a, b and c axes, as shown in Fig. <ref>(a). A typical slab of A15 compound periodic in [001] crystal direction is presented in Fig. <ref>(c) with surface Brillouin zone highlighted in Fig. <ref>(b). This exposes two unique surfaces, the top surface originates from the A_2 atomic arrangement and the bottom surface originates from the AB atomic arrangement. 
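As an illustration, the A15 cell described above can be generated directly from the space group and the Wyckoff positions. The following is a minimal sketch assuming the ASE package (which is not part of the computational framework of this work), using Nb_3Sn and the relaxed lattice constant quoted in the next paragraph as an example.

from ase.spacegroup import crystal

a = 5.324  # angstrom, relaxed lattice constant of Nb3Sn (see below)
nb3sn = crystal(symbols=['Nb', 'Sn'],
                basis=[(0.25, 0.00, 0.50),   # 6c site: Nb dimers/chains along the cube edges
                       (0.00, 0.00, 0.00)],  # 2a site: bcc sublattice of Sn
                spacegroup=223,              # Pm-3n
                cellpar=[a, a, a, 90, 90, 90])
print(nb3sn.get_chemical_formula())          # expected: Nb6Sn2 (eight atoms per cell)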
The optimized lattice constants (a) after structural relaxation for Nb_3Ge, Nb_3Sn and Nb_3Sb are 5.177 Å, 5.324 Å and 5.303 Å, respectively, in agreement with the literature <cit.>. The optimized lattice constants for Ta_3As, Ta_3Sb and Ta_3Bi are 5.203 Å, 5.329 Å and 5.394 Å, respectively. The computational framework is described in Appendix B. In Figs. <ref> and <ref> the electronic structures of Ta_3Y and Nb_3X compounds are shown, respectively. Both band structures host fourfold rotational symmetries. This is due to the presence of non-symmorphic symmetry operations involving fourfold rotations and fractional translations with respect to the [001] crystal directions. Apart from the symmetries due to time-reversal and spatial inversion symmetry, we observe additional degeneracies in the momentum space. Due to the non-symmorphic symmetries, we have 4-fold and 8-fold degeneracies at the R point. As we demonstrate in Appendix A, in the low-energy sector at the R point two groups of bands are present: the bands related to the π and δ intradimer bonds. In Figs. <ref>(a,b,c) and <ref>(a,b,c) the band structures in the range between -1 eV and 1 eV are shown; at the R point we can see in this range the π-bond bands. The δ-bond bands are around 1.5 eV below the Fermi level. The parabolic band appearing at the Γ point for the Ta-based compounds is the 6s band of Ta. The 6s band crosses the p-d bands for Ta_3As and Ta_3Sb and increases the density of states. However, at the R point in the momentum space, we have mild variations in Ta_3Y and Nb_3X compounds, with eigenvalues that are four and eight times degenerate, as shown in Figs. <ref> and <ref>, in agreement with the literature. While in Ta-based compounds the strong SOC at the R point opens the gap between the π-orbital bands, as we can see in Fig. <ref>(a,b,c), in the Nb-based compounds there is a smaller splitting at R, but the bands with camelback shape around R are still crossing, keeping the conduction and valence sectors entangled, as shown in Figs. <ref>(a,b,c). The crystal symmetries are responsible for orbital hybridizations in the compounds; once we apply the SOC, the anticrossings strongly contribute to the Berry curvature and spin Berry curvature. The multiple crossings observed in the electronic structures give rise to large spin Berry curvatures, which effectively translate to a large spin Hall effect in such compounds since the Fermi level lies within gapped crossings. At the Fermi level, we observe a high density of states with large contributions (due to multiple crossings) from the transition metals Ta or Nb and minor contributions from the other constituent elements of the A_3B composition, as shown in Appendix C. In the momentum space, the conduction band minima at the Γ point originate from the s-orbitals of the group III elements or pnictogens in the A_3B composition, while the valence band maxima at the Γ point originate from the d-orbitals of the transition metals. Since these dispersions vary between the Ta_3Y and Nb_3X compositions, their relative positions define the density of states at the Fermi level and in turn the superconducting critical temperature (T_c) of the system. In Ta_3As and Ta_3Bi, the conduction band minima with s-orbital contributions are below the valence band maxima with d-orbital contributions at the Γ point, as evident from Fig. <ref>(a,c). However, in the case of Ta_3Sb and Nb_3X, the conduction bands and valence bands are well separated throughout the Brillouin zone, as evident from Fig. <ref>(b) and Fig.
<ref>, respectively. In this last case, the s-orbital contributions are farther away from the Fermi level as compared to the Ta_3Y family. The well-resolved band manifolds in Ta_3Sb and Nb_3X are accompanied by band inversions across the Fermi level which could produce topological properties. We will see that the presence of the s-orbitals band produces additional anticrossings and additional bands that increase the spin Hall conductivity (SHC) and T_c respectively. §.§ Spin Hall conductivity It is evident from the electronic structures of these compounds that they host multiple crossings and anticrossings, therefore we have a large change in the spin Berry curvature which indicates that the spin Hall effect should be large. It is known that the SHC is inversely proportional to the spin-orbit induced gap. Accordingly, in the case of Ta_3As, Ta_3Sb and Ta_3Bi (spin-orbit induced gap in electronic structure in increasing order) we find that the SHC at the Fermi level is - 1492.8 (ħ/e) Scm^-1, - 1423.86 (ħ/e) Scm^-1 and - 1320.2 (ħ/e) Scm^-1 respectively (which has decreasing trend as compared to the increasing order of the spin-orbit induced gap). The SHC of Ta_3As and Ta_3Bi close to the Fermi level are shown in Fig. <ref>. The SHC exhibits a wide peak that encompasses the energy range where the gapped crossings are located. In both cases of Ta_3As and Ta_3Bi the peak is very close to the Fermi level since the gapped crossings are located near the Fermi level, as shown in Fig. <ref>(a,c). A similar trend is observed in the case of Nb_3X (with group III elements) i.e., for Nb_3Ge and Nb_3Sn (with the spin-orbit induced gap in electronic structure in increasing order) we have SHC at the Fermi level of - 1691.4 (ħ/e) Scm^-1 and - 983.1 (ħ/e) Scm^-1 respectively. In the composition of Nb_3X, Nb_3Sb is an outlier with significantly low and positive SHC of 155.3 (ħ/e) Scm^-1 since it is a pnictogen substitution as compared to the other two which are group III elements. Large values of the SHC are usually associated with strong orbital texture. Indeed, the A15 systems host a strong orbital texture.<cit.> Even if the orbital moment is zero, since the compounds present inversion symmetry and highly symmetric crystal structure, the breaking of the inversion symmetry at the surface will generate an orbital magnetic moment. This makes the study of the surface states of A15 compounds interesting. §.§ Topological properties of Ta_3Y (Y = As, Sb, Bi) In this subsection, we discuss first the topological properties of the Ta_3Sb that is the ideal case, later, we discuss the topological properties of Ta_3As and Ta_3Bi with the presence of the s-band at the Fermi level. Since the Ta_3Sb compound shows well-resolved band manifolds, we compute the surface states projected on [001] crystal direction as presented in Fig. <ref>(e). Clearly, this compound hosts spin-momentum locked surface states with Dirac dispersions at Γ point. We also represent the corresponding slab band structure in Fig. <ref>(h), which shows that the Dirac dispersion at Γ originates from the top surface layers. Although these are slightly away from the Fermi level, one can realize them at the Fermi level in experimental conditions by varying the carrier concentrations. Albeit, as the band manifolds are well resolved in the case of Ta_3Sb, we compute the ℤ_2 topological invariants using the Wilson loop method around the Wannier charge centers. 
Therefore, for Ta_3Sb, the four ℤ_2 3D topological invariants are(ν_0,ν_1ν_2ν_3)=(1;000) indicating a strong topological insulator character. Since the conduction bands and valence bands are degenerate at the Γ point due to the presence of the 6s band in the case of Ta_3As and Ta_3Bi, we do not calculate the ℤ_2 invariants for these compounds. Although ℤ_2 is not well-defined, from the band structure we can see the Dirac surface states for Ta_3As and Ta_3Bi as shown in Figs. <ref>(d,g) and <ref>(f,i), respectively, where we show both the surface states and the slab band structures. However, when we include the 6s band in the tight-binding model with the Wannier basis, the Dirac surface states are blurred by the hybridization with the 6s band (see Appendix E for more details). The Dirac surface states would be difficult to detect in Ta_3As and Ta_3Bi, while they should be observable in all other compounds investigated in this paper. §.§ Topological properties of Nb_3X (X = Ge, Sn, Sb): effects of Coulomb repulsion Although ℤ_2 invariants are not well defined for Nb_3X compounds in the absence of Coulomb repulsion (U), from the band structures, we can see the Dirac surface states as shown in Fig. <ref>(d,g) for Nb_3Ge, <ref>(e,h) for Nb_3Sn and <ref>(f,i) for Nb_3Sb. To investigate further, we perform DFT + U calculations for Nb_3Ge, Nb_3Sn, and Nb_3Sb. The band structures within DFT + U are shown in Figs. <ref>, <ref> and <ref>. The Coulomb repulsion opens a global gap in the momentum space which is evident from the evolution of bulk band structures for different values of U as shown in Figs. <ref>(a) for Nb_3Ge, <ref>(a) for Nb_3Sn and <ref>(a) for Nb_3Sb. The corresponding surface states are instead shown in Figs. <ref>(b) for Nb_3Ge, <ref>(b) for Nb_3Sn and <ref>(b) for Nb_3Sb. As we can see from the surface states, for all three compounds, the Dirac point at Γ is buried in the bulk at U = 0 eV, while already at U=2 eV it is clearly visible and at U = 4 eV the band gap is opened making the calculation of four ℤ_2 invariants possible.After the opening of the gap, the calculation of ℤ_2 is well-defined and we obtain the four ℤ_2 topological invariants (ν_0,ν_1ν_2ν_3)=(1;000), showing that all these Nb-based compounds are ℤ_2 topological metals hosting Dirac surface states similarly to Ta_3Sb compound. This is a clear signature of non-trivial topological states appearing not only in heavy Ta-based compounds but also in Nb-based compounds.§.§ Bulk superconductivity in Nb_3X and Ta_3Y Generally, superconductivity requires metallic states at the Fermi level whereas topological insulators are gaped due to spin-orbit interactions, and they present conducting surface states. Hence, finding a stoichiometric composition where bulk superconductivity and topological surface states coexist is a tough task i.e., the surface states should lie at the Fermi level while the bulk remains fully gaped superconductor in the critical temperature regime. 
Typically, in such systems, the Dirac points exist farther from the Fermi level in the conduction bands, making them challenging to observe in experiments like angle-resolved photoemission spectroscopy (ARPES). Several compounds have been investigated to this effect, with A15 compounds being no exception due to their metallic character.<cit.> Studies have been dedicated to Ta_3Sb as a potential candidate for topological superconductivity due to the presence of well-resolved spin-orbit induced band manifolds and a superconducting critical temperature of 0.7 K.<cit.> We revisit this compound and find that, in agreement with the previous studies, the Dirac dispersions in the surface states are around 500 meV away from the Fermi level in the conduction bands, as evident in Fig. <ref>(e), with a superconducting critical temperature of 0.81 K. However, these Dirac dispersions on the top surface merge with the s-bands of the pnictogens and on the bottom surface merge with the d-bands of Ta at the Γ point in momentum space. Hence, it is highly unlikely that Ta_3Sb will become a topological superconductor, as has been observed in the noncentrosymmetric binary compound BiPd, where the Dirac dispersions lie away from the Fermi level in the superconducting regime. This explanation holds true for Ta_3As and Ta_3Bi as well, which have similar characteristics in the surface states (presented in Fig. <ref> (d,f)) with superconducting critical temperatures of 3.0 K and 1.16 K, respectively. The s-bands in the conduction band minimum (CBM) of Ta_3Bi and Ta_3As are responsible for the higher T_c as compared to Ta_3Sb since they increase the density of states near the Fermi level. On the other hand, the T_c for the Nb_3Ge and Nb_3Sn compounds is higher as compared to Ta_3Y and Nb_3Sb. Owing to the electronic structure near the Fermi level, Nb_3Sb has a T_c of 2.21 K, which is comparable to that of Ta_3As and Ta_3Bi. However, Nb_3Sb is an outlier when compared to Nb_3Ge and Nb_3Sn, which have a T_c of 15.25 K and 15.66 K, respectively. This distinction originates from the lattice dynamics, which is evident from the phonon dispersion curves. As compared to Nb_3Sn, which exhibits anomalous vibrational properties such as soft modes in the Γ→ X → M directions in momentum space (as evident from Fig. <ref>(b)), the phonon softening is not observed in Nb_3Ge, presented in Fig. <ref>(d) (see Appendix D). The anomaly of the longitudinal acoustic modes of Nb_3Sn softening at lower temperature scales is accompanied by large neutron scattering linewidths, which are related to the electron-phonon coupling coefficient λ(ω), hence resulting in a higher T_c.<cit.> The phonon dispersion curves alongside the Migdal-Eliashberg spectral functions α^2F(ω) and the phonon density of states F(ω) for Nb_3Ge and Nb_3Sn are presented in Fig. <ref>, and those for Ta_3Y and Nb_3Sb are presented in Fig. <ref> (see Appendix D). The electron-phonon coupling coefficients λ(ω) for Nb_3Ge and Nb_3Sn are 1.41 and 2.03, respectively. The bulk superconductivity in Nb_3X compounds is quite sensitive to the method of crystal growth, and the experimental conditions cause the measured T_c to vary.
However, Nb_3Ge has been found to exhibit a maximum T_c of 23.2 K, which, along with Nb_3Sn, gives further scope to explore the interplay between topology and superconductivity owing to the large SHC at the Fermi level. §.§ Thickness dependence of the electronic and superconducting properties in Nb_3Sn thin films We calculate the DOS as a function of the thickness to assess the superconducting properties of the Nb_3Sn thin film. The electronic properties reported in this subsection were calculated with the computational framework described in Appendix C. In Fig. <ref>(a-e), we show the DOS calculated for the bulk and for the stoichiometric slabs with different thicknesses. The DOS as a function of the thickness is reported in Fig. <ref>(f): we observe a decreasing value of the DOS as the thickness is reduced, which is reflected in a reduction of the critical temperature for the thin films of Nb_3Sn. For ultra-thin films, the DOS is reduced approximately by half with respect to the bulk value. These calculations show that Nb_3Sn has a different behavior, due to its complex band structure properties, compared to other BCS superconductors like V, Ta or Nb, where one normally finds that, as soon as the thin-film thickness is equal to or greater than the coherence length, T_c is very close to the bulk T_c. The experimental superconducting critical temperature of the thin films strongly depends on the sample quality and on the substrate, and usually decreases as the thickness is reduced.<cit.> § TIGHT-BINDING MODEL WITH THREE SSH CHAINS FOR T_2G ORBITALS Here, we report a model that includes only the t_2g orbitals of the Nb/Ta atoms. This tight-binding model allows us to understand the character of the orbitals at the Fermi level and which hopping parameters tune the opening of the topological gap. If we want to produce a minimal model for the d-orbitals of the Nb/Ta atoms, we must include all the 6 Nb/Ta atoms per unit cell. If we could decouple the p- and d-electrons, the e_g orbitals would lie below the t_2g orbitals in energy, given the crystal field due to the positions of the other atoms; in any case, we can assume this as a first approximation. Since the charge transfer is zero, the Nb/Ta atoms are in the d^5 electronic configuration. We developed a tight-binding model for the t_2g subset of the 6 Nb/Ta atoms. Within the t_2g tight-binding model we can easily include the spin-orbit coupling. The crystal structure presents three dimers of Nb/Ta atoms, along the a, b and c axes, as shown in Fig. <ref>a). We consider in our model the intradimer hybridizations and the interdimer hoppings by including the first and the second nearest neighbours. The spinful tight-binding model is reported in more detail in Appendix A, and it is composed of three coupled SSH chains with t_2g orbitals. The SSH chain along the b-axis is shown in Fig. <ref>. Very few recent examples of a three-dimensional SSH model have been reported in the literature<cit.>, with significant differences from the case proposed in this paper. The parameters of the model are described in detail in Appendix A: we have two onsite energies, E_1 and E_2, for the π- and δ-bonds, respectively, and the hopping parameters t_1α, t_1β, t_2α, t_2β for the intradimer elements and t_3 and t_4 for the interdimer elements. The band structure of the model without SOC is reported in Fig. <ref>. A magnification of the band structure at the R point is shown in Figs. <ref>a) and <ref>b), without and with SOC, respectively. We include the SOC within the t_2g manifold as the SOC of an effective L=-1 angular momentum.
We have a gap between the conduction and the valence band when the gap at R is open. The opening of the gap at R allows us to explicitly calculate the topological invariants. The opening of the topological gap at R is controlled by both the difference t_1α-t_1β and the spin-orbit coupling; the Coulomb repulsion also controls the gap, as was shown within DFT+U. The SSH model is a spinless and chiral 1D model, while the model that we have proposed is spinful and not chiral. Despite these differences, in both cases the relevant quantity for the topological properties is the difference between the hopping parameters due to the dimerization. § DISCUSSION, CONCLUSIONS AND OUTLOOK Ta_3Sb shows strong orbital textures in the topological surface states<cit.>. Since the material class is the same and the topological surface states are qualitatively the same, we expect the orbital texture also in Nb_3Ge and Nb_3Sn. The proximity-induced superconductivity in the Dirac surface states can generate a Majorana zero mode localized at the vortex<cit.>. Further studies using the tight-binding model for the bulk and the surface of A15 could shed light on the interplay between topological and superconductive properties. In conclusion, using a first-principles approach, we provide an extensive description of the electronic, topological and superconductive properties of the Nb- and Ta-based A15 compounds by calculating the band structure, SHC, superconducting T_c, and topological surface states. All compounds Nb_3X (X = Ge, Sn, Sb) and Ta_3Y (Y = As, Sb, Bi) have metallic band structures. Nb_3Ge and Nb_3Sn have one electron fewer with respect to the other compounds; they have a larger density of states and a high superconducting T_c. Ta_3As and Ta_3Bi host the Ta-6s band at the Fermi level, producing a larger T_c and a larger SHC compared to Ta_3Sb due to the presence of additional DOS and additional anticrossings close to the Fermi level. The spin Hall conductivity is relatively large also for the lighter elements due to the presence of several anticrossings in the Brillouin zone. All the superconducting T_c are sizeable. One of the most interesting results among our outcomes is the presence of Dirac surface states at the Γ point for all compounds at the same filling. The ideal case is Ta_3Sb, where we have a ℤ_2 metallic compound with a net separation between conduction and valence bands. In all the other compounds, the conduction and the valence bands cross, due to the Ta-6s band for the Ta-compounds and due to the weaker SOC in the Nb-compounds. Despite the crossing between conduction and valence bands, the band inversion at the high-symmetry points persists in the Nb-based compounds, always producing the Dirac surface states at the Γ point. Unfortunately, the Dirac surface states are blurred when we include the hybridization with the s-bands in the Ta-based compounds. The surface Dirac points can be tuned by the Coulomb repulsion; in the case of the transition-metal termination, the Dirac point is around the Fermi level. Even if the ℤ_2 topological invariant cannot be calculated in all cases, the presence of the Dirac surface states is persistent. Therefore, we can assume that the Nb_3X (X = Ge, Sn, Sb) and Ta_3Y (Y = As, Sb, Bi) compounds are all ℤ_2 topological metals. These Dirac surface states could explain the low resistivity of the Nb_3Sn surface observed experimentally<cit.>. Additionally, we provide a minimal tight-binding model composed of three coupled SSH chains and based on the t_2g orbitals.
This tight-binding reproduces the relevant electronic and topological features for these compounds at the Fermi level. With a T_c of 23.2 K, Nb_3Ge could be the ℤ_2 topological metal with the highest T_c ever reported amongst the A15 compounds.The surfaces of the A15 will host an interplay between ℤ_2 topology, strong orbital texture, breaking of the inversion symmetry and BCS superconductivity with relatively large T_c. Therefore, the surfaces of A15 are a platform to search for exotic superconductivity. Once it will be grown the thin film of A15, it will be possible to construct superlattices, junctions, or heterostructures of superconductors and topological compounds in order to study the topological superconductivity via the proximity effect. The non-trivial surface properties of A15 thin films can also represent an interesting platform for the realization of gate-controllable superconducting devices, where recent studies have suggested that surface properties are key for the observation of the suppression of a critical current under an applied gate voltage <cit.>. § ACKNOWLEDGMENTS The work is supported by the Foundation for Polish Science through the International Research Agendas program co-financed by the European Union within the Smart Growth Operational Programme (Grant No. MAB/2017/1).C. A. acknowledges Erasmus+ for a training scholarship. We acknowledge the access to the computing facilities of the Interdisciplinary Center of Modeling at the University of Warsaw, Grant g91-1418, g91-1419 and g91-1426 for the availability of high-performance computing resources and support. We acknowledge the CINECA award under the ISCRA initiativeIsC99 "SILENTS”, IsC105 "SILENTSG", IsB26 "SHINY" and IsB27 "SLAM" grants for the availability of high-performance computing resources and support. We acknowledge the access to the computing facilities of the Poznan Supercomputing and Networking Center Grant No. 609.§ THREE COUPLED SSH CHAINS WITH A T_2G ORBITAL BASIS Our model includes just the t_2g electrons for the 6 Nb-atoms. The crystal structure presents three dimers of Nb atoms, along the a, b and c axes. We consider in our model the intradimer hybridizations and the interdimer hoppings by considering the first and the second nearest neighbours.The lattice constant is d=5.139 Å. The basis in the Hilbert space is given by the vector ϕ_i^†=(ϕ_1a^†,ϕ_2a^†,ϕ_3c^†,ϕ_4c^†,ϕ_5b^†,ϕ_6b^†), where ϕ_1a^†=(ϕ_1a,xz^†,ϕ_1a,yz^†,ϕ_1a,xy^†), the indices 1,.., 6 indicate the Nb atoms from the first to the sixth, the letters a, c, b indicate the fact that these sites belong to the dimers along the cell directions a, c and b, while xz, yz and xy indicate the t_2g orbitals d_xz, d_yz and d_xy.Our Hamiltonian is the following: H=[ H_aa H_ac H_ab; H_ca H_cc H_cb; H_ba H_bc H_bb ], where the subscript a indicates the dimer along the a direction, composed of Nb_1 and Nb_2 atoms,c the dimer along the c direction, composed of Nb_3 and Nb_4 atoms and b indicates the dimer along the b direction, namely the dimer of Nb_5 and Nb_6 atoms. The aa, bb and cc blocks of the Hamiltonian contain the intradimer hybridizations, while the off-diagonal blocks contain the interdimer hoppings.The intradimer submatrices are of this type: H_aa=[ H_1a1a H_1a2a; H_2a1a H_2a2a ], where H_1a1a=[ E_1a_xz 0 0; 0 E_1a_yz 0; 0 0 E_1a_xy; ],andH_1a2a=[ H_1a_xz2a_xz00;0 H_1a_yz2a_yz0;00 H_1a_xy2a_xy;], The on-site energies belong to two groups. 
The energy E_1 belongs to the orbitals that form intradimer π-bond, while the energy E_2 belongs to the orbitals that form intradimer δ-bond :E_1a_xz =E_1E_1a_yz=E_2E_1a_xy=E_1E_2a_xz =E_1E_2a_yz=E_2E_2a_xy=E_1E_3c_xz =E_1E_3c_yz=E_1E_3c_xy=E_2E_4c_xz =E_1E_4c_yz=E_1E_4c_xy=E_2E_5b_xz =E_2E_5b_yz=E_1E_5b_xy=E_1E_6b_xz =E_2E_6b_yz=E_1E_6b_xy=E_1The intradimer elements have a hopping form similar to the SSH model<cit.> as we can see in Fig. <ref>, therefore, the tight-binding model that describes the A15 is composed of three coupled SSH chains. The intradimer Hamiltonian elements have the following form: H_1a_xz2a_xz=t_1αe^-ik_xd/2+ t_1βe^ik_xd/2 H_1a_yz2a_yz=t_2αe^-ik_xd/2+ t_2βe^ik_xd/2H_1a_xy2a_xy=t_1βe^-ik_xd/2+ t_1αe^ik_xd/2H_3c_xz4c_xz=t_1βe^-ik_xd/2+ t_1αe^ik_xd/2 H_3c_yz4c_yz=t_1αe^-ik_xd/2+ t_1βe^ik_xd/2H_3c_xy4c_xy=t_2αe^-ik_xd/2+ t_2βe^ik_xd/2H_5b_xz6b_xz=t_2αe^-ik_xd/2+ t_2βe^ik_xd/2 H_5b_yz6b_yz=t_1αe^-ik_xd/2+ t_1βe^ik_xd/2H_5b_xy6b_xy=t_1βe^-ik_xd/2+ t_1αe^ik_xd/2 where t_1 are π-bonds hoppings and t_2 are δ-bonds hoppings. As expected, we have t_1 > t_2 if we extract the parameters from the wannierization of the DFT band structure. The configuration of α and β are the left and right hopping, their configuration is related to the symmetries of the system. The topological gap at the R-point is controlled by the difference between t_1α - t_1β and enhanced by the spin-orbit coupling. The interdimer submatrices are of the type:H_ac=[ H_1a3c H_1a4c; H_2a3c H_2a4c ],where we have:H_1a3c=[ H_1a_xz3c_xz H_1a_xz3c_yz H_1a_xz3c_xy; H_1a_yz3c_xz H_1a_yz3c_yz H_1a_yz3c_xy; H_1a_xy3c_xz H_1a_xy3c_yz H_1a_xy3c_xy;],Regarding the interdimer hybridizations, the intraorbital elements that are different from zero in our model are the following: H_1a_xz3c_xz=2t_3e^i(k_x+k_z)d/4cos(k_yd/2)H_1a_xz4c_xz=2t_3e^i(k_x-k_z)d/4cos(k_yd/2)H_1a_xy5b_xy=2t_3e^i(-k_x+k_y)d/4cos(k_zd/2)H_1a_xy6b_xy=2t_3e^i(-k_x-k_y)d/4cos(k_zd/2)H_2a_xz3c_xz=2t_3e^i(-k_x+k_z)d/4cos(k_yd/2)H_2a_xz4c_xz=2t_3e^i(-k_x-k_z)d/4cos(k_yd/2)H_2a_xy5b_xy=2t_3e^i(k_x+k_y)d/4cos(k_zd/2)H_2a_xy6b_xy=2t_3e^i(k_x-k_y)d/4cos(k_zd/2) H_3c_yz5b_yz=2t_3e^i(-k_y+k_z)d/4cos(k_xd/2) H_3c_yz6b_yz=2t_3e^i(k_y+k_z)d/4cos(k_xd/2) H_4c_yz5b_yz=2t_3e^i(-k_y-k_z)d/4cos(k_xd/2) H_4c_yz6b_yz=2t_3e^i(k_y-k_z)d/4cos(k_xd/2) and the interorbital Hamiltonian elements are the following:H_1a_yz3c_xy=2t_4e^i(k_x+k_z)d/4cos(k_yd/2) H_1a_yz4c_xy=-2t_4e^i(k_x-k_z)d/4cos(k_yd/2) H_1a_yz5b_xz=-2t_4e^i(-k_x+k_y)d/4cos(k_zd/2)H_1a_yz6b_xz=2t_4e^i(-k_x-k_y)d/4cos(k_zd/2)H_2a_yz3c_xy=-2t_4e^i(-k_x+k_z)d/4cos(k_yd/2)H_2a_yz4c_xy=2t_4e^i(-k_x-k_z)d/4cos(k_yd/2)H_2a_yz5b_xz=2t_4e^i(k_x+k_y)d/4cos(k_zd/2)H_2a_yz6b_xz=-2t_4e^i(k_x-k_y)d/4cos(k_zd/2) H_3c_xy5b_xz=-2t_4e^i(-k_y+k_z)d/4cos(k_xd/2) H_3c_xy6b_xz=2t_4e^i(k_y+k_z)d/4cos(k_xd/2) H_4c_xy5b_xz=2t_4e^i(-k_y-k_z)d/4cos(k_xd/2) H_4c_xy6b_xz=-2t_4e^i(k_y-k_z)d/4cos(k_xd/2) § COMPUTATIONAL DETAILS FOR THE QUANTUM ESPRESSO CALCULATIONS To compute material properties, we employed density functional theory based first-principles calculations as implemented within the Quantum ESPRESSO code.<cit.> We use generalized-gradient approximations with Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional <cit.> implemented in norm-conserving pseudopotential with optimized kinetic energy cut-off of 110 Ry for Nb_3X (X = Ge, Sb, Sn) and 70 Ry Ta_3Y (Y = Bi, As, Sb). A uniform Monkhorst-Pack grid of 20 × 20 × 20 was used in all the calculations which corresponds to 1342 special k-points in the irreducible Brillouin zone. 
In the phonon calculations of Nb_3Sn, we observed a numerical artifact at the high-symmetry point Γ with the PBE pseudopotential; hence, only for this system, we performed additional calculations using the local-density approximation with the Perdew-Wang 91 gradient-corrected functional.<cit.> The optimized kinetic energy cut-off of 70 Ry was used for this purpose. The dynamical calculations pertaining to the vibrational properties have been performed within density functional perturbation theory <cit.>, wherein the dynamical matrices were sampled within the irreducible Brillouin zone using a q-mesh of at least 3 × 3 × 3. The Migdal-Eliashberg spectral functions were calculated using Eq. <ref> presented below.<cit.> α^2 F(ω) = 1/(2π N(E_F)) ∑_qν γ_qν/ω_qν δ(ω - ω_qν) where γ_qν is the phonon linewidth and ω_qν is the phonon eigenfrequency. Integrating over these spectral functions as presented in Eq. <ref>, we obtain the electron-phonon coupling coefficient λ(ω). λ(ω) = 2 ∫_0^ω α^2 F(ω')/ω' dω' Following this, the superconducting critical temperature (T_c) was calculated using the Allen-Dynes modification of the McMillan formula <cit.> presented in Eq. <ref>, where μ^* is the effective Coulomb repulsion parameter and ω_log is the weighted logarithmic average of the phonon frequencies. T_c = ω_log/1.2 exp(-1.04(1+λ)/(λ-μ^*(1+0.62λ))) To compute the topological surface states, the ℤ_2 topological invariants and the spin Hall conductivity, we used the Wannier90 <cit.> and WannierTools codes <cit.>. The basis of the tight-binding model was composed of the Wannier functions <cit.> obtained from the d-orbitals of Nb or Ta and the p-orbitals of the other atoms. The atomic orbital configuration is s^2d^3 for Nb and Ta, but it becomes d^5 in crystals; therefore, Ta and Nb have half-filled d-orbitals. The tight-binding model is composed of 60 d-bands and 12 p-bands for a total of 72 bands. The topological gap is observed at half-filling. Of these 36 occupied bands, 6 are mainly p-bands and 30 are mainly d-bands. The momentum mesh used for calculating the spin Hall conductivity σ_xy^spin z(ω) was 200 × 200 × 200. This was done using the Kubo-Greenwood formula as implemented in Wannier90 <cit.>. § SLAB CALCULATION AND DENSITY OF STATES OF THE NB-BASED COMPOUNDS USING VASP For the slab calculations, additional first-principles calculations are performed via the Vienna ab initio simulation package (VASP) <cit.> using density functional theory. The generalized gradient approximation with the PBE form <cit.> and PBEsol <cit.> is adopted to calculate the lattice parameters and the density of states (DOS). An energy cutoff of 350 eV and a mesh of 16 × 16 × 1 k-points were chosen for the different thicknesses of the Nb-based compounds. The calculations were converged with convergence criteria of 0.01 eV/Å on the forces and 10^-5 eV on the energy. We have constructed stoichiometric slabs of Nb_3Sn with Nb_2 and NbSn terminations as shown in Fig. <ref>(c), and the internal degrees of freedom were relaxed. The density of states (DOS) is a relevant quantity to examine the superconducting properties. We report the DOS of Nb_3Ge, Nb_3Sn and Nb_3Sb in Fig. <ref>(a-c). From the DOS, we can see that the d-orbitals lie roughly between -3 eV and +5 eV, sandwiched between the p-orbital spectral weight. In these cases, it is extremely challenging to decouple the d-orbitals from the p-orbitals due to their strong hybridization, as shown for other materials with such p-d spectral weight<cit.>. To perform the wannierization, we must include the d-orbitals of Nb and the p-orbitals of Ge, Sn and Sb.
The DOS results were obtained within the VASP code and matched with the results obtained with Quantum ESPRESSO. We report the calculation of the equilibrium lattice constant within PBEsol and PBE. The system is cubic and does not have internal degrees of freedom; therefore, the only parameter to optimize is the lattice constant a. The lattice constant results were obtained within the VASP code and matched with the results obtained with Quantum ESPRESSO. The PBE exchange-correlation functional overestimates the experimental lattice constant by 1%; indeed, the lattice constants for Nb_3Ge, Nb_3Sn and Nb_3Sb are 5.188 Å, 5.339 Å and 5.314 Å, respectively. PBEsol gives slightly better results, underestimating the lattice constant by 0.1% (i.e., the lattice constants for Nb_3Ge, Nb_3Sn and Nb_3Sb are 5.135 Å, 5.280 Å and 5.259 Å, respectively). § PHONON DISPERSIONS AND MIGDAL-ELIASHBERG SPECTRAL FUNCTIONS FOR TA_3Y (Y = AS, SB, BI) AND NB_3SB COMPOUNDS In this Section, we report the phonon dispersion curves, the phonon density of states, the Migdal-Eliashberg spectral functions and the electron-phonon coupling values for all the compounds except Nb_3Ge and Nb_3Sn, which are described in the main text. The results are reported in Fig. <ref>. The electron-phonon coupling coefficients λ(ω) for Ta_3As, Ta_3Sb, Ta_3Bi and Nb_3Sb are 0.56, 0.41, 0.45 and 0.47, respectively. Correspondingly, the T_c is slightly higher in Ta_3As, Ta_3Bi and Nb_3Sb as compared to Ta_3Sb (see the main text for the numerical values); this is essentially due to the density of states at the Fermi level in the electronic structure. Although Ta_3Sb shows softening of the optical phonon modes, which would imply that its T_c should be as high as that of Ta_3Bi and Nb_3Sb, the density of states at the Fermi level in the electronic structure dominates the superconducting behavior, leading to a lower T_c. § HYBRIDIZATION OF THE DIRAC SURFACE STATES WITH THE S-BANDS FOR TA-BASED SYSTEMS In this Section, we include the s-band in our tight-binding model and examine its effect on the Dirac surface states. While the results of the main text are obtained with high numerical accuracy, the inclusion of the s-band in the tight-binding Hamiltonian relies on strong approximations in the wannierization process (we simply include the s-band in the frozen window). The presence of the s-band in the Ta-based systems affects the Dirac surface states by submerging the Dirac points below the s-bands. We report in Fig. <ref> the surface states for Ta_3Y obtained including the s-bands. In our calculations, we observe that the Dirac surface states are pushed below the s-bands and are blurred by the hybridization with them. The parity of the s-band and the parity of the other highest valence bands are both positive; therefore, we expect that this s-band would produce an adiabatic transition without affecting the topology<cit.>. Indeed, in the literature, it was shown that the Dirac surface states of Ta_3Sb can coexist with the s-bands<cit.>.
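As a numerical companion to the computational details above, the following minimal Python sketch illustrates how the electron-phonon coupling λ, the logarithmic average frequency ω_log and the Allen-Dynes critical temperature are obtained from a tabulated Eliashberg function; the model α^2F(ω) used here is purely illustrative and does not correspond to the computed spectral functions of this work.

import numpy as np

# Illustrative Eliashberg function alpha^2 F(omega) on a frequency grid (meV).
# Placeholder values only; in practice this is read from the DFPT output.
omega = np.linspace(1.0, 40.0, 400)                       # phonon frequencies (meV)
a2f = 0.8 * np.exp(-0.5 * ((omega - 15.0) / 4.0) ** 2)    # model alpha^2 F(omega)

# lambda = 2 * integral of alpha^2F(w)/w dw
lam = 2.0 * np.trapz(a2f / omega, omega)

# omega_log = exp( (2/lambda) * integral of alpha^2F(w) ln(w)/w dw )
omega_log = np.exp((2.0 / lam) * np.trapz(a2f * np.log(omega) / omega, omega))

# Allen-Dynes modified McMillan formula; mu_star is the Coulomb pseudopotential.
mu_star = 0.10
kB = 0.08617  # Boltzmann constant in meV/K, converts omega_log from meV to K
Tc = omega_log / (1.2 * kB) * np.exp(-1.04 * (1.0 + lam)
                                     / (lam - mu_star * (1.0 + 0.62 * lam)))
print(f"lambda = {lam:.2f}, omega_log = {omega_log:.1f} meV, Tc = {Tc:.2f} K")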
http://arxiv.org/abs/2310.18245v2
{ "authors": [ "Raghottam M. Sattigeri", "Giuseppe Cuono", "Ghulam Hussain", "Xing Ming", "Angelo Di Bernardo", "Carmine Attanasio", "Mario Cuoco", "Carmine Autieri" ], "categories": [ "cond-mat.supr-con", "cond-mat.mtrl-sci" ], "primary_category": "cond-mat.supr-con", "published": "20231027163205", "title": "Dirac surface states, multiorbital dimerization and superconductivity in Nb- and Ta-based A15 compounds" }
http://arxiv.org/abs/2310.18010v1
{ "authors": [ "Sevag Gharibian" ], "categories": [ "quant-ph", "cs.CC" ], "primary_category": "quant-ph", "published": "20231027093611", "title": "The 7 faces of quantum NP" }
We introduce contextual stochastic bilevel optimization (CSBO) – a stochastic bilevel optimization framework with the lower-level problem minimizing an expectation conditioned on some contextual information and the upper-level decision variable. This framework extends classical stochastic bilevel optimization when the lower-level decision maker responds optimally not only to the decision of the upper-level decision maker but also to some side information and when there are multiple or even infinite many followers. It captures important applications such as meta-learning, personalized federated learning, end-to-end learning, and Wasserstein distributionally robust optimization with side information (WDRO-SI). Due to the presence of contextual information, existing single-loop methods for classical stochastic bilevel optimization are unable to converge. To overcome this challenge, we introduce an efficient double-loop gradient method based on the Multilevel Monte-Carlo (MLMC) technique and establish its sample and computational complexities. When specialized to stochastic nonconvex optimization, our method matches existing lower bounds. For meta-learning, the complexity of our method does not depend on the number of tasks. Numerical experiments further validate our theoretical results. § INTRODUCTION A contextual stochastic bilevel optimization (CSBO) problem differs from a classical stochastic bilevel optimization problem only in that its lower-level problem is conditioned on a given context ξ.min_x∈^d_xF(x):=_ξ∼_ξ, η∼_η|ξ [f(x,y^*(x;ξ);η, ξ)] (upper level) wherey^*(x;ξ) :=_y∈^d_y_η∼_η|ξ [g(x,y;η, ξ)] ∀ ξ andx.(lower level)Here ξ∼_ξ and η∼_η|ξ are random vectors, with _η|ξ denoting the conditional distribution of η for a given ξ. The dimensions of the upper-level decision variable x and the lower-level decision variable y are d_x and d_y, respectively. The functions f and g are continuously differentiable in (x,y) for any given sample pair (ξ,η). The function f(x, y;η, ξ) can be nonconvex in x, but the function g(x,y;η,ξ) must be strongly convex in y for any given x, η and ξ. Thus, y^*(x;ξ) is the unique minimizer of the strongly convex lower-level problem for any given x and ξ. Note that, on its own, the lower-level problem can be viewed as a contextual stochastic optimization problem <cit.> parametrized in x. We assume that the joint distribution of ξ and η is unknown. However, we assume that we have access to any number of independent and identically distributed (i.i.d.) samples from _ξ, and for any given realization of ξ, we can generate any number of i.i.d. samples from the conditional distribution _η|ξ. The bilevel structure generally makes the objective function F(x) nonconvex in the decision variable x, except for few special cases.
Thus we aim to develop efficient gradient-based algorithms for finding an -stationary point of the nonconvex objective function F, i.e., a point x̂ satisfying the inequality ∇ F(x̂)^2≤^2.CSBO generalizes the widely studied class of stochastic bilevel optimization (SBO) problems <cit.> whose lower-level problem minimizes an unconditional expectation.min_x∈^d_x _ξ∼_ξ [f(x,y^*(x);ξ)] wherey^*(x) := _y∈^d_y_η∼_η [g(x,y;η)].Indeed, (<ref>) is a special case of CSBO if the upper- and lower-level objective functions are stochastically independent.Another special case of CSBO is the conditional stochastic optimization (CSO) problem <cit.> representable asmin_x∈^d_x_ξ∼_ξ [f(x,_η∼_η|ξ[h(x;η,ξ)];ξ)].Indeed, (<ref>) is a special case of CSBO if we set g(x,y;η,ξ) =y - h(x;η,ξ)^2, in which case the lower-level problem admits the unique closed-form solution y^*(x,ξ) = __η|ξ [h(x;η,ξ)]. Applications. Despite the wide applicability of SBO to various machine learning and game theory paradigms, SBO cannot capture two important cases. The first case involves the lower-level decision maker responding optimally not only to the upper-level decision x but also to some side information ξ like weather, spatial, and temporal information. The second case involves multiple lower-level decision makers, especially when the total number is large. CSBO well captures these two settings and encompasses various important machine learning paradigms as special cases, including meta-learning <cit.>, personalized federated learning <cit.>, hierarchical representation learning <cit.>, end-to-end learning <cit.>, Sinkhorn distributionally robust optimization (DRO) <cit.>, Wasserstein DRO with side , information retrieval <cit.>, contrastive learning <cit.>, and instrumental variable regression <cit.>.Below we provide a detailed discussion of meta-learning, personalized federated learning, and end-to-end learning. Meta-Learning and Personalized Federated Learning. Both applications can be viewed asspecial cases of CSBO. For meta-learning with M tasks or personalized federated learning with M users, the goal is to find a common regularization center θ shared by all tasks or all users. min_x  _i∼μ_D^test_i∼ρ_i [l_i(y_i^*(x),D^test_i) ](upper level) where y_i^*(x) =_y_i _D^train_i∼ρ_i[ l_i(y_i,D^train_i) + λ/2y_i - x^2], ∀ i∈[M],x. (lower level)Here, μ is the empirical uniform distribution on [M]. The upper-level problem minimizes the generalization loss for all tasks/all users by tuning the joint regularization center x, and the lower-level problem finds an optimal regularization parameter x_i close to x for each individual task or user. Note that M may be as large as (10^3) in meta-learning and as large as 𝒪(10^9) in personalized federated learning. Thus, it is crucial to design methods with complexity bounds independent of M.End-to-End Learning. Traditionally, applications from inventory control to online advertising involve a two-step approach: first estimating a demand function or the click-through rate, and then making decisions based on this estimation. End-to-end learning streamlines this into a single-step method, allowing the optimization to account for estimation errors, thereby enabling more informed decisions. This can be framed as a special case of CSBO, where the upper-level problem seeks the best estimator, while the lower-level problem makes optimal decisions based on the upper-level estimator and the contextual information ξ. 
For example, in online advertising, x represents the click-through rate estimator, and y^*(x;ξ) denotes the optimal advertisement display for a customer characterized by the feature vectorξ.For a comprehensive review, see the recent survey paper <cit.>.Challenges.Given the wide applicability of CSBO, it is expedient to look for efficient solution algorithms. Unfortunately, when extended to CSBO, existing algorithms for SBO or CSO either suffer from sub-optimal convergence rates or are entirely unable to handle the contextual information. Indeed, a major challenge of CSBO is to estimate y^*(x;ξ) for (typically) uncountably many realizations of ξ. In the following, we explain in more detail why existing methods fail.If the lower-level problem is strongly convex, then SBO can be addressed with numerous efficient single-loop algorithms <cit.>. Indeed, as the unique minimizer y^*(x) of the lower-level problem in (<ref>) depends only on the upper-level decision variable x, these algorithms can sequentially update the upper- and lower-level decision variables x and y in a single loop while ensuring that the sequence {y^t}_t approximates {y^*(x^t)}_t. Specifically, these approaches leverage the approximationy^*(x^t+1) - y^*(x^t)≈∇ y^*(x^t)^⊤ (x^t+1 - x^t),which is accurate if x is updated using small stepsizes.However, these algorithms generically fail to converge on CSBO problems because the minimizer y^*(x;ξ) of the lower-level problem in (<ref>) additionally depends on the context ξ, i.e., each realization of ξ corresponds to a lower-level constraint. Consequently, there can be infinitely many lower-level constraints.It is unclear how samples from _η|ξ corresponding to a fixed context ξ can be reused to estimate the minimizer y^*(x;ξ^') corresponding to a different context ξ^'. Since gradient-based methods sample ξ^t independently in each iteration t, no single sequence {y^t}_t can approximate the function {y^*(x^t,ξ^t)}_t. <cit.> and <cit.> analyze a special case of the CSBO problem (<ref>), in which ξ is supported on M points as shown in (<ref>). However, the sample complexity of their algorithm grows linearly with M. In contrast, we develop methods for general CSBO problems and show that their sample complexities are independent of the support of ξ.SBO problems can also be addressed with double-loop stochastic gradient descent (DL-SGD), which solve the lower-level problem to approximate optimality before updating the upper-level decision variable <cit.>. We will show that these DL-SGD algorithms can be extended to CSBO problems and will analyze their sample complexity as well as their computational complexity. Unfortunately, it turns out that, when applied to CSBO problems, DL-SGD incurs high per-iteration sampling and computational costs to obtain a low-bias gradient estimator for F. More precisely, solving the contextual lower-level problem to -optimality for a fixed ξ requires (^-2) samples from _η|ξ and gradient estimators for the function g, which leads to a (^-6) total sample and computational complexity to obtain an -stationary point of F. Methodology. Given these observations indicating that existing methods can fail or be sub-optimal for solving the CSBO problem, we next discuss the motivation for our algorithm design. 
Our goal is to build gradient estimators that share the same small bias as DL-SGD but require much fewer samples and incur a much lower computational cost at the expense of a slightly increased variance.To obtain estimators with low bias, variance, and a low sampling and computational cost, we propose here a multilevel Monte Carlo (MLMC) approach <cit.>, which is reminiscent of the control variate technique, and combine it with inverse propensity weighting <cit.>. We refer to the proposed method as random truncated MLMC (RT-MLMC) and demonstrate that the RT-MLMC estimator for ∇ F requires only (1) samples from _η|ξ.This is a significant improvement vis-à-vis DL-SGD, which requires (^-2) samples. Consequently, the sample complexity as well as the gradient complexity over g (i.e., the number g-gradient evaluations) of RT-MLMC for finding an -stationary point of F is given by (^-4). While the idea of using MLMC in stochastic optimization is not new <cit.>, the construction of MLMC gradient estimators for CSBO and the analysis of the variance of the RT-MLMC gradient estimators are novel contributions of this work.§.§ Our Contributions * We introduce CSBO as a unifying framework for a broad range of machine learning tasks and optimization problems. We propose two methods, DL-SGD and RT-MLMC, and analyze their complexities; see Table <ref> for a summary. When specialized to SBO and CSO problems, RT-MLMC displays the same performance as the state-of-the-art algorithms for SBO <cit.> and CSO <cit.>, respectively. When specialized to classical stochastic nonconvex optimization, RT-MLMC matches the lower bounds by <cit.>.* For meta-learning with M tasks, the complexity bounds of RT-MLMC are constant in M. Thus, RT-MLMC outperforms the methods by <cit.> and <cit.> when M is large. For Wasserstein DRO with side information <cit.>, existing methods only cater for affine and non-parametric decision rules. In contrast, RT-MLMC allows for neural network approximations. We also present the first sample and gradient complexity bounds for WDRO-SI.* For meta-learning and Wasserstein DRO with side information, our experiments show that the RT-MLMC gradient estimator can be computed an order of magnitude faster than the DL-SGD gradient estimator, especially when the contextual lower-level problem is solved to higher accuracy.PreliminariesFor any function ψ:ℝ^d_x×ℝ^d_y with arguments x∈ℝ^d_x and y∈ℝ^d_y, we use ∇ψ, ∇_1ψ and ∇_2ψ to denote the gradients of ψ with respect to (x,y), x and y, respectively. Similarly, we use ∇^2ψ, ∇^2_11ψ and ∇^2_22 to denote Hessians of ψ with respect to (x,y), x and y, respectively. In addition, ∇_12^2ψ stands for the (d_x× d_y)-matrix with entries ∂^2_x_i y_jψ. A function φ:ℝ^d→ℝ is L-Lipschitz continuous if |φ(x)-φ(x')|≤ Lx-x' for all x,x'∈^d, and it is S-Lipschitz smooth if it is continuously differentiable and satisfies ∇φ(x)-∇φ(x')≤ Sx-x' for all x,x'∈^d. In addition, φ is called μ-strongly convex if it is continuously differentiable and if φ(x)-φ(x')-∇φ(x')^⊤(x-x')≥μ/2x-x'^2 for all x,x'∈^d. The identity matrix is denoted by I.Finally, we use (·) as a variant of the classical (·) symbol that hides logarithmic factors. § ALGORITHMS FOR CONTEXTUAL STOCHASTIC BILEVEL OPTIMIZATION Throughout the paper, we make the following assumptions. Similar assumptions appear in the SBO literature <cit.>. 
The CSBO problem (<ref>) satisfies the following regularity conditions: (i) f is continuously differentiable in x and y for any fixed η and ξ, and g is twice continuously differentiable in x and y for any fixed η and ξ. (ii) g is μ_g-strongly convex in y for any fixed x, η and ξ. (iii) f, g, ∇ f, ∇ g and ∇^2 g are L_f,0, L_g,0, L_f,1, L_g,1 and L_g,2-Lipschitz continuous in (x,y) for any fixed η and ξ, respectively. (iv) If (η,ξ)∼ℙ_(η,ξ), then ∇ f(x,y;η, ξ) is an unbiased estimator for ∇_(η,ξ)∼ℙ_(η,ξ) [f(x,y;η, ξ)] with variance σ_f^2 uniformly across all x and y. Also, if η∼ℙ_η|ξ, then ∇ g(x,y;η,ξ) is an unbiased estimator for ∇_η∼_η|ξ [g(x,y;η,ξ)] with variance σ_g,1^2, and ∇^2 g(x,y;η,ξ) is an unbiased estimator for ∇^2 _η∼_η|ξ [g(x,y;η,ξ)] with variance σ_g,1^2, uniformly across all x and y. Assumption <ref> ensures that problem (<ref>) is well-defined. In particular, by slightly adapting the proofs of <cit.> and <cit.>, it allows us to show that F is L_F-Lipschitz continuous as well as S_F-Lipschitz smooth for some L_F, S_F>0. Assumptions <ref> (i-iii) also imply that the gradients of f and g with respect to (x,y) can be interchanged with the expectations with respect to (η,ξ)∼ℙ_η,ξ and η∼ℙ_η|ξ. Hence, Assumptions <ref> (i-iii) readily imply the unbiasedness of the gradient estimators imposed in Assumption <ref> (iv). In fact, only the uniform variance bounds do not already follow from Assumptions <ref> (i-iii). In order to design SGD-type algorithms for problem (<ref>), we first construct gradient estimators for F. To this end, we observe that the Jacobian ∇_1 y^*(x;ξ) ∈ℝ^d_x× d_y exists and is Lipschitz continuous in x for any fixed ξ thanks to <cit.>. By the chain rule, we therefore have ∇ F(x) = _(η,ξ)∼_(η,ξ)[∇_1 f(x,y^*(x;ξ);η,ξ) + ∇_1 y^*(x;ξ)^⊤∇_2 f(x,y^*(x;ξ);η,ξ) ]. By following a similar procedure as in <cit.>, we can derive an explicit formula for ∇_1 y^*(x;ξ) (for details we refer to Appendix <ref>) and substitute it into the above equation to obtain ∇ F(x) = _(η,ξ)∼_(η,ξ)[∇_1 f(x,y^*(x;ξ);η,ξ) - ( _η'∼_η|ξ∇_12^2 g(x,y^*(x;ξ);η',ξ) )Λ(x, y^*(x;ξ); ξ)∇_2 f(x,y^*(x;ξ);η,ξ) ], where Λ(x, y; ξ)=( _η∼_η|ξ∇_22^2g(x, y; η, ξ))^-1. Thus, the main challenges of constructing a gradient estimator are to compute and store the minimizer y^*(x;ξ) as well as the inverse expected Hessian matrix Λ(x, y; ξ) for all (potentially uncountably many) realizations of ξ. Computing these two objects exactly would be too expensive. In the remainder of this section, we thus derive estimators for y^*(x;ξ) and Λ(x, y; ξ), and we combine these two estimators to construct an estimator for ∇ F(x). Estimating y^*(x;ξ). We estimate y^*(x;ξ) using the gradient-based method EpochSGD by <cit.>, which involves K epochs of increasing lengths. Each epoch k=1,…,K starts from the average of the iterates computed in epoch k-1 and then applies 2^k stochastic gradient steps to the lower-level problem with stepsize 2^-k (see Algorithm <ref>). In the following we use the output y^0_K+1 of Algorithm <ref> with inputs K, x, ξ and y_0 as an estimator for the minimizer y^*(x;ξ) of the lower-level problem. We use EpochSGD for the following two reasons. First, EpochSGD attains the optimal convergence rate for strongly convex stochastic optimization in the gradient oracle model <cit.>. In addition, it is widely used in practical machine learning training procedures. Note that y^*(x;ξ) could also be estimated via classical SGD. Even though this would lead to similar complexity results, the analysis would become more cumbersome. Estimating Λ(x, y; ξ).
Following <cit.>, one can estimate the inverse of an expected random matrix A with 0≺ A≺ I using a Neumann series argument. Specifically, we have[_A∼ℙ_A A]^-1 = ∑_n^'=0^∞ (I-_A∼ℙ_A A)^n = ∑_n^'=0^∞∏_n=1^n^'_A_n∼ℙ_A(I-A_n) ≈∑_n^'=0^N∏_n=1^n^'_A_n∼ℙ_A(I-A_n).The truncated series on the right hand side provides a good approximation if N≫ 1. Assumption <ref> (iii) implies that 0≺∇_22^2 g(x,y;η_n,ξ )≺2L_g,1 I. Hence, the above formula can be applied to A=1/2L_g,1∇_22^2 g(x,y;η_n,ξ ), which gives rise to an attractive estimator for Λ(x, y; ξ) of the formΛ̂(x, y;ξ):= N/2L_g,1I if N̂=0, N/2L_g,1∏_n=1^N̂ ( I -1/2L_g,1∇_22^2 g(x,y;η_n,ξ ))if N̂≥ 1.Here, N̂ is a random integer drawn uniformly from {0,1,…,N-1} that is independent of the i.i.d. samples η_1,…, η_N̂ from _η|ξ. <cit.> showed that the estimator (<ref>) displays the following properties. Its bias decays exponentially with N, its variance grows quadratically with N, and its sampling cost grows linearly with N. Below we call N the approximation number.Estimating ∇ F(x) via DL-SGD. For any given K and N, we construct the DL-SGD estimator for the gradient of F by using the following procedure:(i) generate a sample ξ from _ξ, (ii)generate i.i.d. samples η^' and η^'' from the conditional distribution _η|ξ, (iii) run EpochSGD as described in Algorithm <ref> with an arbitrary initial iterate y_1^0 to obtain y^0_K+1, and (iv) construct Λ̂(x, y_K+1^0;ξ) as in (<ref>). Using these ingredients, we can now construct the DL-SGD gradient estimator asv̂^K(x) := ∇_1 f(x,y_K+1^0;η^'',ξ)-∇_12^2 g(x,y_K+1^0;η^',ξ) Λ̂(x, y_K+1^0;ξ) ∇_2 f(x,y_K+1^0;η^'',ξ).In Lemma <ref> below, we will analyze the bias and variance as well as the sampling and computational costs of the DL-SGD gradient estimator. We will see that a small bias [v̂^K](x)-∇ F(x)≤ can be ensured by setting K = (log(^-1)), in which case EpochSGD computes (^-2) stochastic gradients of g. From now on, we refer to Algorithm <ref> with v(x)=v̂^K(x) as the DL-SGD algorithm.§.§ RT-MLMC Gradient EstimatorThe bottleneck of evaluating the DL-SGD gradient estimators is the computation of y^0_K+1. The computational costs can be reduced, however, by exploiting the telescoping sum propertyv̂^K(x)= v̂^1(x)+∑_k=1^K [v̂^k+1(x) - v̂^k(x)]= v̂^1(x)+∑_k=1^K p_kv̂^k+1(x) - v̂^k(x)/p_k =v̂^1(x)+_k̂∼_k̂[v̂^k̂+1(x) - v̂^k̂(x)/p_k̂],where v̂^k̂ is defined as in (<ref>) with k=1,…,K replacing K, and where _k̂ is a truncated geometric distributionwith _k̂(k̂=k)=p_k∝ 2^-k for every k=1,…, K. This observation prompts us to construct the RT-MLMC gradient estimator asv̂(x) = v̂^1(x)+p_k̂^-1(v̂^k̂+1(x) - v̂^k̂(x)).The RT-MLMC gradient estimator has three key properties: * It is an unbiased estimator for the DL-SGD gradient estimator, i.e., _k̂∼_k̂ [v̂(x)] = v̂^K(x). * Evaluating v̂(x) requires computing y^0_k+1(x,ξ) with probability p_k, which decays exponentially with k. To ensure a small bias, we need to set K=(log(^-1)), and thus p_K =(). Hence, most of the time, EpochSGD only needs to run over k≪ K epochs. As a result, the average sampling and computational costs are markedly smaller for RT-MLMC than for DL-SGD. -1* Sincev̂^k+1(x) and v̂^k(x) differ only in y_k+1^0 and y_k^0, both of which are generated by EpochSGD and are thus highly correlated, v̂^k+1(x)-v̂^k(x) has a small variance thanks to a control variate effect <cit.>. 
Hence, the variance of RT-MLMC is well-controlled, as shown in Lemma <ref>.In Lemma <ref> below, we will analyze the bias and variance as well as the sampling and computational costs of the RT-MLMC gradient estimator. We will see that it requires only(1) samples to ensure that the bias drops to (). This is in stark contrast to the DL-SGD estimator, which needs (^-2) samples. The lower sample complexity and the corresponding lower computational cost come at the expense of an increased variance of the order (log(^-1)). The construction of the RT-MLMC gradient estimator is detailed in Algorithm <ref>. From now on, we refer to Algorithm <ref> with v(x)=v̂(x) as the RT-MLMC algorithm. §.§ Memory and Arithmetic Operational CostsThe per-iteration memoryand arithmetic operational cost of DL-SGD as well as RT-MLMC is dominated by the cost of computing thematrix-vector productĉ(x,y;ξ):=∇_12^2 g(x,y;η',ξ) Λ̂(x, y;ξ) ∇_2 f(x,y;η”,ξ).By (<ref>), Λ̂(x, y;ξ) is a product of N̂ matrices of the form I-1/(2L_g,1)∇_22 g(x,y;η_n,ξ), and the n-th matrix coincides with the gradient of (y-1/(2L_g,1)∇_2 g(x,y;η_n,ξ)) with respect to y. We can thus compute (<ref>) recursively as follows. We first set v=∇_2 f(x,y;η^'',ξ). Next, we update v by setting it to the gradient of (y-1/(2L_g,1)∇_2 g(x,y;η_N̂,ξ))^⊤ v with respect to y, which is computed via automatic differentiation. This yields v=(I-1/(2L_g,1)∇_22 g(x,y;η_N,ξ)) ∇_2 f(x,y;η^'',ξ). By using a backward recursion with respect to n, we can continue to multiply v from the left with the other matrices in the expansion of Λ̂(x, y;ξ). This procedure is highly efficient because the memory and arithmetic operational cost of computing the product of a Hessian matrix with a constant vector via automatic differentiation is bounded—up to a universal constant—by the cost of computing the gradient of the same function <cit.>. See Algorithm <ref> in Appendix <ref> for details. The expected arithmetic operational costs of Algorithm <ref> is (Nd) and the memory cost is (d). § COMPLEXITY BOUNDSIn this section we derive the sample and gradient complexities of the proposed algorithms. We first analyze the error of the general SGD framework detailed in Algorithm <ref>.If Algorithm <ref> is used to minimize an L_ψ-Lipschitz continuous and S_ψ-Lipschitz smooth fucntion ψ(x) and ifα≤ 1/(2S_ψ), then we have∇ψ(x̂_T)^2≤2A_1/α T + 2/T∑_t=1^T[L_ψ[v(x_t)-∇ψ(x_t)] +S_ψαv(x_t)-∇ψ(x_t)^2],where A_1:= ψ(x_1) -min_x ψ(x).Lemma <ref> sightly generalizes <cit.>. We defer the proof to Appendix <ref>. Thus, to prove convergence to a stationary point, we need to characterize the bias, variance, and computational costs of the DL-SGD and the RT-MLMC gradient estimators. We have the following results. * The biases of the DL-SGD and RT-MLMC estimators match and satisfyv̂^K(x) - ∇ F(x) = v̂(x) - ∇ F(x)≤μ_g^-1 (1- μ_g/(2L_g,1))^N + (N^2 2^-K/2),and the corresponding variances satisfy Var(v̂^K(x))=(N^2) and Var(v̂(x))=(KN^4).* The numbers of samples and iterations needed by EpochSGD to build a DL-SGD estimator are bounded by N + 2^K+1-1 and 2^K+1-1, respectively. The expected numbers of samples and iterations needed for an RT-MLMC estimator are bounded by N + 3K and 3K, respectively. Lemma <ref> implies that setting N=(log(^-1)) and K = (log(^-1)) reduces the bias to (). In this case the RT-MLMC estimators have higher variances than the DL-SGD estimators, but their variances are still of the order 𝒪(log(ϵ^-1)). 
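Before discussing the implications of this bound, we illustrate the Hessian-vector-product recursion from the memory-cost discussion above. The following PyTorch-style sketch shows one way to evaluate ĉ(x,y;ξ) with automatic differentiation; the helper names, and the assumptions that x and y are one-dimensional tensors with requires_grad=True and that f_fn and g_fn return scalar sample losses, are ours and purely illustrative rather than the released implementation.

import torch

def hess_yy_times_vec(g_fn, x, y, v):
    # Hessian-vector product (d^2 g / dy^2) v via one extra backward pass
    gy = torch.autograd.grad(g_fn(x, y), y, create_graph=True)[0]
    return torch.autograd.grad(gy @ v, y)[0]

def mixed_hess_times_vec(g_fn, x, y, v):
    # Mixed second derivative (d^2 g / dx dy) v, i.e. grad_x of (grad_y g)^T v
    gy = torch.autograd.grad(g_fn(x, y), y, create_graph=True)[0]
    return torch.autograd.grad(gy @ v, x)[0]

def c_hat(f_fn, g_fn, x, y, etas, eta_p, eta_pp, L_g1, N, N_hat):
    # v <- grad_y f(x, y; eta'')
    v = torch.autograd.grad(f_fn(x, y, eta_pp), y)[0].detach()
    # backward recursion: v <- (I - (1/(2 L_g1)) Hess_yy g(.; eta_n)) v, n = N_hat, ..., 1
    for n in range(N_hat, 0, -1):
        hvp = hess_yy_times_vec(lambda a, b: g_fn(a, b, etas[n - 1]), x, y, v)
        v = (v - hvp / (2.0 * L_g1)).detach()
    v = (N / (2.0 * L_g1)) * v
    # finally left-multiply by the mixed second derivative evaluated at eta'
    return mixed_hess_times_vec(lambda a, b: g_fn(a, b, eta_p), x, y, v)

Each step only requires gradients and Hessian-vector products, never an explicit Hessian, which is what keeps the memory cost at the order of d.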
On the other hand, using RT-MLMC estimators reduces the per-iteration sampling and computational costs from (2^K)=(^-2) to (K) = (log(^-1)).Note that <cit.> characterize the properties of general MLMC estimators and derive their complexity bounds. However, the proposed RT-MLMC estimators for CSBO problems are the first of their kind. In addition, as we need to estimate the Hessian inverse Λ(x,y^*(x;ξ);ξ), our analysis is more involved. In contrast to <cit.>, who use MLMC techniques for estimating projections and proximal points, we use MLMC techniques for estimating gradients in bilevel optimization. The following main theorem summarizes our complexity bounds. If Assumption <ref> holds, then Algorithm <ref> based on the RT-MLMCor the DL-SGD estimator outputs an ϵ-stationary point of F provided that N=(log(^-1)), K = (log(^-1)), α = (^2) and T=(^-4). When using the RT-MLMC estimator, the sample complexities of ξ and η as well as the gradient complexities of g and f are (^-4). When using the DL-SGD estimator, the sample complexity of η and the gradient complexity of g are (^-6), while and the sample complexity of ξ and the gradient complexity of f are (^-4). Remark. Theorem <ref> asserts that the sample complexity of η and the gradient complexity of g are much smaller for RT-MLMC than for DL-SGD, while the gradient complexities of f are comparable. When specialized to SBO or CSO problems, the complexity bounds of RT-MLMC match those of the state-of-the-art algorithms ALEST for SBO problems <cit.> and MLMC-based methods for CSO problems <cit.>. When restricted to classical stochastic nonconvex optimization, the complexity bounds of RT-MLMC match the existing lower bounds <cit.>. These observations further highlight the effectiveness of RT-MLMC across various settings. § APPLICATIONS AND NUMERICAL EXPERIMENTS§.§ Meta-LearningOptimization-based meta-learning <cit.> aims to find a common shared regularization parameter for multiple similar yet different machine learning tasks in order to avoid overfitting when training each task separately on their datasets that each only processes a few data points. Recall Equation (<ref>), the objective function of the optimization-based meta-learning <cit.>, min_x  _i∼μ_D^test_i∼ρ_i [l_i(y_i^*(x),D^test_i) ]where y_i^*(x) =_y_i _D^train_i∼ρ_i[ l_i(y_i,D^train_i) + λ/2y_i - x^2], ∀ i∈[M] and x.where μ is the distribution over all M tasks, ρ_i is the distribution of data from the task i, D_i^train and D_i^test are the training and testing dataset of the task i,  x is the shared parameter of all tasks and y_i^*(x)is the best parameter for a regularized objective of task i,  l_i is a loss function that measures the average loss on the dataset of the i-th task,and λ is the regularization parameter to ensure the optimal solution obtained from the lower-level problem is not too far from the shared parameter obtained from the upper-level problem. Note that such a problem also occurs in personalized federated learning with each lower level being one user. Note that the task distribution μ is usually replaced by averaging over all M tasks.In such cases, existing works <cit.> only demonstrate a convergence rate that scales linearly with the number of tasks M.In contrast, the sample complexity of our proposed method does not depend on the number of tasks M, enabling substantially faster computation for a larger M. 
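To make the lower-level step of this formulation concrete, the following minimal Python sketch applies EpochSGD (Algorithm <ref>) to the regularized objective of a single sampled task i; the stochastic gradient oracle sample_grad and the constants are illustrative assumptions of this sketch, not part of any released implementation.

import numpy as np

def epoch_sgd_lower_level(x, sample_grad, lam, K, beta0=0.25):
    """Sketch of EpochSGD for  min_y  E_D[ l(y, D) ] + (lam/2) * ||y - x||^2.
    sample_grad(y) is assumed to return an unbiased stochastic gradient of
    E_D[ l(y, D) ], e.g. the task-loss gradient on one sampled mini-batch."""
    y = x.copy()                        # warm start at the regularization center x
    for k in range(1, K + 1):
        beta_k = beta0 / 2 ** k         # stepsize is halved in every epoch
        iterates = []
        for _ in range(2 ** k):         # epoch k performs 2^k stochastic gradient steps
            g = sample_grad(y) + lam * (y - x)
            y = y - beta_k * g
            iterates.append(y)
        y = np.mean(iterates, axis=0)   # the next epoch starts from the epoch average
    return y                            # estimator of y_i^*(x) used in the upper-level gradient

Because the cost of one such call grows like 2^K, the RT-MLMC estimator draws the number of epochs at random from a truncated geometric distribution, so that deep (expensive) calls are rare while the bias of the full K-epoch solution is retained in expectation.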
The seminal work, Model-agnostic Meta-learning (MAML) <cit.>, is an approximation of Problem (<ref>) obtained by replacing y_i^*(x) with a one-step gradient update, i.e., ŷ_i(x):= x - λ^-1∇ l_i(x, D_i^train). We study the case where the loss function l_i(x, D) is, for every i, a multi-class logistic loss using a linear classifier. The experiment is conducted on tinyImageNet <cit.> by pre-processing it with the pre-trained ResNet-18 network <cit.> to extract linear features. Since the network has learned a rich set of hierarchical features from the ImageNet dataset <cit.>, it typically extracts useful features for other image datasets. Note that each task consists of labels of similar characteristics.
Table: The computation time of DL-SGD/RT-MLMC gradient estimators on meta-learning.
K | DL-SGD Mean | DL-SGD Variance | RT-MLMC Mean | RT-MLMC Variance
6 | 2.65e-02 | 6.34e-03 | 2.73e-02 | 1.46e-02
8 | 7.23e-02 | 7.77e-03 | 3.41e-02 | 1.85e-02
10 | 2.48e-01 | 2.75e-02 | 4.93e-02 | 4.06e-02
12 | 9.38e-01 | 3.71e-02 | 1.08e-01 | 5.44e-02
Figure <ref> presents the average logistic loss evaluated on the test dataset against the number of iterations, with each iteration representing one upper-level update. From the plot, we see that both the DL-SGD and RT-MLMC methods tend to have better generalization performance when using a larger number of levels K. As shown in Table <ref>, RT-MLMC is about 9 times faster than DL-SGD at computing the upper-level gradient estimator when K is large. In contrast, the MAML baseline does not have superior performance since the one-step gradient update does not solve the lower-level problem to approximate global optimality. In Appendix <ref>, we provide numerical results for a modified MAML algorithm with multiple gradient updates, which achieves better performance compared to MAML but is still worse than our proposed method. After the initial submission of the paper, a concurrent work <cit.> proposed two types of algorithms (BSVRB^v1 and BSVRB^v2) that apply to the meta-learning formulation (<ref>). Their proposed algorithm BSVRB^v1 is computationally expensive as it requires the exact computation of the inverse of a Hessian matrix (which is of size 5120×5120 in this example) with respect to θ in each iteration of the upper-level update. In the following, we compare the performance of our algorithm with their proposed Hessian-free algorithm BSVRB^v2 in Figure <ref>. In the left plot of Figure <ref>, we examine the performance of the RT-MLMC method and BSVRB^v2 by running the same number of total epochs. It shows that the RT-MLMC method has much better performance in terms of test error. In the right plot of Figure <ref>, we examine the performance of the RT-MLMC method and BSVRB^v2 by running the same amount of computational time. Although the per-upper-level-iteration computational cost of BSVRB^v2 is small, it takes a much longer time for BSVRB^v2 to achieve a test error similar to that of RT-MLMC. §.§ Wasserstein DRO with Side Information The WDRO-SI <cit.> studies robust stochastic optimization with side information <cit.>. Let ξ denote the side information and η denote the randomness dependent on ξ. The WDRO-SI seeks to find a parameterized mapping f(x;ξ) from the side information ξ to a decision w that minimizes the expected loss w.r.t. η under the worst-case distributional shifts over (ξ, η).
Rigorously, with a penalty on the distributionally robust constraint, WDRO-SI admits the form min_x max_ℙ {𝔼_(ξ,η)∼ℙ[l(f(x,ξ); η)] - λ𝖢(ℙ, ℙ^0) }, where l(w;η) is the loss function dependent on the decision w and the random variable η, ℙ^0 is the nominal distribution of (ξ,η) that usually takes the form of an empirical distribution, and 𝖢(·,·) is a causal transport distance between distributions <cit.> – a variant of the Wasserstein distance that better captures the information from ξ. For distributionally robust feature-based newsvendor problems <cit.>, the covariate ξ can be temporal, spatial, or weather information, η is the random demand, f(x;ξ) denotes the ordering quantity for a given ξ, and l(f(x;ξ);η) characterizes the loss if the ordering quantity f(x;ξ) does not match the demand η. Incorporating the cost function of the causal transport distance used in <cit.> and utilizing the dual form, the WDRO-SI problem in (<ref>) can be reformulated as a special case of CSBO: min_x  _ξ∼ℙ_ξ^0_η∼ℙ_η|ξ^0 [l(f(x; y^*(x;ξ)), η) - λy^*(x;ξ) - ξ^2 ] (upper level) where y^*(x;ξ) :=_ξ̃ _η∼ℙ_η|ξ^0[ -l(f(x;ξ̃), η) + λξ - ξ̃^2 ],  ∀ξ̃ and x. (lower level) The original work <cit.> only allows an affine function f or a non-parametric approximation, while our approach allows using a neural network approximation such that f(x;ξ) is a neural network parameterized by x. Using Theorem <ref>, we obtain the first sample and gradient complexities for WDRO-SI. For the distributionally robust feature-based newsvendor problems, we compare the performance of DL-SGD and RT-MLMC. We compare with ERM and WDRO, which do not incorporate side information. Fig. <ref> (left) and (middle) present the results of test loss versus the number of upper-level iterations for DL-SGD and RT-MLMC, respectively. From the plots, using a larger number of epochs K for the lower-level problem generally admits lower testing loss values, i.e., better generalization performance.
Table: Computation time of DL-SGD/RT-MLMC gradient estimators for WDRO-SI.
K | DL-SGD Mean | DL-SGD Variance | RT-MLMC Mean | RT-MLMC Variance
2 | 1.27e-02 | 2.67e-03 | 5.04e-03 | 7.26e-04
4 | 5.25e-02 | 2.58e-03 | 1.25e-02 | 8.26e-03
6 | 1.68e-01 | 2.74e-03 | 2.02e-02 | 9.39e-03
8 | 4.63e-01 | 2.08e-03 | 3.41e-02 | 1.68e-02
Fig. <ref> (right) highlights the importance of incorporating side information, as WDRO-SI outperforms the other two baselines. In addition, more observations of η for a given side information ξ can enhance the performance. Table <ref> reports the computational time for the DL-SGD and RT-MLMC gradient estimators, and RT-MLMC is significantly faster since it properly balances the bias-variance-computation trade-off for the gradient simulation. § CONCLUSION We introduced the class of contextual stochastic bilevel optimization problems, which involve a contextual stochastic optimization problem at the lower level. In addition, we designed efficient gradient-based solution schemes and analyzed their sample and gradient complexities. Numerical results on two complementary applications showcase the expressiveness of the proposed problem class as well as the efficiency of the proposed algorithms. Future research should address generalized CSBO problems with constraints at the lower level, which will require alternative gradient estimators.
This research was supported bythe Swiss National Science Foundation under the NCCR Automation, grant agreement 51NF40_180545, an NSF CAREER CCF-1650913, NSF DMS-2134037, CMMI-2015787, CMMI-2112533, DMS-1938106, DMS-1830210, and the Coca-Cola Foundation.plainnat § PROOFS OF TECHNICAL RESULTSSince the function ψ is S_ψ-Lipschitz smooth, we haveψ(x_t+1) - ψ(x_t)≤ ∇ψ(x_t)^⊤ (x_t+1 - x_t) +S_ψ/2x_t+1 - x_t^2= -α_t ∇ψ(x_t)^⊤ v(x_t) +S_ψα_t^2/2v(x_t)^2 ≤-α_t ∇ψ(x_t)^2 + α_t ∇ψ(x_t)^⊤ (∇ψ(x_t) - v(x_t)) +S_ψα_t^2v(x_t)-∇ψ(x_t)^2 + S_ψα_t^2 ∇ψ(x_t)^2= -(α_t - S_ψα_t^2) ∇ψ(x_t)^2 + α_t ∇ψ(x_t)^⊤ (∇ψ(x_t) - v(x_t)) +S_ψα_t^2v(x_t)-∇ψ(x_t)^2 = -(α_t - S_ψα_t^2) ∇ψ(x_t)^2 + α_t[∇ψ(x_t)^⊤[∇ψ(x_t) - v(x_t)| x_t]]+S_ψα_t^2v(x_t)-∇ψ(x_t)^2≤-α_t/2 ∇ψ(x_t)^2 + α_t[∇ψ(x_t)^⊤[∇ψ(x_t) - v(x_t)| x_t] ]+S_ψα_t^2v(x_t)-∇ψ(x_t)^2 ≤-α_t/2 ∇ψ(x_t)^2 + α_t[∇ψ(x_t)[∇ψ(x_t) - v(x_t)| x_t] ]+S_ψα_t^2v(x_t)-∇ψ(x_t)^2,where the first inequality uses Lipschitz smoothness of ψ, the first equality uses the updates of the SGD algorithm, the second inequality uses the Cauchy-Schwarz inequality, the third equality uses the conditional expectation and the tower property, the third inequality uses the fact that α_t≤ 1/(2S_ψ), and the fourth inequality uses the Cauchy-Schwarz inequality. Rearranging terms and setting α_t=α, we have∇ψ(x_t)^2 ≤2(ψ(x_t) - ψ(x_t+1))/α+ 2∇ψ(x_t)[∇ψ(x_t) - v(x_t)| x_t] + 2S_ψαv(x_t)-∇ψ(x_t)^2.Averaging from t= 1 to t=T, we have∇ψ(x̂_T)^2=1/T∑_t=1^T∇ψ(x_t)^2≤2(ψ(x_1) - min_xψ(x))/α T+ 2/T∑_t=1^T∇ψ(x_t)[∇ψ(x_t) - v(x_t)| x_t] +2S_ψα/T∑_t=1^Tv(x_t)-∇ψ(x_t)^2 ≤2( ψ(x_1) - min_xψ(x))/α T + 2/T∑_t=1^T[ L_ψ[∇ψ(x_t) - v(x_t)] +S_ψαv(x_t)-∇ψ(x_t)^2],where the inequality holds as ψ is L_ψ-Lipschitz continuous and thus ∇ψ(x)≤ L_ψ for all x. To demonstrate the bias, variance as well as sampling and computational costs for building DL-SGD and RT-MLMC gradient estimators, we first show the iterate convergence of EpochSGD on the lower-level minimization problem with the side information. The analysis follows similarly as <cit.>.For given x and ξ, the iterates y_K+1^0 of Algorithm <ref> with β_0=(4μ_g)^-1 satisfiesy_K+1^0 - y^*(x;ξ)^2≤ 2L_g,0^2μ_g^-2 2^-(K+1). It is important to note that the initial stepsize, denoted as β_0, doesn't necessarily have to be equal to (4μ_g)^-1. Indeed, equivalent results can be achieved with a constant β_0>0. The choice of this specific β_0 value is primarily to streamline the analysis. In reality, there are numerous instances where the value of μ_g is unknown beforehand. Under such circumstances, one practical approach could be to set β_0 to a standard value, such as 0.4. Denote G(x,y;ξ):=_η|ξ g(x,y;η,ξ). For the ease of notation, throughout the proof,denotes taking full expectation conditioned on a given x and ξ. Utilizing the update of y in the k-th epoch of EpochSGD algorithm, it holds for any y and any j=0,…, 2^k-1 thaty^j+1_k - y^2=y^j_k-β_k ∇_2 g(x,y^j_k;η^j_k,ξ) - y^2=y^j_k - y^2+β_k^2 ∇_2 g(x,y^j_k;η^j_k,ξ)^2 - 2β_k ∇_2 g(x,y^j_k;η^j_k,ξ)^⊤ (y^j_k-y) ≤ y^j_k - y^2+β_k^2 L_g,0^2 - 2β_k ( G(x,y^j_k;ξ) - G(x,y;ξ)),where the inequality utilizes the convexity of g and G in y. Rearranging terms, the above inequality yieldsG(x,y^j_k;ξ) - G(x,y;ξ) ≤y^j_k - y^2 - y^j+1_k - y^2/2β_k + β_k L_g,0^2/2.Summing up from j=0 to j=2^k-1 and dividing 2^k on both sides, we obtain the relationG(x,y^0_k+1;ξ)-G(x,y;ξ) ≤ 1/2^k∑_j=0^2^k-1 G(x,y^j_k;ξ) - G(x,y;ξ) ≤y^0_k - y^2/2β_k 2^k + β_k L_g,0^2/2,where the inequality uses the convexity of G in y and Jensen's inequality. 
Since G(x,y;ξ) is μ_g-strongly convex in y for any given x and ξ, it holds that G(x,y_1^0;ξ) - G(x,y^*(x;ξ);ξ)≤ -∇_2 G(x,y_1^0;ξ)^⊤ (y^*(x;ξ) - y_1^0) -μ_g/2y^*(x;ξ) - y_1^0^2≤ max_y {-∇_2 G(x,y_1^0;ξ)^⊤ (y - y_1^0) -μ_g/2y - y_1^0^2 }=∇_2 G(x,y_1^0;ξ)^2/2μ_g≤L_g,0^2/2μ_g.Next, we use induction on k to show G(x,y_k^0;ξ) - G(x,y^*(x;ξ);ξ) ≤L_g,0^2/2^kμ_g.The base step for k=1 follows from the inequality established above. As for the induction step, suppose that G(x,y_k^0;ξ) - G(x,y^*(x;ξ);ξ) ≤L_g,0^2/2^kμ_g holds for some k≥ n. Plugging y=y^*(x;ξ) into (<ref>), we thus findG(x,y_k+1^0;ξ) - G(x,y^*(x;ξ);ξ)≤ y^0_k - y^*(x;ξ)^2/2β_k 2^k + β_k L_g,0^2/2 ≤G(x,y_k^0;ξ) - G(x,y^*(x;ξ);ξ)/μ_gβ_k 2^k + β_0 L_g,0^2/2^K+1= G(x,y_k^0;ξ) - G(x,y^*(x;ξ);ξ)/μ_gβ_0 + β_0 L_g,0^2/2^K+1 ≤ L_g,0^2/μ_g^2β_0 2^k + β_0 L_g,0^2/2^k+1=L_g,0^2/μ_g 2^k+2 +L_g,0^2/μ_g 2^k+3 ≤ L_g,0^2/μ_g 2^k+1,where the second inequality uses the strong convexity of G in y and the fact that y^*(x;ξ) minimizes G, the first equality uses our assumption that β_k=β_0/2^k, the third inequality uses the induction condition, and the second equality inequality uses the definition of β_0= (4μ_g)^-1. It concludes the induction. Therefore, we haveG(x,y_K+1^0;ξ) - G(x,y^*(x;ξ);ξ)≤L_g,0^2/μ_g 2^K+1.By the μ_g-strong convexity of G(x,y,;ξ) and the fact that y^*(x;ξ) is the minimizer, we thus havey_K+1^0 - y^*(x;ξ)^2≤2/μ_g G(x,y_K+1^0;ξ) - G(x,y^*(x;ξ);ξ)≤L_g,0^2/μ_g^2 2^K. We first demonstrate the properties of the RT-MLMC gradient estimator and then show that of the DL-SGD gradient estimator. To facilitate the analysis, we defineV(x) =__ξ,_η|ξ[∇_1 f(x,y^*(x;ξ);η,ξ) - (__η^'|ξ∇_12^2 g(x,y^*(x;ξ);η',ξ) ) _{η_n}_n=1^N̂∼_η|ξ [Λ̂(x, y^*(x;ξ);ξ)] ∇_2 f(x,y^*(x;ξ);η,ξ) ].RT-MLMC gradient estimator By the triangle inequality, we havev̂(x) - ∇ F(x)≤v̂(x) - V(x) +V(x) - ∇ F(x).From <cit.>, we knowfor given x, y, and ξ that_{η_n}_n=1^N̂∼_η|ξ [Λ̂(x, y;ξ)] -Λ(x,y;ξ)≤1/μ_g(1- μ_g/2L_g,1)^NUtilizing Lipschitz continuity of ∇ g and f, we know thatV(x) - ∇ F(x)≤L_g,1 L_f,0/μ_g(1- μ_g/2L_g,1)^N.On the other hand, by the definition of v̂(x), we havev̂(x) -V(x) =∇_1 f(x,y_K+1^0;η,ξ)- ∇_1 f(x,y^*(x;ξ);η,ξ) + ∇_12^2 g(x,y_K+1^0;η,ξ) [Λ̂(x, y^0_K+1;ξ)] ∇_2 f(x,y_K+1^0;η,ξ) - ∇_12^2 g(x,y^*(x;ξ);η^',ξ) [Λ̂(x, y^*(x;ξ);ξ)] ∇_2 f(x,y^*(x;ξ);η^',ξ).By the Lipschitz continuity of ∇ f and Lemma <ref>, we have∇_1 f(x,y_K+1^0;η,ξ) - ∇_1 f(x,y^*(x;ξ);η,ξ) ≤L_f,1y_K+1^0 - y^*(x;ξ) ≤ L_f,1 L_g,0/μ_g 2^K/2.Utilizing the Lipschitz continuity of ∇ f, ∇ g, and ∇^2 g in y, we havev̂(x) -V(x) ≤L_f,1y_K+1^0 - y^*(x;ξ) + L_g,1 L_f,1N/L_g,1y_K+1^0 - y^*(x;ξ) + L_f,0 L_g,2N/L_g,1y_K+1^0 - y^*(x;ξ) + L_f,0 L_g,1N/L_g,1∑_N^' =1^N1/N^' N^'y_K+1^0 - y^*(x;ξ)≤ L_f,1 L_g,0/μ_g 2^K/2 + (L_g,1 L_f,1N/L_g,1 + L_f,0 L_g,2N/L_g,1 + L_f,0 L_g,1N^2/L_g,1) L_g,0/μ_g 2^K/2=(N^2/2^K/2).Next, we show the sampling and computational costs. To build up the RT-MLMC gradient estimator v̂(x), we need one sample of ξ, and the number of samples of η from _η|ξ is∑_N^'=1^N1/N^' N^' + ∑_k=1^K (2^k+1-1) 2^-k/1-2^-K - 1 < N + 3K.On average, the iteration needed for EpochSGD is ∑_k=1^K (2^k+1-1) 2^-k/1-2^-K - 1 <3K.Next, we demonstrate the variance of v̂(x). 
Denote H_K(1) := ∇_1 f(x,y_K^0;η^'',ξ), H_K(2) := ∇_12^2 g(x,y_K^0;η^',ξ)[Λ̂(x, y^0_K;ξ)]∇_2 f(x,y_K^0;η^'',ξ),H^*(1) := ∇_1 f(x,y^*(x;ξ);η^'',ξ), H^*(2) := ∇_12^2 g(x,y^*(x;ξ);η^',ξ)[Λ̂(x, y^*(x;ξ);ξ)]∇_2 f(x,y^*(x;ξ);η^'',ξ).Thus one may rewrite v̂(x) and V(x) as v̂(x) = H_k+1(1) - H_k(1) - H_k+1(2) + H_k(2)/p_k + H_1(1)- H_1(2),V(x) = [H^*(1)- H^*(2)].It holds that v̂(x) - v̂(x)^2 ≤ v̂(x) - V(x)^2 ≤ 2v̂(x) - H_1(1)+ H_1(2)^2 + 2H_1(1)- H_1(2) -V(x) ^2≤41/p_k (H_k+1(1) - H_k(1))^2 + 41/p_k (H_k+1(2) - H_k(2))^2 + 2H_1(1)- H_1(2) -V(x) ^2,where the first inequality holds by the definition of variance, the second inequality holds by the Cauchy-Schwarz inequality, the third inequality uses the Cauchy-Schwarz inequality and the definition of v̂(x). It remains to analyze1/p_k (H_k+1(1) - H_k(1))^2, 1/p_k (H_k+1(2) - H_k(2))^2, and H_1(1)- H_1(2) -V(x) ^2.* For the first term, we have1/p_k (H_k+1(1) - H_k(1))^2=∑_k=1^K p_k^-1H_k+1(1) - H_k(1)^2 ≤ ∑_k=1^K p_k^-1 L_f,1^2 y_k+1^0 -y_k^0^2≤ ∑_k=1^K p_k^-1 L_f,1^2 2(y_k+1^0 -y^*(x;ξ)+y^*(x;ξ)- y_k^0^2)≤ ∑_k=1^K p_k^-1 L_f,1^2 6L_g,0^2/μ_g^2 2^k ≤K L_f,1^2 6L_g,0^2/μ_g^2,where the first inequality uses the Lipschitz continuity of ∇ f, the second inequality uses the Cauchy-Schwarz inequality, the third inequality uses Lemma <ref>, and the last inequality uses the definition of p_k.* For the second term, we may conduct a similar analysis. 1/p_k (H_k+1(2) - H_k(2))^2 =∑_k=1^K p_k^-1H_k+1(2) - H_K(2)^2≤ ∑_k=1^K p_k^-1 6(L_f,1^2N^2 + L_f,0^2 L_g,2^2N^2/L_g,1^2 + L_f,0^2 N^4) y_K+1^0 -y_K^0^2 ≤ ∑_k=1^K p_k^-1 2(L_g,0^2/μ_g^2 2^K+L_g,0^2/μ_g^2 2^K-1) 6(L_f,1^2N^2 + L_f,0^2 L_g,2^2N^2/L_g,1^2 + L_f,0^2 N^4)=36KL_g,0^2/μ_g^2(L_f,1^2N^2 + L_f,0^2 L_g,2^2N^2/L_g,1^2 + L_f,0^2 N^4),where the first inequality follows utilizing Lipschitz continuity of f, ∇ f, ∇ g, ∇^2 g in y, the second inequality uses Lemma <ref>, and the last inequality uses the definition of p_k.* For the third term, we haveH_1(1)- H_1(2) -V(x) ^2 ≤2 H_1(1)- H^*(1)^2 + 2 H_1(2)- H^*(2)^2Notice that H_1(1)- H^*(1)^2=∇_1 f(x,y_1^0;η,ξ)-∇_1 f(x,y_1^0;η,ξ)^2+ ∇_1 f(x,y_1^0;η,ξ)-∇_1 f(x,y^*(x;ξ);η,ξ)^2 ≤ σ_f^2+ L_f,1^2 L_g,0^2/2μ_g^2,where the first equality holds as a+b^2 = a^2 + b^2 +2 a^⊤ b, andlast inequality holds by Lemma <ref> and the Lipschitz continuity of ∇ f. On the other hand, we haveH_1(2)- H^*(2)^2 = H_1(2)- H_1(2)^2 +H_1(2) -H^*(2)^2 ≤H_1(2)^2 +H_1(2) - H^*(2)^2≤L_g,1^2 L_f,0^2 ∑_N̂ = 1^N 1/N̂N^2/4L_g,1 + 6(L_f,1^2N^2 + L_f,0^2 L_g,2^2N^2/L_g,1^2 + L_f,0^2 N^4)L_g,0^2/2μ_g^2= L_g,1^2 L_f,0^2 N^2/4L_g,1 + 6(L_f,1^2N^2 + L_f,0^2 L_g,2^2N^2/L_g,1^2 + L_f,0^2 N^4)L_g,0^2/2μ_g^2,where the first equality uses a+b^2 = a^2 + b^2 +2 a^⊤ b, andlast inequality holds by Lemma <ref> and the Lipschitz continuity.As a result, we conclude that the variance satisfies thatv̂(x)-v̂(x)^2 ≤v̂(x) - V (x)^2 =(K N^4).DL-SGD gradient estimators Note that v̂(x)= v̂^K(x). Thus the bias follows directly from the analysis of RT-MLMC.Next, we consider the per-iteration sampling costs and the average number of iterations for the EpochSGD. DL-SGD runs EpochSGD as a subroutine for 2^K+1-1 number of iterations. Thus the sampling cost on average is N+2^K+1-1.Consider the variance of the DL-SGD method. 
Note that v̂^K(x) = H_K+1(1) - H_K+1(2)Following a similar decomposition as we did for the third term in bounding the variance of RT-MLMC estimators, we have v̂^K(x) - v̂^K(x)^2 ≤ v̂^K(x) - V(x)^2 ≤2H_K+1(1) -H^*(1)^2 + 2H_K+1(2) -H^*(2)^2≤2H_K+1(1) -H_K+1(1) ^2 + 2 H_K+1(1)-H^*(1)^2 + 2H_K+1(2)^2 +2 H_K+1(2) -H^*(2)^2≤2σ_f^2+ L_f,1^2 L_g,0^2/2^Kμ_g^2 + L_g,1^2 L_f,0^2 N^2/2L_g,1 + 12(L_f,1^2N^2 + L_f,0^2 L_g,2^2N^2/L_g,1^2 + L_f,0^2 N^4)L_g,0^2/2^K+1μ_g^2=(N^2 + N^4 2^-K),where the first inequality uses the definition of variance, the second inequality uses the Cauchy-Schwarz inequality, and the third and the last equality uses Lipschitz continuity of f, ∇ f and ∇ g and follows a similar argument as in bounding the third term for the variance of RT-MLMC estimators. Note that to control the bias, we let both N and K to be of order (log(ϵ^-1)). Thus N^2 is the dominating term in the variance of DL-SGD gradient estimator. We first demonstrate the analysis for RT-MLMC. RT-MLMC gradient method:combining Lemmas <ref> and<ref>, we know that∇ F(x̂_T)^2≤ 2( F(x_1) - min_x F(x))/α T+ 2/T∑_t=1^T∇ F(x_t)[∇ F(x_t) - v̂(x_t)| x_t]+2S_Fα/T∑_t=1^Tv̂(x_t)-∇ F(x_t)^2 ≤ 2( F(x_1) - min_x F(x))/α T+ 2/T∑_t=1^T∇ F(x_t)[∇ F(x_t) - v̂(x_t)| x_t]+4S_Fα/T∑_t=1^Tv̂(x_t)-V(x_t)^2+ 4S_Fα/T∑_t=1^TV(x_t) - ∇ F(x_t)^2=(1/α T + S_F 1/μ_g(1- μ_g/2L_g,1)^N +S_FN^2/μ_g 2^K/2 + α S_FKN^4/μ_g^2 + α1/μ_g^2(1- μ_g/2L_g,1)^2N).To ensure that x̂_T is an -stationarity point, it suffices to let the right hand side of the inequality above to be (^2). Correspondingly, we set N = (log(^-1)), K = (log(^-2)), α = (T^-1/2), and T = (^-4). As a result, the total sampling and the gradient complexity over g are of order 3KT=(^-4). Since at each upper iteration, we only compute one gradient of f, the gradient complexity over f is T=(^-4).Next, we demonstrate the analysis for DL-SGD. DL-SGD gradient method: combining Lemmas <ref> and <ref>, we have∇ F(x̂_T)^2≤ 2( F(x_1) - min_x F(x))/α T+ 2/T∑_t=1^T∇ F(x_t)[∇ F(x_t) - v̂^K(x_t)| x_t] +2S_Fα/T∑_t=1^Tv̂^K(x_t) - ∇ F(x_t)^2 ≤ 2( F(x_1) - min_x F(x))/α T+ 2/T∑_t=1^T∇ F(x_t)[∇ F(x_t) - v̂^K(x_t)| x_t] +4S_Fα/T∑_t=1^Tv̂^K(x_t) - V(x_t)^2 +4S_Fα/T∑_t=1^TV(x_t) - ∇ F(x_t)^2=(1/α T + S_F 1/μ_g(1- μ_g/2L_g,1)^N +S_FN^2/μ_g 2^K/2 + α S_F N^2 + α1/μ_g^2(1- μ_g/2L_g,1)^2N).To ensure that x̂_T is an -stationarity point, it suffices to let N = (log(^-1)), K = (log(^-2)), α = (T^-1/2), and T = (^-4). As a result, the sample complexity and the gradient complexity over g is of order (2^KT)=(^-6). At each upper iteration, we compute one gradient of f, and thusthe gradient complexity over f isT=(^-4). § COMPUTINGTo derive an explicit formula for ∇_1 y^*(x;ξ),we use the first-order optimality condition of the unconstrained lower-level problem. 
Indeed, as g(x,y;η,ξ) is strongly convex in y, for given x and ξ, y^*(x;ξ) is the unique solution of the equation__η|ξ∇_2 g(x,y^*(x;ξ);η,ξ) = 0.Taking gradients with respect to x on both sides and using the chain rule, we obtain__η|ξ[∇_21^2 g(x,y^*(x;ξ);η,ξ) +∇_22^2g(x,y^*(x;ξ);η,ξ) ∇_1 y^*(x;ξ) ]=0.Since g(x,y;η,ξ) is μ_g-strongly convex in y for any given x, η, and ξ,the expected Hessian matrix __η|ξ∇_22^2g(x, y; η, ξ)∈𝕊^d_y_+ is invertible, and thus we find∇_1 y^*(x;ξ)^⊤ = - (__η|ξ∇_12^2 g(x,y^*(x;ξ);η,ξ) ) (__η|ξ∇_22^2g(x, y^*(x;ξ); η, ξ) )^-1,where __η|ξ∇_12^2 g(x,y^*(x;ξ);η,ξ) is in fact the transpose of __η|ξ∇_21^2 g(x,y^*(x;ξ);η,ξ).§ EFFICIENT IMPLEMENTATION FOR HESSIAN VECTOR PRODUCTS Our goal is to compute ĉ(x,y;ξ) =∇_12^2 g(x,y;η',ξ) Λ̂(x, y;ξ) ∇_2 f(x,y;η”,ξ) in(<ref>) efficiently.-0.2in § IMPLEMENTATION DETAILS In this section, we provide all relevant implementation details. We fine-tune the stepsize for all approaches using the following strategy: we pick a fixed breakpoint denoted as t_0 and adjust the stepsize accordingly.Specifically, for the t-th outer iteration when t≤ t_0, the stepsize is set as 1/√(t), while for iterations beyond t_0, the stepsize is set as 1/t. We choose the breakpoint t_0 such that training loss remains relatively stable as the number of outer iterations approaches t_0. §.§ Meta-LearningLet D=(a,b) denote the training or testing data, where a∈ℝ^d represents the feature vector and b∈[C] represents the label with C categories. In this section, the loss ℓ_i(x, D) is defined as a multi-class logistic loss, given by:ℒ(x; D) = -b^T x^⊤ a + log(1^⊤ e^x^⊤ a),where the parameter x∈ℝ^d× C stands for the linear classifier, b∈{0,1}^C stands for the corresponding one-hot vector of the label b. The experiment utilizes the tinyImageNet dataset, which consists of 100,000 images belonging to 200 classes. After data pre-processing, each image has a dimension of 512. For dataset splitting, we divide the dataset such that 90% of the images from each class are assigned to the training set, while the remaining 10% belong to the testing set. The meta-learning task comprises 20 tasks, with each task involving the classification of 10 classes. Additionally, we set the hyper-parameter λ=2. Finally, we provide additional experiments for meta-learning in Figure <ref>. In the left plot, we examine the performance of MAML by varying the stepsize of inner-level gradient update from {5e-3,1e-2,5e-2,1e-1,2e-1}. From the plot we can tell that when using small stepsize, MAML tends to have similar performance whereas the performance of MAML tend to degrade when using large stepsize. In the right plot, we examine the performance of m-step MAML against upper-level iterations. The m-step MAML is an approximation of problem (<ref>) via replacing y_i(x) with the m-step gradient update ŷ_i,m(x), which is defined recursively: ŷ_i,0(x)=x, ŷ_i,k(x)=ŷ_i,k-1(x)-γ∇[ l_i(ŷ_i,k-1(x), D_i,k-1^train)+λ/2ŷ_i,k-1(x)-x^2],with stepsize γ and D_i,k-1^train∼ρ_i for k=1,…,m. Here we take the number of gradient updates at inner level m from {1,4,8,12}. From the plot, we realize that multi-step MAML tends to have better performance, but it still cannot outperform the RT-MLMC method.§.§ Wasserstein DRO with Side Information In this subsection, we take the loss function l(w, η) = h(w-η)_+ + b(η-w)_+,where h>0 is a constant representing the per-unit holding cost and b>0 is a constant representing the per-unit backlog cost. 
Since Assumption <ref> does not hold for the objective function due toits non-smooth structure, we approximate the loss function with the smoothed versionl_β(w, η) = h/βlog(1 + e^β(w-η)) + b/βlog(1 + e^β(η-w)),where we specify the hyper-parameter β=5 to balance the trade-off between loss function approximation and smoothness.The synthetic dataset in this part is generated in the following procedure:the covariate ξ is sampled from the 100-dimensional uniform distribution supported on [-15,15]^100. The demand η depends on ξ in a nonlinear way: η =f_NN(x; ξ) + ϵ, ϵ∼𝒩(0,1),f_NN(x; ξ)= 10*Sigmoid(W_3·ReLU(W_2·ReLU(W_1ξ+b_1)+b_2) + b_3),where the neural network parameter x:=(W_1,b_1,…, W_3,b_3). In particular, the ground-truth neural network parameter is generated using the uniform initialization procedure by callingin pytorch. We quantify the performance of a given θ using the testing loss 𝔼_(ξ,η)∼ℙ^*[l(f(x;ξ), η)], where the expectation is estimated using sample average approximation based on 2·10^5 sample points. When creating training dataset, we generate M=50 samples of ξ, denoted as {ξ_i}_i∈[M] and for each x_i, we generate m∈{10,30,50,100} samples of η from the conditional distribution ℙ_η|ξ_i. When generating the left two plots of Fig. <ref>, we specify m=100.When solving the WDRO baseline, we consider the formulationmin_x max_ℙ {𝔼_(ξ,η)∼ℙ[l_β(f(x; ξ); η)] - λ𝒲(ℙ, ℙ^0) },with 𝒲(·,·) being the standard Wasserstein distance using the same cost function as the casual transport distance.We apply the SGD algorithm developed in <cit.> to solve the WDRO formulation. See Algorithm <ref> for detailed implementation. The hyper-parameter λ for WDRO-SI or WDRO formulation has been fine-tuned via grid search from the set {1,10,50,100,150} for optimal performance.-0.2in
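To make the experimental setup above concrete, here is a minimal sketch of the smoothed newsvendor loss l_β and the synthetic conditional data generation; the function names, the hidden width of the ground-truth network, and the default cost values h and b are our own assumptions for illustration and are not taken from the original implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def smoothed_newsvendor_loss(w, eta, h=1.0, b=2.0, beta=5.0):
    """Smoothed newsvendor loss l_beta(w, eta).

    Approximates h*(w - eta)_+ + b*(eta - w)_+ with softplus terms so that the
    objective becomes Lipschitz smooth; beta trades off smoothness against
    approximation error, and h, b are per-unit holding/backlog costs (values assumed).
    """
    return (h / beta) * F.softplus(beta * (w - eta)) \
         + (b / beta) * F.softplus(beta * (eta - w))

def make_ground_truth_net(dim_xi=100, hidden=64, seed=0):
    """Network with the stated architecture: sigmoid output later scaled by 10.

    Only the input dimension (100) and the 10*sigmoid scaling come from the text;
    the hidden width is an assumption.
    """
    torch.manual_seed(seed)
    return nn.Sequential(
        nn.Linear(dim_xi, hidden), nn.ReLU(),
        nn.Linear(hidden, hidden), nn.ReLU(),
        nn.Linear(hidden, 1), nn.Sigmoid(),
    )

def sample_training_data(net, M=50, m=100, dim_xi=100):
    """Draw M covariates xi ~ Unif[-15, 15]^100 and, for each xi, m demands
    eta = 10 * f_NN(xi) + eps with eps ~ N(0, 1)."""
    xi = torch.rand(M, dim_xi) * 30.0 - 15.0
    with torch.no_grad():
        mean_demand = 10.0 * net(xi)        # shape (M, 1)
    eta = mean_demand + torch.randn(M, m)   # broadcast to (M, m)
    return xi, eta
```

In a run following this setup, smoothed_newsvendor_loss would play the role of l_β inside the DL-SGD or RT-MLMC loops, with the decision map f(x;ξ) parameterized by a network of the same form.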
http://arxiv.org/abs/2310.18535v1
{ "authors": [ "Yifan Hu", "Jie Wang", "Yao Xie", "Andreas Krause", "Daniel Kuhn" ], "categories": [ "math.OC", "cs.LG" ], "primary_category": "math.OC", "published": "20231027232437", "title": "Contextual Stochastic Bilevel Optimization" }
End-to-end Video Gaze Estimation via Capturing Head-face-eye Spatial-temporal Interaction Context

Yiran Guan^*, Zhuoguang Chen^*, Wenzheng Zeng†, Zhiguo Cao, Member, IEEE, and Yang Xiao†

This work is supported by the National Natural Science Foundation of China (Grant No. 62271221). Yiran Guan, Zhuoguang Chen, Wenzheng Zeng, Zhiguo Cao, and Yang Xiao are with the National Key Laboratory of Science and Technology on Multi-Spectral Information Processing, School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, China. E-mail: yiranguan, zgchen33, wenzhengzeng, zgcao, [email protected]. * Yiran Guan and Zhuoguang Chen contributed equally. † Wenzheng Zeng and Yang Xiao are corresponding authors.

January 14, 2024

In this letter, we propose a new method, Multi-Clue Gaze (MCGaze), to facilitate video gaze estimation by capturing the spatial-temporal interaction context among head, face, and eye in an end-to-end learning manner, which has not been well explored yet. The main advantage of MCGaze is that the tasks of clue localization for head, face, and eye can be solved jointly with gaze estimation in a one-step way, with joint optimization to seek optimal performance. During this, spatial-temporal context exchange happens among the clues on the head, face, and eye. Accordingly, the final gazes obtained by fusing features from the various queries can be aware of global clues from heads and faces, and local clues from eyes simultaneously, which essentially boosts performance. Meanwhile, the one-step running way also ensures high running efficiency. Experiments on the challenging Gaze360 dataset verify the superiority of our proposition. The source code will be released at <https://github.com/zgchen33/MCGaze>.

gaze estimation, video, head-face-eye spatial-temporal context, query

§ INTRODUCTION

Video gaze estimation is a recently emerged and challenging research task that suffers from critical issues such as variations in pose, human attributes, and illumination. It can be widely used to understand human cognitive patterns <cit.>, human social interaction <cit.>, and human-machine interaction <cit.>.
Compared with estimating gaze in individual images <cit.>, richer spatial-temporal context over head, face, and eye is essentially involved in video setting, which is beneficial for better characterizing gaze patterns.Although the paid efforts <cit.>, we argue that they still have not well captured the spatial-temporal descriptive clues as below: ∙ First of all, the interaction among head, face, and eye features has not been established, for distilling the underly video gaze characterization context;∙ Secondly, tasks of gaze estimation, and clue localization of head, face, and eye cannot be jointly solved with joint optimization to seek optimal performance; ∙ Last but not least, multi-clue spatial and continuous temporal features cannot be extracted holistically within a unified framework.To address these, we propose MCGaze, a video gaze estimation method that facilitates performance by capturing the head-face-eye spatial-temporal interaction context in an end-to-end query-based learning way. Meanwhile, the tasks of gaze estimation and clue localization of the head, face, and eye can be solved integrally in a one-step running way.Particularly, our main idea is shown in Fig. <ref>. Towards a gaze video clip, its per-frame features will be first extracted to form a video feature tensor. Then, the learnable queries of spatial-temporal form on the head, face, and eye will be set up to take the roles of localizing clues on the head, face, and eye for gaze characterization jointly. At each time point, the frame-wise feature interaction among head, face, and eye queries is executed via spatial interaction for information exchange between the global descriptive clues on head and face, and the local fine clues on eyes. Accordingly, each type of query will be of strong local-global gaze characterization ability. More specifically, head and face clues can reveal human pose, human attributes, and illumination information. And, eye clues essentially characterize the gaze's fine details. On the other hand, within each query, feature interaction between neighboring frames via temporal interaction is also performed to capture the motion information on the head, face, and eye to leverage sequential gaze estimation and facilitate temporal consistency. Finally, features from the head, face, and eye will be jointly used for gaze estimation. It is worth noting that, the procedures of gaze estimation, and clue localization of head, face, and eye are conducted in a one-step running way, with joint optimization to seek the optimal performance. That is to say, unlike previous works, we do not need to use a face detector <cit.> or eye detector <cit.> to preprocess the input head images. This manner can help ensure high running efficiency due to feature sharing among the tasks, which practical applications prefer. The experiments on the challenging Gaze360 dataset <cit.> verify the superiority of our proposition for video gaze estimation.Overall, our main contributions can be summarized as:∙ A novel end-to-end video gaze estimation method is proposed, via capturing head-face-eye spatial-temporal interaction context to facilitate performance;∙ Video gaze estimation, and clue localization of head, face, and eye can be solved integrally in a one-step running way, with joint optimization to seek optimal performance.§ APPROACH§.§ Overall Method In this section, we present our proposed method, MCGaze. 
Taking a video clip as input, it can automatically capture head, face, and eye clues for hierarchical spatial-temporal gaze representation, and predict the gaze direction of each frame in the video.Our method employs spatial-temporal interactions among head-face-eye clues throughout the video clip.It draws inspiration from query-based methods <cit.> and local-global spatial-temporal modeling approaches <cit.>.The architecture is illustrated in Fig. <ref>.Specifically, our method applies a backbone network to extract features from a video clip I∈ℝ^T× 3 × H × W. Here, T represents the number of frames, and 3 × H × W represents the input frame as an RGB image of size H × W. Then, the backbone network generates F ∈ℝ^T× C × H^'× W^', where C represents the number of channels and H^'× W^' denotes the size of the feature maps. Next, the extracted features are fed into our query-based architecture, which iterates N times and consists of two main components: the spatial-temporal query interaction and the task-specific heads (i.e., clue localization head and gaze fusion head). In each iteration, the queries for the head, face, and eye clue are updated, and the clue localization head predicts the clue region of the head, face, and eye. On the other hand, the gaze fusion head determines the direction of the human gaze from the head, face, and eye clue. The gaze predicted by the last iteration is used as the output of the model. §.§ Head-face-eye Queries Our approach applies multi-clue queries q_clue∈ℝ^T × C, clue ∈{head, face, eye} to capture the subject's corresponding clue regions and gaze representations from it in the video. Each query comprises T embeddings with a feature dimension of C. Each embedding generally focuses on the feature representation of the corresponding frame. Additionally, corresponding to each query, there exist proposal boxes p_clue∈ℝ^T × 4 that indicate the locations of the subject's head, face, and eye in the feature map. The parameters of both q_clue and p_clue are learnable. For each complete forward propagation, they will be updated in an iterative way to achieve effective extraction of target clues and gaze representations from it.§.§ Spatial-temporal Queries Interaction (STQI)Local-global spatial-temporal modeling is very important for the video task<cit.>, here we design specific queries for the three key clues for our task. Inspired by the transformer structure, we build strong interaction among spatial and temporal dimensions to facilitate gaze representation. Specifically, we use spatial-temporal queries interaction module <cit.> to better localize the hierarchical clues and build effective information exchange for robust gaze representations. In this module, a spatial self-attention layer is used to enable spatial interaction among head, face, and eye query within the same frame:{q_head^t,q_face^t,q_eye^t} = MHSA({q_head^t,q_face^t,q_eye^t}), where t ∈[0,T-1], and the abbreviation MHSA stands for multi-head self-attention <cit.>. Actually, these three types of queries with MHSA can essentially promote the information exchange among the head and face of global clues and the eye of local clues within the spatial domain. This leads the queries to be of both global and local spatial perspectives for gaze characterization. Moreover, we apply a self-attention layer to enable temporal interaction for each query along the temporal dimension:{q_clue^t}_t=1^T = MHSA({q_clue^t}_t=1^T),where clue ∈{ head, face, eye }. 
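To illustrate the two interaction steps above, the following is a schematic sketch of the spatial and temporal MHSA over the head, face, and eye queries; the module name STQILayer, the use of PyTorch's nn.MultiheadAttention, and the embedding width of 256 are our assumptions, and the dynamic-convolution query update described below is omitted.

```python
import torch
import torch.nn as nn

class STQILayer(nn.Module):
    """Spatial-temporal interaction over head/face/eye queries.

    Each query tensor has shape (T, C): one C-dim embedding per frame.
    Spatial MHSA mixes the three clue queries within every frame;
    temporal MHSA then mixes the T embeddings of each clue.
    """

    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, q_head, q_face, q_eye):
        # (T, 3, C): within each frame, the three clue tokens attend to each other.
        spatial_tokens = torch.stack([q_head, q_face, q_eye], dim=1)
        spatial_tokens, _ = self.spatial_attn(spatial_tokens, spatial_tokens, spatial_tokens)

        # (3, T, C): within each clue, the T frame tokens attend to each other.
        temporal_tokens = spatial_tokens.permute(1, 0, 2)
        temporal_tokens, _ = self.temporal_attn(temporal_tokens, temporal_tokens, temporal_tokens)

        q_head, q_face, q_eye = temporal_tokens.unbind(dim=0)
        return q_head, q_face, q_eye


# Example with T = 7 frames, as used in the Gaze360 experiments.
T, C = 7, 256
layer = STQILayer(dim=C)
q_head, q_face, q_eye = layer(torch.randn(T, C), torch.randn(T, C), torch.randn(T, C))
```

Stacking the clue tokens per frame for the spatial step and the frame tokens per clue for the temporal step reproduces the factorized attention pattern written out above.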
Applying temporal interaction on each query promotes sequential modeling of distinctive features, such as pose variation and eye movement, and facilitates temporal consistency, leading to robust clue localization and gaze estimation.To let the query acquire highly relevant features from input video features, we use dynamic convolution <cit.> acting on an RoI feature to update the query's features within each iteration. Specifically, the RoI feature is obtained by RoI align <cit.> based on the proposal boxes p_clue. The output feature from dynamic convolution will be used to update query features. The updated query feature q_clue^* will be used to perform clue localization and gaze estimation by task-specific heads. §.§ Task-specific Heads We design two task-specific heads (i.e., clue localization and gaze fusion head) for clue localization and gaze estimation. §.§.§ Clue localization headGiven an updated query, we can obtain the corresponding clue region that the query focuses on by the clue localization head. For each query q^*_clue, we use a multilayer Perception (MLP) followed by a sigmoid normalization to indicate the clue region existence (e.g., the face or eye cannot be detected when the subject's head is turned back to the camera) s_clue∈ℝ^T for the different clue ∈{head, face, eye}:s_clue= Sigmoid(MLP^s_clue(q^*_clue)). Similarly, we employ three separate multilayer perceptions to accomplish clue region localization for clue ∈{head, face, eye}: b_clue= MLP^b_clue(q^*_clue),where b_clue indicates the clue region localization and will be used to update the proposal boxes p_clue.§.§.§ Gaze fusion headFor the updated query features of the three clues q^*_head, q^*_face and q^*_eye, we use three different MLPs to regress the gaze vectors g_clue from them asg_clue=MLP^g_clue(q^*_clue), where g_clue∈ℝ^3 and clue ∈{ head, face, eye }. In fact, the reliability of the gaze prediction obtained from different clues may vary in different situations. For instance, when the head is turned backward, the eyes are not visible, resulting in a low reliability of gaze prediction using the eye clue. Therefore, We use three MLPs to predict the confidence level c_clue of the three predicted gazes asc_clue=MLP^c_clue(q^*_clue).Then we multiply the gaze vectors from different queries by their corresponding confidence and concatenate the resulting products. The final gaze direction g_fusion∈ℝ^T× 3 after fusion is output by a fully connected (FC) layer asg_fusion=FC([g_head× c_head,g_face× c_face,g_eye× c_eye]).§.§ Model Training We design several loss functions to optimize the whole network. In order to have the clues anchor at the target level (i.e., head, face, and eye), we supervise the clue region existence s_clue and bounding box location b_clue usingℒ_cls and ℒ_box respectively, where ℒ_cls indicates the focal loss <cit.>. ℒ_box indicates the combination of L1 loss and GIoU loss <cit.> for bounding box regression. Specifically, the lossis formulated asℒ_anchor = ∑_t=0^T-1∑_clue(ℒ_box(b^t_clue,b̂^t_clue)+ℒ_cls(s^t_clue,ŝ^t_clue)),where clue ∈{head, face, eye}. Besides, we use arccos loss to supervise gaze estimation, whose expression isℒ_arccos=arccosg ·ĝ/‖ g ‖‖ĝ‖,where ĝ denotes the output predicted gaze and g denotes the ground-truth gaze. Besides the final output g_fusion from the gaze fusion head, we also supervise the gaze prediction result within each individual clue to make them close to the real gaze direction. 
Specifically, the loss of gaze estimation is formulated as ℒ_gaze=∑_t=0^T-1(ℒ_arccos(g_fusion^t,ĝ^t)+∑_clueℒ_arccos(g^t_clue,ĝ^t)),where clue ∈{head, face, eye}. In addition, for better temporal modeling and to ensure the temporal stability of the output gaze, we add the temporal regularization term 𝒥_temp with the expression:𝒥_temp=∑_t=1^T-2| 2×ĝ_fusion^t-ĝ_fusion^t+1-ĝ_fusion^t-1|,where ĝ^t denotes the t-th frame of the output gaze. our overall loss function is designed asℒ_total= ℒ_anchor+λ_1 ℒ_gaze+λ_2 𝒥_temp,whereλ_1,λ_2 represent the hyperparameters in the loss function. In our experiments, they are set to 6 and 1 respectively. § EXPERIMENTS §.§ DatasetTo verify the superiority and effectiveness of MCGaze, it is tested on the challenging video gaze estimation dataset Gaze360 <cit.>. It involves 238 subjects under indoor and outdoor environments with labeled 3D gaze with variational head poses and imaging distances.Recent researches <cit.> conduct evaluation on the face-detectable subset of the Gaze360 dataset. The reason is that some samples within Gaze360 only capture the back side of the subject whose eyes are not visible and thus unsuitable for appearance-based methods. Following the main evaluation procedure of the recent works <cit.>, we train and evaluate our model on the face-detectable sub-dataset of Gaze360 which we refer to as the detectable face setting. Besides, we also conduct experiments on the entire Gaze360 to compare with some earlier works <cit.> that focused on all 360 degrees which we refer to as the 360^∘ setting. Evaluation metirc. Following most of works <cit.>, angular error (^∘) is used to measure the accuracy of 3D gaze estimation, with the following expression: ℒ_angular=g ·ĝ/‖ g ‖‖ĝ‖,where ĝ∈ℝ^3 is the predicted gaze vector; g∈ℝ^3 is the ground-truth gaze direction.§.§ Implementation detailsOn the detectable face setting, we use ResNet-50-FPN <cit.> backbone.The ResNet-50 is pre-trained on ImageNet-1K <cit.> and the iteration time N is set to 4. The model is trained using AdamW <cit.> optimizer with a batch size of 8. The initial learning rate is set to 1e-4 for the backbone and 1e-3 for the other components. During training, the input video clip length is set to 7, and before being fed into the network, frames are resized to 448 × 448 following L2CS-Net baseline <cit.>. We train the model for 13,000 iterations, with the learning rate decreasing by a factor of 0.1 at iteration 12,000. During testing, we set the input video clip length to 7 with a stride of 4 and employ temporal smoothing. On the 360^∘ setting, the experimental details are similar to those on the detectable face setting. The differences are that the frames are resized to 224 × 224 following Gaze360 baseline <cit.> for a fair comparison, and the batch size is set to 32. All experiments are conducted on a single RTX 3090 and no Test-Time Augmentation is used in any of our experiments.§.§ Comparison with state-of-the-art methodsThe comparison with the state-of-the-art methods on the detectable face setting is shown in Table <ref>. We use the same training and testing set as the listed methods for a fair comparison. Essentially, our proposition outperforms the other methods in all the test cases, thus verifying its superiority.Additionally, the comparison on the 360^∘ setting is shown in Table <ref>. 
Particularly, all models and methods in the table are trained using the entire Gaze360 dataset.In this more challenging setting, our approach still outperforms the state-of-the-art counterparts consistently. This indeed demonstrates the effectiveness and generality of our proposition. Moreover, our model runs efficiently, achieving a processing speed of 70 FPS (inferencing within a video clip length of 7) on the Gaze360 dataset with a single RTX 3090. Our model has 83.09 M parameters and uses 28.01 GFLOPs. §.§ Ablation StudyHead-face-eye queries. The effectiveness of concerning joint clues from the head, face, and eye in query form is verified in Table <ref>. It can be observed that when all the 3 queries are used, the optimal performance can be acquired in all the test cases. This essentially reveals that, towards gaze estimation, the global clues from the head and face are complementary to local clues from the eye for leveraging performance. Besides, we notice that the feature degradation issue happens when there is only one head query. Specifically, for the Gaze360 benchmark, the input image is a human head image, so the network may learn more about the fixed head position and thus does not learn the gaze representation well. However, for the multi-clue case, the head query can provide useful global information as complementary and thus facilitate performance. Overall, adding more clues can facilitate gaze representation and boost performance consistently.Spatial and temporal interaction in STQI. The effectiveness of spatial and temporal interaction is also demonstrated in Table <ref>. We can see that both spatial interaction and temporal interaction can facilitate performance consistently. When they are conducted jointly, the performance can be further enhanced. These indeed verify their effectiveness and the importance of head-face-eye spatial-temporal interaction context for video gaze characterization.Clue localization head in task-specific heads. The effectiveness of clue localization head is shown in Table <ref>. In MCGaze, we use this component to help query locate different clues, thereby boosting the performance of gaze estimation. §.§ Qualitative analysisAs shown in the left side of Fig. <ref>, MCGaze can produce excellent results under various environments, lighting, and gender. Some intuitive failure cases of our method are also given in the right part of Fig. <ref>. Specifically, our proposition cannot work well under some conditions:(a) Low imaging quality: The limited contextual information available from the images hinders the accuracy of the predicted gaze direction. (b) Invisible eyes: The eye clues fail to capture local information from the eye region, leading to suboptimal predicted results. (c) Gaze and head directions in highly conflict: The predicted gaze directions may be influenced by head directions.§ CONCLUSIONSIn this letter, we propose MCGaze to capture head-face-eye spatial-temporal interaction context well to facilitate video gaze characterization. In an end-to-end learning way, our proposition can be trained to solve the tasks of clue localization and gaze estimation with joint optimization. It achieves state-of-the-art performance on the challenging Gaze360 dataset with high running efficiency.However, our approach is tailored for individual subjects, and this presents a limitation. In the future, we will enhance this method to encompass multi-person scenarios and exploit richer spatial-temporal descriptive clues for video gaze estimation. IEEEtran
http://arxiv.org/abs/2310.18131v3
{ "authors": [ "Yiran Guan", "Zhuoguang Chen", "Wenzheng Zeng", "Zhiguo Cao", "Yang Xiao" ], "categories": [ "cs.CV" ], "primary_category": "cs.CV", "published": "20231027132338", "title": "End-to-end Video Gaze Estimation via Capturing Head-face-eye Spatial-temporal Interaction Context" }
Priya Hasan
Maulana Azad National Urdu University, Hyderabad, India
[email protected]

Gaps in the Main-Sequence of Star Cluster Hertzsprung Russell Diagrams

31 May 2023

The presence of gaps, or regions with small numbers of stars, in the main sequence of the Hertzsprung Russell Diagram (HRD) of star clusters has been reported in the literature. This is interesting and significant as it could be related to star formation and/or rapid evolution or instabilities. In this paper, using Gaia DR3 photometry and confirmed membership data, we explore the HRD of nine open clusters with reported gaps, identify the gaps, and assess their importance and spectral types.

§ INTRODUCTION

The Hertzsprung Russell Diagram (HRD) of star clusters is the holy grail for understanding stellar evolution and populations. It is a snapshot of stellar lives as a plot of color (temperature) versus magnitude (luminosity). The precise position of a star can be used to find various parameters of the star, including its size, metallicity, and evolutionary state. The HRD traces stars at various phases of evolution along the main sequence and as they turn off to the giant branch and beyond. The HRD has been used to find the distances, ages, and reddening of star clusters. The European Space Agency Gaia mission has provided unprecedented sub-milliarcsecond parallax precision for over a billion stars <cit.> that can be utilized to study the precise locations of individual stars on the HRD as well as populations of stars. The accurate, all-sky data produce an HRD that shows previously unknown features. Gaps, or regions of low density of stars, in the HRD have been reported by various authors <cit.> and could be important milestones of stellar evolution.

In this paper, we present a detailed study of main-sequence gaps in the HRD of a sample of nine clusters with ages ranging from log t = 7.09-9.63 and at distances of 889-2773 pc using Gaia DR3 data. We use membership data and parameters from <cit.>. We identify the gaps, assess their statistical significance using the χ^2 test, and identify their spectral types.

§ REPORTED GAPS IN LITERATURE

Main-sequence gaps in the HRD have been reported in the literature <cit.> and are listed in Table <ref>. A gap was also found by <cit.> in Gaia DR2 data at G ≈ 10. The gap is very narrow (≈ 0.05 mag) and is near the region in the HRD where M dwarf stars transition from partially to fully convective, near spectral type M3.0V.

§ CLUSTER SAMPLE

We selected a sample of nine clusters with confirmed gaps from the literature. These are NGC 2169, NGC 2360, NGC 1778, NGC 6939, NGC 3680, NGC 2682, Trumpler 1, NGC 2420 and NGC 6134. We used the cluster parameters <cit.> shown in Table <ref> to convert magnitudes to the absolute scale. The table shows the coordinates of these clusters (RA and Dec), the angular size r50, which is the radius that contains half the number of members from the same reference, the logarithm of age log t, the extinction A_V, the distance modulus DM, and the distance to the cluster in parsecs.

§ ANALYSIS

We use membership data from <cit.> for our sample of nine clusters. As described in <cit.>, we find the absolute magnitude and color:

G = G - μ - 0.89*A_V

(BP-RP)_0 = (BP-RP) - (0.89/1.85)*A_V.

We plot the color magnitude diagrams (Fig. <ref>) and the luminosity functions (Fig. <ref>) to identify possible gaps in the HRD, listed in Table <ref>. The likelihood that an observed gap represents a chance variation can be estimated as follows. For the identified gaps, we calculate χ^2 = (N - N_o)^2/N, where N is the expected number of stars and N_o is the observed number of stars, as described in <cit.>. We take the expected number as the average of the numbers before and after the gap. χ^2 is related to p, the probability that the gap is a chance event. For χ^2 = 4.0 with one degree of freedom, the p value is 0.05. This means that the probability of the gap being significant is 1 - 0.05 = 0.95, that is, 95%. Thus a smaller p value implies a higher chance of the gap being significant. Table <ref> lists the gaps found in our sample with their spectral types and significance. We notice that the gaps we found are of similar spectral types to those described in Table <ref>.

§ CONCLUSIONS

In this paper, we use Gaia DR3 data and the membership data of <cit.> to study gaps in the main sequence of the HRD of star clusters. We use the χ^2 test to find the significance of the gaps. We compare the spectral types of earlier detections and find that they agree with our present results.

Gaps were reported by <cit.> in Gaia DR2 data for M dwarfs. In our sample, the membership data used are available only down to an apparent G magnitude of 18 and do not include M dwarfs; we only reach spectral types down to about G, and therefore we do not find that gap in our data. A more detailed study of the HRD of star clusters is necessary to characterise these gaps and study them in more detail.

This work has made use of data from the European Space Agency (ESA) mission Gaia (<https://www.cosmos.esa.int/gaia>), processed by the Gaia Data Processing and Analysis Consortium (DPAC, <https://www.cosmos.esa.int/web/gaia/dpac/consortium>). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the Gaia Multilateral Agreement.

Priya Hasan (ORCID: 0000-0002-8156-6940). The authors declare no conflict of interest.
http://arxiv.org/abs/2310.17725v1
{ "authors": [ "Priya Hasan" ], "categories": [ "astro-ph.SR", "astro-ph.GA" ], "primary_category": "astro-ph.SR", "published": "20231026183643", "title": "Gaps in the Main-Sequence of Star Cluster Hertzsprung Russell Diagrams" }
APS/123-QEDDepartment of Physics and Astronomy, University of British Columbia, 6224 Agricultural Road, Vancouver, BC, V6T 1Z1, CanadaIn this article, we propose a practical way to realize topological surface Dirac fermions with tunable attractive interaction between them. The approach involves coating the surface of a topological insulator with a thin film metal and utilizing the strong-electron phonon coupling in the metal to induce interaction between the surface fermions. We found that for a given TI and thin film, the attractive interaction between the surface fermions can be maximally enhanced when the Dirac point of the TI surface resonates with one of the quasi-2D quantum-well bands of the thin film. This effect can be considered to be an example of ’quantum-well resonance’. We also demonstrate that the superconductivity of the resonating surface fermions can be further enhanced by choosing a strongly interacting thin film metal or by tuning the spin-orbit coupling of the TI. This TI-thin film hybrid configuration holds promise for applications in Majorana-based quantum computations and for the study of quantum critical physics of strongly attractively interacting surface topological matter with emergent supersymmetry.Realizing attractive interacting topological surface fermions: A resonating TI- thin film hybrid platform Saran Vijayan and Fei Zhou January 14, 2024 =========================================================================================================§ INTRODUCTIONTopological Insulators(TI)<cit.> belong to the class of symmetry-protected topological phases, where the gapless boundary states are protected by Time-Reversal Symmetry(TRS). One interesting feature of these surface states is that their low-energy excitations can resemble a single flavor of 2-component massless Dirac fermions(N_f = 1/2). This is unique because it is impossible to realize an odd number of flavors of 2-component Dirac fermions in a bulk lattice because of the fermion doubling problem intrinsic to lattice models<cit.>. Therefore topological surface provides an interesting platform to study various interactions involving a single flavor of 2-component Dirac fermions, provided the interactions do not break the time-reversal symmetry.Of particular interest is when there is an effective attractive interaction between the surface fermions. For a finite chemical potential (i.e. when the Fermi level is above or below the Dirac point), the U(1) symmetry breaking leading to the superconducting phase can happen for arbitrarily weak attractive interaction due to the Cooper instability at the surface. On the other hand at the zero chemical potential (Fermi level aligned with the Dirac point), the interaction strength must be greater than a critical value for the phase transition to occur. In both these cases, the resulting superconducting phase can be of non-trivial topological character<cit.>. Specifically, it implies that the vortex core of the superconductor can host Majorana zero modes<cit.>. They are considered to be an ideal candidate for fault-tolerant quantum computing because of their non-Abelian statistics.Another interesting feature of the attractively interacting surface fermions is that surface dynamics have an emergent Lorentz symmetry when the chemical potential is zero. 
It has been demonstrated that the effective field theory of the surface states further exhibits emergent supersymmetry (SUSY) when the coupling constant of the attractive interactions is tuned to be quantum critical <cit.>. Supersymmetry is the symmetry between bosons and fermions and had been speculated to exist as a fundamental symmetry in elementary particle physics. Emergent supersymmetry in lattice models is difficult to realize, at least in d>1 spatial dimensions, because fermions typically have more degrees of freedom than bosons in lattices, a consequence of the fermion doubling problem. But at a quantum critical point of topological surfaces, an emergent SUSYexists between the charge 2e bosons that naturally emerge as quasi-particles and the 2-component Dirac fermions in the semi-metallic phase, both of which can be strongly self-interacting and mutually interacting. Therefore, the topological surface provides an ideal platform to study the dynamics of supersymmetric quantum matter.However, realizing a topological surface with net attractive interactions between them is not straightforward and can be challenging. One reason is the unscreened nature of repulsive Coulomb interactions in an insulator. In addition, many topological insulator materials do not have strong electron-phonon interactions. In this article, we propose coating the 3D TI surface with a metallic thin film as a practical way to realize a ground state of interacting surface fermions with net attractive interaction between them. A thin film is characterized by the quasi-2-dimensional quantum-well bands due to the quantum confinement of the electronic states in the third dimension. Due to the screened nature of Coulomb repulsion, the phonon-mediated attractive interaction can be the dominant form of interaction between electrons at zero temperature. On depositing the thin film to the TI surface, the 2D surface Dirac fermions and the quasi-2D quantum-well fermions start hybridizing. These hybrid fermions are a quantum superposition of the quantum-well states and the TI surface states. Hence the hybrid fermions not only acquire a helical spin-texture from the surface side but will also experience a net attractive interaction due to coupling with the phonons in the thin film. In a way, hybridization causes the surface Dirac cone to be exported to the thin film which results in the helical Dirac fermions experiencing a phonon-mediated attractive interaction between them. Alternatively, one can show that the hybridization leads to variable or tunable attractive interactions among topological surface Dirac fermions. We have observed that this attractive interaction between the helical fermions is maximally enhanced when the Dirac point of the TI surface resonates with one of the quantum-well states of the thin film. While at resonance, there is no clear distinction between the TI surface and the thin film states as the electronic states are strongly hybridized, we do show that in the wide range of parameter space, the low energy physics effectively becomes that of strongly interacting surface Dirac fermions.We study the superconductivity of these resonating hybrid states at different thickness regimes. Consider the ultra-thin limit of the film, when only a single quantum-well (QW) band crosses the Fermi level (we shall call this the N = 1 limit, where N is the number of QW bands crossing the Fermi level). Then we effectively have a four-band model of the interacting helical hybrid states. 
Following the bulk-boundary relations(BBR) of interactions obtained before<cit.>, we find that effective phonon-mediated interaction scales as 1/D, D being the thickness of the film and hence the interaction strength is at its strongest in the N=1 ultrathin limit. We show that for a wide range of chemical potentials, it is possible to construct an effective field theory of attractively interacting 2-component Dirac fermions. We then studied possible ways of enhancing the superconducting gap by tuning the bulk coupling constant of the thin film and the Dirac velocity of the surface fermions. When the thin film thickness is increased further, in addition to the Fermi surfaces formed by the resonating hybrid bands, there also exists Fermi surfaces formed by the QW bands that were off-resonance. Therefore the superconducting gap in this limit is formed not just due to attractive interaction between the surface fermions but also because of the scattering of the singlet pair of electrons from these background off-resonance QW Fermi surfaces. In the very thick limit (large-N limit), we explicitly show that the superconductivity on these resonating hybrid bands is dominated by the scattering of the singlet pair of electrons from the off-resonance Fermi surfaces. However, the interaction between the surface fermions can further enhance the surface superconductivity.And when the interactions are sufficiently strong, the enhancement can be very substantial.It should be noted here that if one's prime focus is mainly to realize a topological superconducting phase, then it is not necessary to have attractive interaction between the surface fermions<cit.>. Superconductivity can be induced on the surface by the proximity effect, implemented by depositing a bulk s-wave superconductor on the TI surface. The interface between the TI and the s-wave superconductor had been shown to be in the topological superconducting phase even though the surface electrons are non-interacting. As mentioned before, the main objective of our work is to realize a platform of strongly interacting surface fermions. The attractive interactions between surface fermions can lead to emergent SUSY at its QCP, a phenomenon that can potentially have a lot of impacts on the fundamental understanding of the building blocks of nature.However, the TI-thin film hybrid has richer physics over the conventional proximity structures even if our objective is only to realize a topological superconducting phase. Due to the strong single-particle hybridization, there is a finite probability of finding the surface fermions on the thin film side, sometimes called the 'topological proximity effect' <cit.>. Thus in this structure, the topological superconducting phase can proliferate across the interface, and can even be observed on the thin film side and not just at the interface, making it easy to detect in the experiments <cit.>. We like to note here that the effect of tunneling of the TI surface fermions on the superconductivity in the thin film has been extensively studied in refs.<cit.>. Ref.<cit.> studied the superconductivity in the monolayer thin film-TI hybrid as a function of tunneling strength and the chemical potential.They found a suppression of the superconducting gap in the thin film when the Fermi momentum of the thin film and the TI surface matched and non-hybridized surface fermions were integrated out. Ref.<cit.> found an enhancement in the superconducting order when the Fermi level crosses the bottom of the double-well hybrid bands. 
This result is encouraging in the context of thin film superconductivity. However, the Lifshitz transition leading to the enhancement results in two additional fermion surfaces and does not affect the topological aspect of superconducting pairing. The main focus of this article on the other hand is to understand Dirac fermions in the topological surface and their interactions and pairing dynamics mediated by coated thin films. When the superconductivity of non-interacting surface fermions (before the tunneling is turned on) is concerned, we find in this work that the superconductivity on the surface fermions can be induced and greatly enhanced if surface fermions are in resonance with electrons in thin films. Although in the limit of resonance physically it is not possible to entirely isolate the surface fermions from the thin film electrons, the effective field-theory description in the most interesting limit is simply of the form of interacting Dirac fermions but with various substantially renormalized parameters. These renormalization effects especially the fermion-field renormalization are one of the main focuses of our studies and discussions below as they directly set the strength of interactions mediated by the thin films. These renormalization effects can either lead to surface superconductivity that otherwise won't exist because of the absence of direct pairing dynamics or further enhance the well-known Fu-Kane proximity effects of non-interacting surface fermions<cit.>. The induced surface fermion interactions are also shown to follow explicitly the generic scaling law indicated in the general bulk-boundary interaction relation obtained in a previous article<cit.>.The article is organized as follows: In the section <ref>, we discuss the single particle tunneling physics at the interface. We write down the single-particle Hamiltonian for the hybrid fermions on the helicity basis. In section <ref>, starting from the fundamental electron-phonon coupling Hamiltonian of the thin film, we derive a general short-ranged pairing Hamiltonian that explains the interactions of the hybrid fermions with one another and with the thin film electrons belonging to the off-resonance bands. Here we find that the hybrid fermions acquire an effective attractive interaction between them and the interaction strength is renormalized by a Z-factor. The Z factor is essentially a measure of the probability amplitude of the hybrid fermions to be in the thin film side of the interface. In section <ref>, we study the evolution of this Z-factor as a function of the dimensionless detuning parameter δ̃ and find that the attractive interaction between the surface fermions is enhanced at the quantum-well resonance (δ̃ = 0). Section <ref> is dedicated to the mean-field approximation. Here we derive the superconducting gap equation under the assumption that the Debye frequency ω_D ≪μ, where μ is the chemical potential. Essentially, we assume that only electronic states near the Fermi level are interacting.In section <ref>, we consider the limit when the surface states hybridize with the N=1 QW band of the thin film. We construct an effective theory followed by exploring various ways to enhance the superconducting gap in this limit. Section <ref> discusses the large-N limit of the theory. Here we make connections to Fu-Kane's model in the perturbative limit of tunneling. 
In Section <ref>, we study the evolution of the superconductivity on the resonating hybrid states as a function of thickness (parametrized by the band index N).

§ NON-INTERACTING THEORY

§.§.§ Model Hamiltonian

We start by defining a minimal theoretical model to understand the essential tunneling physics at the thin film-topological insulator interface. Let the thin film-TI interface be at z = 0. The topological insulator (TI) occupies the half-space z < 0. Consider a thin film of thickness d deposited over the TI surface, so that it occupies the space 0 < z < d. Let us first write down a simple model for the thin film electrons. In the xy-plane, we apply periodic boundary conditions. The electron confinement in the z-direction is usually modeled by an infinite well potential with its boundaries at z = 0 and z = d. However, this model does not permit tunneling of thin film electrons to the TI side, since the amplitude of the electron wavefunction vanishes at the interface. To allow for tunneling, a simple remedy is to impose open boundary conditions at the interface so that the amplitude is maximal there. As a result, the momentum in the z-direction is quantized as k_z = (n - 1/2)π / d with n = 1, 2, …, and the z-dependence of the electron wavefunction becomes ψ_n(z) = √(2/d)cos((n - 1/2)π z/ d). Thus, the Hamiltonian governing the dynamics of thin film electrons deposited over the TI has the form, ℋ^tf =∑_n∫d^2k/(2π)^2 c^†_k, n h^tf_k, n c_k, n where h^tf_k, n = ϵ^tf_k,nÎ= [ħ^2k^2/2m^* + (n - 1/2)^2π^2ħ^2/2m^*d^2]Î and c_k, n = [ c_k, n, ↑ c_k, n, ↓ ] is the two-component annihilation operator for an electron in the nth quantum-well state with in-plane momentum k = (k_x, k_y) in the thin film. Î is just an identity matrix to emphasize that h^tf is a 2×2 matrix in the spin-1/2 space. The effective Hamiltonian that describes the surface states of a topological insulator is, ℋ^surf = ∫d^2k/(2π)^2χ^†_k h^surf_kχ_k where h^surf_k = A_0(s_x k_y - s_y k_x) + E_0 and χ^† = [ χ^†_↑ χ^†_↓ ] is the creation operator of the surface electron. s_x, s_y are Pauli matrices in the spin-1/2 space, A_0 describes the strength of the spin-orbit coupling, and E_0 is the energy at the Dirac point. Due to the presence of spin-orbit coupling, the Hamiltonian does not have spin-rotation symmetry; rather, it is diagonal in the helicity basis. The energy eigenvalues in the helicity basis are given by, ϵ^surf_k,± = ± A_0 |k| + E_0. Assuming that the tunneling process is spin-independent, the simplest model that can describe the hybridization of the surface states of the TI with the quantum well states of the thin film is given by, ℋ^t = t∫ d^2r (χ^†(r) Ψ(r, z = 0) + h.c.), where χ^†(r) is the spinor field operator that creates a topological surface electron at in-plane position r = (x,y) and Ψ(r, z) is the spinor field operator for the thin film electrons with open boundary conditions. In k-space, the Hamiltonian takes the form, ℋ^t=t_d∑_n∫d^2k/(2π)^2χ^†_k c_k, n + h.c., with t_d= t/√(d). We find that the effective tunneling strength scales with the thin film thickness d as a result of quantum confinement in the z-direction. The surface area in the xy-plane, given by L_xL_y, is set to unity throughout this paper. In this article, we ignore the possibility of multi-band tunneling. This is a good approximation provided we work in the limit where the energy difference between successive thin film QW bands is greater than the bulk energy gap of the topological insulator.
In this limit, there is effectively only one QW band on which the tunneling effects due to TI surface electrons are significant. Since the chemical potential is aligned within the bulk energy gap of the TI, this QW band will be the topmost conduction band of the thin film. In other words, this band will be the one closest to the Dirac point of the TI surface in terms of energy. Tunneling effects on other QW bands are perturbative which is not the focus of our study in this section. Quantitatively, the effective model that we introduce in this article works well only when the following condition is satisfied, |ϵ^tf_k=0,N - ϵ^tf_k=0,N± 1| ≥ m where m is the mass gap of the topological insulator and N is the band index of the thin film QW band that is energetically closest to the Dirac point of the TI surface. Once this condition is satisfied, we can conveniently ignore the tunneling effects on all other n ≠ N bands. This setup is illustrated schematically in Fig.<ref>(b). Then the simplified effective Hamiltonian of the electronic states involved in tunneling becomes, ℋ^hbd =∫d^2k/(2π)^2[ c^†_k,Nh^tf_k, Nc_k, N + χ^†_k h^surf_kχ_k+ t_dc^†_k,Nχ_k + h.c] §.§.§ Hybridization at the interfaceTurning on t results in thin film electrons tunneling to the TI surface side and vice versa. Tunneling effects will be significant when t/| E| > 1, where E is the difference in energy between the initial and the final state. In this case, a perturbative treatment won't be sufficient. Here we shall understand the effects of tunneling in a non-perturbative manner. The full Hamiltonian is diagonalized exactly and the properties of the resulting hybrid electrons are studied. To diagonalize the Hamiltonian, we shall define a SU(2) space σ_i(i = x,y,z) to model the spatial profile of the electrons. In this space, the single-particle Hamiltonian in the momentum space becomes,h^hbd_k = ( [ h^tf_k, N t_d; t_dh^surf_k ])=I ⊗ M_k, N + σ_z ⊗δ_k, N+σ_x ⊗ I t_d in the basis Γ_k,N^† = ([ c^†_k, N, ↑ c^†_k, N, ↓ χ^†_k,↑ χ^†_k,↓ ]). Here δ_k, N and M_k, N are 2×2 matrices in the spin-1/2 space with the respective definitions:δ_k, N = ( h^tf_k, N - h^surf_k)/2 M_k, N = (h^tf_k, N + h^surf_k)/2Since we discuss the hybridization effect only on the Nth band in the thin film, the index N will be dropped from now on. But do note that the unitary matrix elements do depend on the value of N which in turn is connected to the thickness of the thin film. The Hamiltonian can be diagonalized in the σ space by performing a unitary transformation with the following unitary matrix,U_k = ([cosθ_k/2sinθ_k/2; -sinθ_k/2cosθ_k/2 ]), cosθ_k = δ_k/√(δ^2_k + t^2/d)The Hamiltonian after rotation attains the following diagonal form,ℋ^hbd = ∫d^2k/(2π)^2[d^†_k,th_k,t d_k,t + d^†_k,bh_k,b d_k,b]where d^†_k,t(b) = [ d^†_k,t(b),↑ d^†_k,t(b),↓ ] are two-component spinors in the spin basis. h_k,t(b) have the following definitions,h_k,t = M_k + √(δ^2_k + t^2_d) h_k,b = M_k - √(δ^2_k + t^2_d)here the index t and b represent the 'top' and 'bottom' bands respectively. This splitting is a result of the tunneling of single-particle states between the two sides of the hybrid. In addition, we find thath_t(b) are 2×2 matrices in the spin-1/2 space. h_k,t(b)has terms proportional to s_x k_y - s_y k_x implying that the hybrid states acquired an emergent spin-orbit coupling. Tunneling essentially resulted in hybridizing the thin film QW state and the TI surface state. 
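To make the hybridization concrete, the following minimal numerical sketch (with purely illustrative parameter values, not the material parameters used later in the paper) builds the decoupled dispersions ϵ^tf_k,N and ϵ^surf_k,± and the hybrid bands M_k ± √(δ^2_k + t^2_d) defined above:

import numpy as np

# Minimal sketch of the hybrid spectrum; all values below are illustrative placeholders.
hbar2_2m = 3.81          # eV·Å^2, hbar^2/(2m) for a free-electron mass
A0, E0   = 1.5, 0.0      # eV·Å surface spin-orbit strength; Dirac-point energy (eV)
d, N     = 10.0, 1       # film thickness (Å) and resonant QW band index
t        = 0.3           # eV·Å^(1/2), bare tunneling amplitude
t_d      = t / np.sqrt(d)        # effective tunneling strength t_d = t/sqrt(d)

k = np.linspace(-0.3, 0.3, 601)                                   # in-plane momentum (1/Å)
eps_tf = hbar2_2m * k**2 + hbar2_2m * ((N - 0.5) * np.pi / d)**2  # QW band dispersion
bands = {}
for lam in (+1, -1):                                              # two helicity branches
    eps_surf = lam * A0 * np.abs(k) + E0
    M, delta = 0.5 * (eps_tf + eps_surf), 0.5 * (eps_tf - eps_surf)
    bands[('top', lam)]    = M + np.sqrt(delta**2 + t_d**2)
    bands[('bottom', lam)] = M - np.sqrt(delta**2 + t_d**2)

Plotting the 'top' and 'bottom' bands against eps_tf and eps_surf reproduces the avoided crossing between the QW band and the two helicity branches of the Dirac cone illustrated in Fig.<ref>.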
Due to this induced helical spin structure of the hybrid states, it is better to write the full Hamiltonian on a helicity basis. We define the following set of creation operators, d^†_k,t(b) = a^†_k,t(b)Π^†_k, Π_k = 1/√(2)([ 1 1;e^iϕ_k -e^iϕ_k ])e^iϕ_k = k_y - i k_x/|k|where a^†_k,t(b) = [ a^†_k,t(b),+ a^†_k,t(b),- ]. Here (+) and (-) represent states with positive and negative helicity respectively. In this helicity basis, the single-particle Hamiltonianhas the following diagonal representation, ℋ^hbd = ∫d^2k/(2π)^2[a^†_k,t, +ϵ^hbd_k,t,+a_k,t, ++a^†_k,t, -ϵ^hbd_k,t,-a_k,t,- + a^†_k,b, +ϵ^hbd_k,+a_k,b, ++a^†_k,b, -ϵ^hbd_k,b,-a_k,b,-] ϵ^hbd_k,t,± = ϵ^tf_k,N + ϵ^surf_k,±/2 + √(( ϵ^tf_k,N - ϵ^surf_k,±/2)^2 + t^2_d)ϵ^hbd_k,b,± = ϵ^tf_k,N + ϵ^surf_k,±/2 - √(( ϵ^tf_k,N - ϵ^surf_k,±/2)^2 + t^2_d) Fig.<ref> shows an example of the energy spectrum before and after the tunneling.Given that the condition in Eqn.<ref> is satisfied, the tunneling effect on the thin QW bands of index n ≠ N is perturbative and hence they are ignored. Therefore, the single-particle Hamiltonian of all these n≠ N QW bands is unaffected by the tunneling and retains the form given in Eqn.<ref>. The electronic states in these bands will play huge role in the pairing physics especially in the large-N limit, as we shall see later in this article. § EFFECTIVE PAIRING HAMILTONIANWe examined the physics of single-particle tunneling at the thin film-TI hybrid in the preceding section. We discovered that non-perturbative tunneling results in the hybridization of the surface bands with the thin film's resonant quantum-well band. We now have a four-band model with single particle states that are a linear superposition of the thin film state and the surface state. As a result, it is possible that the hybrid states couple with the phonons in the thin film. The effective short-ranged pairing Hamiltonian that explains the interactions of the hybrid electrons with one another and with the thin film electrons belonging to the inner bands is derived in this section starting with the fundamental electron-phonon coupling Hamiltonian.§.§ Phonon-mediated interaction potential between thin film electrons§.§.§ 2D electron-phonon coupling HamiltonianSimilar to electrons, phonons in the thin film are also spatially confined within the range z=0 and z=d. As a result, the phonon spectrum also gets quantized resulting in the formation of 2D QW bands indexed by the integer l. We implement open boundary conditions at the thin film-TI interface. The phonon spectrum becomes, E_ph(q, l) = ħ c √(q^2 + ((l-1/2)π/d)^2), where l is an integer identifying the confined slab phonon mode. The electron-phonon coupling Hamiltonian in 3D has the form,ℋ_e-ph = G_fp∫ d^2rdz Ψ^†(r,z)∇.Φ(r,z)Ψ(r,z)where Ψ(r) is the 2-component electron field operatorand Φ_i(r)(i=x,y,z) is the phonon field operator in the thin film with the following definitions,Ψ(r,z)= ∑_n∫d^2k/(2π)^2 ψ_n(z) e^ik.r c_k, n Φ_i(r,z)= ∑_l∫d^2q/(2π)^2ϕ_l(z)e^iq.r/2√(E_ph(q, l))[b_q, l, i + b^†_-q, l, i]where ψ_n(z) = √(2/d)cos((n-1/2)π z/d) and ϕ_l(z) =√(2/d)cos((l-1/2)π z/d). Integrating out the z-degrees of freedom, we obtain the following effective 2D Hamiltonian, ℋ_e-ph = ∑_n, n', l g^l_n,n^'(d) ∫ d^2r Ψ^†_n'(r)∇⃗.Φ⃗_l(r)Ψ_n(r)Here Ψ_n is the effective 2D electron field operator for an electron with band index n. Similar definition holds for Φ⃗_l. 
The scattering matrix g^l_n,n^'(d) is given byg^l_n,n^'(d)=(-1)^n+n'-lG_fp/π√(2/d)[l - 1/2/( l - 1/2)^2 - (n - n')^2- l - 1/2/( l - 1/2)^2 - (n + n' - 1)^2]We find here that coupling with phonons can lead to interband scattering of electrons in the thin film.§.§.§ Pairing potential matrix It is well known that coupling with phonons leads to an effective electron-electron interaction that could be attractive under certain conditions. The minimal BCS pairing Hamiltonian that emerges out of the coupling term in Eqn.<ref>, has the following form, ℋ_I = ∑_n,n'ℋ_I(n, n')=- ∑_n,n'∫d^2k/(2π)^2d^2p/(2π)^2V^n,n'_k,pc^†_p, n s_y c^† T_-p, n c^T_-k,n's_y c_k, n'where the pairing potential V_n,n' has the form,V^n,n'_k,p = ∑_l=1^l_max |g^l_n,n'(d)|^2,- ω_D < ξ_k,n,ξ_p,n' < ω_D 0, else Here ω_D is the Debye frequency of the thin film. ξ_k,n is the single-particle energy of the thin film electrons measured from the chemical potential. The electron-phonon coupling matrix g_n,n' is summed over all the slab phonon modes up to l_max. It is the maximum value that a phonon mode could have in the thin film at a given thickness 'd'. To find its value, recall that Debye frequency sets the UV cut-off for the energy of lattice vibrations. Hence l_max can be calculated by taking the integer part of the expression d (k_D/π), where k_D is the Debye momentum. A comprehensive study of the thin film superconductivity with attractive interaction mediated by confined phonons was conducted in ref.<cit.>. An important consequence of the dimensional reduction applied in the context of interactions<cit.> is that the effective 2D interaction potentialacquires a scaling dependence on the thin film thickness as, V^n,n'_k,p∝1/dThus the attractive interaction increases with reducing thickness. This implies that the attractive interaction is maximum in the ultrathin(N=1) limit of the thin film. We shall use this scaling relation in the later part of this paper in order to enhance the attractive interaction between surface fermions. §.§ The general interaction Hamiltonian of the thin film-TI hybrid When the tunneling is turned on, the thin film band which is close to the Dirac point of the TI surface is hybridized. Let N be the index of the band that is hybridized. As mentioned before, we consider only the limit when the N±1 bands are separated from the Nth band by a magnitude of at least the order of bulk energy gap of the TI (See Eqn.<ref>). So, the effects of hybridization on all these n ≠ N bands are ignored. Now coming back to the Nth band, hybridization with the surface Dirac cone implies that the electronic states in that QW band are no longer diagonal in the thin film basis. The hybrid states are in a linear superposition of the thin film and the TI surface states. The emergent excitations of this hybrid system are the statesd^†_k,t(b)|0⟩ in the spin basis. It is even easier to study the interaction if we could rotate the states to the helicity basis since the hybrid states are diagonal in the helicity basis. So we project the interaction Hamiltonian ℋ_I of the resonant band indexed by N into the basis spanned by a_k,t(b),± states (defined in Eqn.<ref>). After the projection, the full Hamiltonian ℋ_I can be divided into essentially three terms. The first term is the Hamiltonian describing the attractive interaction between the helical hybrid fermions. Secondly, we have the term describing attractive interaction between the hybridized fermions and the trivial fermions of all the n≠ N thin film transverse bands. 
Lastly, we have the interaction Hamiltonian for the fermions in the thin film unaffected by hybridization. In doing this projection, terms that describe interband pairing between the helical fermions have been ignored. This is a good approximation in the BCS limit. We shall write down the three terms in the Hamiltonian explicitly below,ℋ_I = ℋ^hbd-hbd_I + ℋ^hbd-tf_I + ℋ^tf-tf_INow we shall derive these three terms in the Hamiltonian starting from the fundamental s-wave pairing Hamiltonian in the thin film. The details of the derivation are given in the appendix <ref>.§.§.§ Hamiltonian for Interaction between hybrid fermions (ℋ^hbd-hbd) Here we shall derive the pairing Hamiltonian that describes the attractive interaction between the helical hybrid fermions. Before the tunneling was switched on, the interaction between electrons in the Nth band of the thin film is described by the following Hamiltonian,ℋ_I(N, N)=- ∫d^2k/(2π)^2d^2p/(2π)^2V^N,N_k,pc^†_p, N s_y c^† T_-p, N c^T_-k,Ns_y c_k, NOnce the tunneling is switched on, the electronic states in the Nth band are hybridized and we have a 4-band model with a helical spin texture. So, it is better that the interaction Hamiltonian be written down in the helicity basis. Before we write down the Hamiltonian, we shall define the notations used to identify all four hybrid bands. Let m, m' run over the band indices t(top) and b(bottom). Similarly, λ and λ' run over the + and - helicity branches. Using this set of indices, we can write down the following interaction Hamiltonian that describes all possible pairing interactions(except the inter-band pairing) between the four hybrid bands: ℋ^hbd-hbd_I = - ∑_α, β∫d^2k/(2π)^2d^2p/(2π)^2 e^i(ϕ_p - ϕ_k) λλ'J^α,β_k,pa^†_k,αa^†_-k,αa_-p,βa_p,β J^α,β_k,p = V^N,N_k,pZ^α_kZ^β_pHere α = (m,λ) β = (m', λ') is used as a shorthand notation to denote the band indices. Note that λλ' = -1 if the scattering is between bands of opposite helicity. Z^α_k can be identified as the wavefunction renormalization of a hybridized electronic state as a result of tunneling with respect to a thin film state without tunneling. This implies that Z^α_k = 1 for a thin film state and Z^α_k = 0 for TI surface state before the tunneling was turned on. They have the following structure,Z^(t,±)_k = 1/2( 1 + δ_k,±/√(δ^2_k,± + t^2/d))Z^(b,±)_k = 1/2( 1 - δ_k,±/√(δ^2_k,± + t^2/d)) δ_k,± = 1/2(ϵ^tf_k,N - ϵ^surf_k,±) So we find here that, as a result of tunneling, a pairing potential exists between the helical hybrid fermions and it is proportional to the square of the renormalization factors of the bands corresponding to the initial and final states of the Kramers pair of electrons involved in pairing. This makes physical sense because the Z-factor determines the probability that an electron is in the thin film side of the interface. Only the electrons in the thin film side of the interface will experience an attractive interaction mediated by phonons. If Z^α_k = 1 for an electronic state of momentum k and in a hybrid band indexed by α, the electronic state is completely in the thin film side of the interface and experience the full attractive interaction. But in this case, the electronic state will not have the helical spin texture induced by the TI surface. On the other hand, if Z^α_k = 0 for an electronic state in the hybrid band, then the electron is entirely on the TI side of the interface and does not experience an attractive interaction. 
So we have to fine-tune the material parameters such that both the effects, the helical spin texture, and the attractive interaction are substantial. We shall show in this article quantitatively that this can be achieved by fine-tuning the thickness to 'quantum well resonance' at the Dirac point. A detailed discussion of this phenomenon will be presented in the next section.§.§.§ Interaction between hybrid fermions and the thin film fermions in the n ≠ N bandℋ^hbd-tfIn the limit that we are working, hybridization effects are substantial only for the thin film QW band at n=N. All the other n≠ N bands are much above or much below the Dirac point of the TI surface so that the tunneling effects due to surface fermions are negligible. But it is possible that the hybrid fermions can still experience attractive interaction with the thin film electrons lying in all of the n≠ N bands. This effect is captured by the interband scattering terms of the thin film interaction Hamiltonian given in Eqn.<ref>. Before tunneling is introduced, it is possible that a singlet Cooper pair of electrons in the Nth band can scatter to any of the n≠ N bands. The Hamiltonian describing such a process can be read out from the full interaction Hamiltonian given in Eqn.<ref> by fixing n' to N and letting n run over all n≠ N.∑_n≠ Nℋ_I(N, n) = - ∑_n≠ N∫d^2k/(2π)^2d^2p/(2π)^2V^N,N_k,pc^†_p, n s_y c^† T_-p, n c^T_-k,Ns_y c_k, NOnce the tunneling is switched on, the Cooper pair c^T_-k,Ns_y c_k, N is projected to the helicity basis of the t and b hybrid bands. In doing this, we arrive at an interaction Hamiltonian that describes the attractive interaction between the hybrid fermions and the off-resonance thin film fermions. Let us call the Hamiltonian by the name ℋ^hbd-tf_I and has the following definition, ℋ^hbd-tf_I = - ∑_n≠ N∑_α∫d^2k/(2π)^2d^2p/(2π)^2e^iϕ_pλ K^n,α_k,p c^†_k, n(-is_y) c^† T_-k, n a_-p,α a_p,α K^n,α_k,p = V^n,N_k,pZ^n_kZ^α_pNote that Z^n_k = 1 for all k and n≠ N since it corresponds to the renormalization factor of the thin film electrons which did not participate in tunneling. It has been included in the expression only for the purpose of generality. So here we find that even though the thin film electrons in the n ≠ N bands do not participate in tunneling, they do contribute to the superconducting phase of the hybrid fermions.§.§.§ Interaction between all the n ≠ N band thin film fermions (ℋ^tf-tf_I)It is also important to consider the attractive interaction between the electrons in the n≠ N bands that were not part of the tunneling. It is just the trivial BCS singlet pairing Hamiltonian. It is found by summing over ℋ_I(n, n') defined in Eqn.<ref> for all n,n' ≠ N. Let us call this Hamiltonian as ℋ^tf-tf_I. It has the form, ℋ^tf-tf_I =- ∑_n,n' ≠ N∫d^2k/(2π)^2d^2p/(2π)^2V^n,n'_k,pc^†_p, n s_y c^† T_-p, n c^T_-k,n's_y c_k, n'The full interaction Hamiltonian of the TI-thin film hybrid is now the sum of all three terms as given in Eqn.<ref>. § Z-FACTOR AND THE QUANTUM-WELL RESONANCE In section II, we studied the single-particle tunneling of electronic states in the topological surface to the QW thin film band lying closest to it. The tunneling effectively results in the hybridization of the electronic states and leads to the formation of four spin-split hybrid bands, with an emergent helical spin texture for each of them.In section III, we found that these helical hybrid electrons can couple with the confined phonons of the thin film and could result in an effective attractive interaction between them. 
The effect of tunneling is taken into account in the interaction strength by the renormalization factor Z^α_k defined in Eqns.<ref>. For instance,one can show that the type of pairing between two electrons with renormalization factors equal to unity will be trivial s-wave-like. This is because these electrons lie entirely in the thin film side and the tunneling effect on them is negligible. The other extreme is when the renormalization factor of the electrons is zero. This corresponds to the non-interacting surface electrons. From these intuitive arguments, one can anticipate that the ideal choice for the renormalization factor of an electronic state will be 1/2.It is at this limit the tunneling effect is maximum. This implies that the surface states that are initially non-interacting will acquire maximum attractive interaction in this limit. This is because it is the tunneling that actually induces an effective attractive interaction between surface fermions. In order to realize this maximum tunneling effect, the corresponding electronic states on both sides of the interface must be degenerate. In other words, the electronic states should be in quantum-well resonance. In this section, we will show this explicitly by studying the behavior of the renormalization factor as a function of the detuning parameter defined at the Dirac point. The renormalization factors were defined in Eqn.<ref> as a function of the band indices and momentum k. Since there are four hybrid bands, we have four renormalization factors for a fixed momentum k. One can show that they follow a general relationship,Z^(t,+)_k + Z^(b,+)_k =1Z^(t,-)_k + Z^(b,-)_k =1for any momentum state k. This implies that for a fixed helicity if one hybrid band is on the thin film side, the other band lies on the TI surface side. We are mostly interested in the interacting dynamics of the electronic states near the Dirac point. Therefore we set the momentum k = 0 in the above equations and study the evolution of the Z-factors as a function of the detuning parameter also defined at zero momentum. At k=0, there is a further simplification. We find that due to the crossing of the two helicity branches at the Dirac point, the respective Z factors turn out to be equal. That is, Z^(t,+)_k = 0 = Z^(t,-)_k = 0, and Z^(b,+)_k = 0 = Z^(b,-)_k = 0So at k=0, we essentially have ended up with just two Z factors subject to the constraint that their sum must be equal to unity. We shall make the following redefinitions,Z^t = Z^(t,±)_k = 0andZ^b = Z^(b,±)_k = 0 so we haveZ^t + Z^b=1Now we shall define the detuning parameter at k = 0. It has the form,δ̃(d) = δ_k=0,N(d)/t_dhere δ_k is defined in Eqn.<ref> in the section II as a 2×2 matrix in the spin space. But at k = 0, it turns out to be an identity matrix that can be treated as a number. δ̃ essentially gives the energy difference between the electronic state in the thin film band closest(indexed by n=N) to the Dirac point and the Dirac point of the TI surface. When the energy difference is zero, the electrons at k=0 are in quantum-well resonance and the tunneling effect will be maximum. Moving away from δ̃ = 0 is equivalent to detuning away from resonance. We defined the detuning parameter at k=0 because we are mostly interested in studying the interacting dynamics of the electrons near the Dirac point. In general, one can define a detuning parameter for any general k. Here we use thin film thickness d to tune the detuning parameter. 
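As a quick numerical illustration of these definitions (a sketch only; the tunneling strength t_d = 0.2 eV is the value used in the later numerics, while the range of the detuning is arbitrary), the k = 0 renormalization factors of Eqn.<ref> can be evaluated directly:

import numpy as np

# k = 0 renormalization factors as the detuning delta_0 = delta_{k=0} is swept through zero.
t_d     = 0.2                              # eV, effective tunneling strength (value used later)
delta_0 = np.linspace(-1.0, 1.0, 401)      # eV, detuning of the QW band from the Dirac point
Z_t = 0.5 * (1 + delta_0 / np.sqrt(delta_0**2 + t_d**2))
Z_b = 0.5 * (1 - delta_0 / np.sqrt(delta_0**2 + t_d**2))

assert np.allclose(Z_t + Z_b, 1.0)         # the sum rule Z^t + Z^b = 1
i0 = np.argmin(np.abs(delta_0))
print(Z_t[i0], Z_b[i0])                    # both approach 1/2 at quantum-well resonance

Both factors approach 1/2 at resonance and tend to 0 or 1 far from it, which is exactly the behavior derived analytically below.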
From Eqns.<ref> and <ref>, we can deduce the following simple relationship between the renormalization factors and the dimensionless detuning parameter δ̃ at zero momentum, Z^t(δ̃) = 1/2( 1 + δ̃/√(1 + δ̃^2)), Z^b(δ̃) = 1/2( 1 - δ̃/√(1 + δ̃^2)). Fig.<ref> shows the results. Panel (b) plots Z^t and Z^b as a function of the detuning parameter, while panel (a) shows the band spectrum of the thin film and the TI surface in three different limits of detuning. When δ̃≪ 0, Z^b ≈ 1 and Z^t ≈ 0. This implies that the bottom hybrid band is the thin film transverse band while the top band is the surface Dirac cone. On the other hand, when δ̃≫ 0, the bottom band is the surface Dirac cone and the top band is the thin film transverse band. This is clearly understood once we look at the band dispersion shown in Fig.<ref>(a). In these two limits, the tunneling effects are perturbative. One can notice that the renormalization factor Z^t, which follows the surface band when δ̃≪ 0, is nearly zero in this limit; the same holds for Z^b when δ̃≫ 0. This implies that the surface electrons do not experience a substantial attractive interaction when |δ̃| ≫ 0. But as δ̃→ 0 from either side, things begin to change. We find that both renormalization factors approach 1/2 from either side, which implies that the tunneling gets stronger and becomes non-perturbative. One can trace the surface Dirac cone through Z^t when δ̃ < 0 and through Z^b when δ̃ > 0. Both quantities rise as δ̃ approaches zero and reach a maximum equal to 1/2 at δ̃ = 0. Recall that the interaction strength between the helical fermions is proportional to Z^2. This peak at δ̃ = 0 is thus clear evidence that the surface fermions experience the maximum effective attractive interaction at δ̃ = 0. On the other hand, the electrons that were on the thin film side when the tunneling was zero now experience a comparatively weaker attractive interaction. This is evident from the evolution of Z^b when δ̃ < 0 and of Z^t when δ̃ > 0: these two renormalization factors reach a minimum at δ̃ = 0, implying that the effective attractive interaction on the thin-film-derived states weakens. In conclusion, by studying the evolution of the renormalization factors as a function of the detuning parameter, we have shown that the effective attractive interaction acquired by the surface fermions near the Dirac point is strongest when the thin film QW band is in quantum-well resonance with the surface Dirac cone. The fact that the Z-factors approach 1/2 at resonance indicates that there is no clear distinction between the thin film fermions and the surface fermions at quantum-well resonance. This supports our earlier proposition that the electronic states at quantum-well resonance are hybridized: the eigenstates are a quantum superposition of the thin film and the surface states. They acquire a helical spin structure from the surface side and an effective attractive interaction from the thin film side. We shall study the superconductivity of these helical hybridized fermions within BCS mean-field theory in the coming sections.

§ EFFECTIVE MEAN-FIELD HAMILTONIAN AND THE GAP EQUATION

§.§ Mean-field approximation

Here we shall use mean-field theory to decouple the four-fermion interaction Hamiltonian. Let Δ^hbd_α(k) be the order parameter on the helical hybrid band of index m (t or b) and helicity λ (= + or -); note that α = (m,λ). Similarly, define Δ^tf_n to be the order parameter on the thin film band of index n ≠ N.
Now we apply mean-field approximation to the 4-fermion interaction Hamiltonian in Eqn.<ref>,^hbd_k,α = ∫d^2p/(2π)^2[∑_β = {m',λ' }λ' e^iϕ_pJ^α,β_k,p<a_p,βa_-p,β>+ ∑_n≠ NK^n,α_k,p< c^T_p,n(is_y)c_-p,n> ] ^tf_k,n = ∫d^2p/(2π)^2[∑_α = {m,λ}λ e^iϕ_pK^n,α_k,p<a_p,αa_-p,α> + ∑_n'≠ N V^n,n'_k,p< c^T_p,n'(is_y)c_-p,n'> ] ℋ_MF = ∫d^2k/(2π)^2[∑_α = {m,λ}λ^hbd_k,α e^-iϕ_k a^†_k,αa^†_-k,α + ∑_n ≠ N^tf_k,n c^†_k,n(-is_y)c^† T_-k,n]Interestingly, the order parameters on the helical bands are of odd parity. So we find that the helical fermions have an 'effective' p-wave pairing even though we started with a purely s-wave interaction. This is because the spin rotation symmetry(SRS) is broken by the induced spin-orbit coupling, while the time-reversal symmetry is preserved<cit.>. On the other hand, the pairing amplitude on the n≠ N thin film transverse bands are of even parity.§.§ The superconducting gap equation Using the mean-field theory, we derived the most general expression for the superconducting order parameter on the four helical hybrid bands and the remaining spin-degenerate thin film transverse bands. Note here that in our case, the fundamental origin of the attractive interaction is the electrons coupling to phonons. Since Debye frequency sets the UV cut-off for phonon modes, only electrons whose energy lies within the range [μ - ω_D, μ + ω_D] can experience the attractive interaction. Here μ is the chemical potential. Here we focus on the limit ω_D≪μ. This puts a strict constraint on the number of bands and the number of electrons participating in the pairing interaction. Only those bands that cross the Fermi level needed to be considered for pairing interaction. All those bands that lie above the Fermi level can be ignored.Before hybridization, the number of bands that cross the Fermi level can be calculated by taking the integer value of the expression, d/π√(2 m μ/ħ^2) + 1/2. This integer will turn out to be the same as N, the index of the band that is hybridized with the surface Dirac cone. Hence before hybridization, we essentially have 2N Fermi surfaces because the thin film bands are spin-degenerate. Now the chemical potential should be set within the bulk energy gap of the topological insulator. Once the thin film is deposited over the TI surface, the Nth band is hybridized and we effectively have a 4-band model within the bulk gap. By fine-tuning the chemical potential further, it is possible that one can have the system with either three hybrid Fermi surfaces or just one Fermi surface (see fig.<ref>). In the latter case,both the positive and negative helicity branches of the top band lie above the Fermi level and therefore do not participate in pairing. We shall derive the superconducting gap equation for these two cases separately here.§.§.§ 3 hybrid Fermi surfaces + 2N - 2 thin film Fermi surfacesNow consider the case when the Fermi level is adjusted such that the hybrid has three Fermi surfaces within the thickness regime that we like to explore. We shall write down a gap equation for this specific case. The innermost Fermi surface(FS) was formed by the positive helicity branch of the t (top) band while the next FS was formed by the negative helicity branch of the t band. The outermost FS is formed by the positive branch of the b (bottom) band. 
At this point, it is more convenient to express the superconducting gap and the coupling strength as functions of Fermi surface indices rather than the band indices.In the weak-pairing limit (ω_D ≪μ), only electronic states very close to the Fermi surface take part in pairing. Thus, the electron renormalization factor that enters the pairing potential matrix can be re-expressed in terms of the Fermi momenta of the respective Fermi surfaces rather than the band indices. To support this, let us define three quantities Z_1, Z_2 and Z_3 for the three Fermi surfaces such that,Z_1 =Z^(t,+)_k_F1,Z_2 = Z^(t,-)_k_F2Z_3 =Z^(b,+)_k_F3where 1, 2, and 3 are the hybrid Fermi surface indices from smallest to largest in terms of size. Thus, k_F1, k_F2 and k_F3 are the Fermi momenta on these three hybrid Fermi surfaces.Since the renormalization factor depends only on the magnitude of momentum, Z_i is the same for all electrons in the Fermi surface indexed by i. The approximation we will do here is that we assume Z_i factor is the same for all the electronic states lying within the energy window [-ω_D,ω_D] measured from the chemical potential, given that the electronic states lie near the ith hybrid Fermi surface. This approximation allows us to re-express the interaction potential matrix in terms of the Fermi surface indices rather than the band indices. Let us define,𝒥^i,j_k,p = V^N,N_k,p Z_i Z_j𝒥^ij_k,p is the interaction matrix element that gives the scattering strength of Cooper pair from the ith hybrid Fermi surface to the jth hybrid Fermi surface.One can also redefine K^nα_k,p in terms of the Fermi surface indices. From Eqn.<ref>, we have,𝒦^n,i_k,p = V^n,N_k,p Z_nZ_iwhere 𝒦^n,i_k,p determines the scattering of Kramer's doublets from the ith hybrid Fermi surface to the 2nth or (2n - 1)th (n < N) thin film Fermi surface. Here 2nth and (2n - 1)th Fermi surfaces are formed by the helicity subbands of the nth spin-degenerate band. Due to this spin-degeneracy, the two helical Fermi surfaces overlap and hence the interaction parameters are the same for both. From the definition of V^N,N_k,p in Eqn.<ref>, we find that the matrix elements 𝒥^i,j_k,p and 𝒦^n,i_k,p are independent of momenta for electronic states lying within the Debye frequency measured from the Fermi level and zero otherwise. That is, we can write down the effective interaction potential in the following simple way,V^n,n'_k,p =V^n,n'θ(ω_D - ξ^tf_k,n)θ(ω_D - ξ^tf_p,n') 𝒥^i,j_k,p = 𝒥^i,jθ(ω_D - ξ^hbd_k,i)θ(ω_D - ξ^hbd_p,j) 𝒦^n,i_k,p = 𝒦^n,iθ(ω_D - ξ^tf_k,n)θ(ω_D - ξ^hbd_p,i)where θ(x) is the Heavyside step function and the coupling matrix elements V^n,n', 𝒥^i,j and 𝒦^n,i are independent of momenta. Also ξ^tf(hbd)_k,n = ϵ^tf(hbd)_k,n - μ is just the energy of the thin film(hybrid) fermions measured from the chemical potential, involved in the interaction. With these definitions, it is straightforward to derive the superconducting gap equation. 
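Before writing down the coupled equations, it is useful to record the standard reduction that the weak-pairing window permits; this is a schematic step, assuming a momentum-independent gap and density of states within the Debye window. Each momentum integral collapses as ∫d^2p/(2π)^2 (⋯) ≈ 𝒩_j∫_-ω_D^ω_D dξ (⋯), and ∫_-ω_D^ω_D dξ/2√(ξ^2 + Δ^2_j) = sinh^-1(ω_D/Δ_j) ≈ ln(2ω_D/Δ_j) for Δ_j ≪ ω_D, where 𝒩_j is the density of states at the jth Fermi surface. Every scattering channel in the gap equation therefore contributes a term of the form 𝒥^i,j𝒩_jΔ_j sinh^-1(ω_D/Δ_j), and the coupled integral equations written below reduce to a closed algebraic system of this type, which is what is solved numerically in the later sections.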
We shall also redefine the superconducting order parameters of the hybrid fermions also in terms of the Fermi surface indices as follows:^hbd_k,1 ≈ ^hbd_k,t,+,^hbd_k,2≈^hbd_k,t,- ^hbd_k,3 ≈ ^hbd_k,b,+It has the form, ^hbd_k,i - ∑^3_j=1∫d^2p/(2π)^2𝒥^i,j_k,p^hbd_p,j/2√((ξ^hbd_p,j)^2 + (^hbd_p,j)^2)= ∑^N-1_n=1∫d^2p/(2π)^2𝒦^n,i_k,p^tf_p,n/2√((ξ^tf_p,n)^2 + (^tf_p,n)^2) ^tf_k,n - ∑^N-1_n'=1∫d^2p/(2π)^2V^n,n'_k,p^tf_p,n'/2√((ξ^tf_p,n')^2 + (^tf_p,n')^2)= ∑^3_i=1∫d^2p/(2π)^2𝒦^n,i_k,p^hbd_p,j/2√((ξ^hbd_p,i)^2 + (^hbd_p,i)^2) ξ^hbd_k,1 = ϵ^hbd_k,t,+ - μ, ξ^hbd_k,2 = ϵ^hbd_k,t,- - μ ξ^hbd_k,3 = ϵ^hbd_k,b,+ - μ, ξ^tf_k,n = ϵ^tf_k,n - μ With the weak-pairing approximation discussed above, the magnitude of the superconducting order parameters at all the Fermi surfaces turns out to be momentum-independent. The only possible momentum dependence on the gap magnitude could come from the restriction set by the Debye frequency. With this in mind, we shall define the parameters ^hbd_i and ^tf_n such that,^hbd_k,i = ^hbd_i θ(ω_D - ξ^hbd_k,i) ^tf_k,n = ^tf_n θ(ω_D - ξ^tf_k,n)where θ(x) is the Heavyside step function. In all the future computations, we shall be representing the order parameters in dimensionless form as ^hbd_i = ^hbd_i/ω_D and ^tf_n = ^tf_n/ω_D where ω_D is the Debye frequency of the thin film metal.A schematic picture of the coupling of Cooper pairs of electrons between different Fermi surfaces within the weak-coupling approximation before and after the tunneling is introduced is shown in fig.<ref>.§.§.§ 1 hybrid Fermi surface + 2N - 2 thin film Fermi surfaces Suppose that the Fermi level is fine-tuned to one hybrid Fermi surface within the bulk gap. That is, both the helicity branches of the top band are above the Fermi level (see fig.<ref>). Hence the top band does not contribute to the pairing at all. It is only the positive ( or the negative) helicity branch of the bottom band that crosses the Fermi level. One can observe that in the N=1 limit when there are no QW bands crossing the Fermi level, we effectively have a single band of helical fermions subject to attractive interaction. We shall study this limit more carefully in the next section. Since there is just one hybrid band crossing the Fermi level, the superconducting gap equation becomes far easier in this limit. Consider that it is the positive helicity branch of the b band that crosses the Fermi level.In this case, only the coupling constant 𝒥^33_k,p survives. All the other elements vanish in this limit. For the interaction with thin film fermions, only 𝒦^n,3 is needed to be taken into account.Hence in this limit, the superconducting gap equation becomes, ^hbd_k - ∫d^2p/(2π)^2𝒥^3,3_k,p^hbd_p/2√(ξ^2_p,3 + (^hbd_p)^2)= ∑^N-1_n=1∫d^2p/(2π)^2𝒦^n,3_k,p^tf_p,n/2√((ξ^tf_p,n)^2 + (^tf_p,n)^2) ^tf_k,n - ∑^N-1_n'=1∫d^2p/(2π)^2V^n,n'_k,p^tf_p,n'/2√((ξ^tf_p,n')^2 + (^tf_p,n')^2)= ∫d^2p/(2π)^2𝒦^n,3_k,p^hbd_p/2√(ξ^2_p,3 + (^hbd_p)^2) where ^hbd_k≈^hbd_k,b,+. Here too we define ^hbd such that, ^hbd_k = ^hbdθ(ω_D - ξ^hbd_k) § THE N = 1 FOUR BAND MODEL Here we shall present our work's simple yet most interesting result. Consider the case when the thin film transverse band of quantum number N = 1 is in resonance with the Dirac point of the topological insulator. Quantitatively from Eqns.<ref> and <ref>, we find that the following condition should be satisfied: ϵ^tf_0,n=1 = ϵ^surf_0,±. In other words, the detuning parameter δ̃(d) = 0. 
If the material parameters of the topological insulator are fixed, then a practical way to achieve this condition is to tune the thin film thickness. So once the thickness is set and the thin film is deposited over the TI surface, the tunneling results in the hybridization of the electronic states near k = 0 resulting in the formation of four hybrid bands. Since we are in the N=1 limit, there are no trivial (or off-resonance) QW bands of index n≠ N crossing the Fermi level. That is, only the hybridized fermions are present near the Fermi level. We know that the thin film favors an effective attractive interaction between electrons at zero temperature mediated by phonons. Therefore, we essentially have an effective model with helical hybridized fermions interacting via an effective attractive interaction between them. The full BCS interaction Hamiltonian in this N=1 limit attains the form, ℋ_I = ℋ^hbd-hbd_I = - ∑_α, β∫d^2k/(2π)^2d^2p/(2π)^2 e^i(ϕ_p - ϕ_k) λλ'J^α,β_k,pa^†_k,αa^†_-k,αa_-p,βa_p,β We have seen in the previous section that by fine-tuning the Fermi level, we essentially have phases with either three hybrid Fermi surfaces or just one hybrid Fermi surface as shown in Fig.<ref>. In this N=1 limit, these are the only Fermi surfaces present in the system. In the first part, we shall put forward the theoretical model in the two cases separately. In the last part, we shall tune various material parameters and look for possible enhancement of the superconducting gap.§.§ Theoretical models§.§.§ Single Fermi surface modelHere we consider the case when the Fermi level is tuned to one Fermi surface. This Fermi surface can be formed by either the positive or negative helicity branch of the bottom band. Since the interaction is mediated by the phonons, only the electronic states that lie within the energy window ω_D measured from the Fermi level experiences an attractive interaction. In this context, if the magnitude of the energy difference between the chemical potential and the emergent Dirac point of the bottom band is greater than the Debye frequency, then only the positive(negative) helicity states of the b band experience attractive interaction. The negative(positive) branch is essentially non-interacting. Therefore, the projected Hamiltonian in the helicity basis resembles a single-band BCS problem for 'spinless fermions'. If the Fermi level crosses the positive helicity branch as shown in Fig.<ref> the Hamiltonian attains the following simple form,ℋ =∫d^2k/(2π)^2[a^†_k,α[ϵ^hbd_k,α - μ]a_k,α- ∫d^2p/(2π)^2𝒥^3,3_k,p e^i(ϕ_p - ϕ_k) a^†_k,αa^†_-k,αa_-p,αa_p,α]where 𝒥^3,3_k,p is defined in Eqn.<ref>. α = {b,+} is the band index.Following the procedure explained in Section V, mean-field Hamiltonian becomes, ℋ_MF = ∫d^2k/(2π)^2[ ^hbd_k e^-iϕ_ka^†_k,αa^†_-k,α + h.c ] ^hbd_k = ∫d^2p/(2π)^2𝒥^3,3_k,pe^iϕ_p<a_p,αa_-p,α> 𝒥^3,3_k,p =V^N,N_k,p Z_3 Z_3 Here N=1 and 𝒥^3,3 is the renormalized interaction potential between the helical fermions. Recall that V^N,N is the thin film interaction potential matrix element between electrons in the Nth band.Z_3 is the renormalization factor of the electrons in the positive helicity branch of the bottom hybrid band at the Fermi momentum k_F. Z_3 essentially calculates the probability amplitude of a Kramer's pair of fermions to be in the thin film side of the interface. 
Since the hybrid fermions are a linear superposition of the thin film and the TI surface states, they acquire a helical spin texture from the TI surface side while also experiencing an effective attractive interaction mediated by the thin film phonons. The superconducting order is of odd parity as expected. Here we shall present certain limits where simple analytical results for the superconducting gap can be derived. We will also show a limit where the effective pairing essentially goes back to singlet order. To identify these limits, let us define a parameter called μ_b with the following definition, μ_b = μ - ϵ^hbd_0,b,+ It is the difference in energy between the Fermi level and the emergent Dirac point of the bottom band. μ_b = 0 implies the Fermi level is aligned with the Dirac point and the Fermi surface reduces to just a Fermi point. So one can call this term an 'effective' chemical potential of the bottom band. Let us represent μ_b in dimensionless form by dividing it with the tunneling strength t_d defined in Eqn.<ref>. That is,μ̃_b = μ_b/t_d Here the thickness d is fixed. When μ̃_b ≪ 1, we find that the energy dispersion of the states that cross the Fermi level is essentially a linear function of k. That is, the energy of Fermi electrons can be approximated as, ϵ^hbd_k,b,+ - μ≈ +A_b |k| - μ_b A_b = A_0/2 is the effective spin-orbit coupling on the helical fermions in the b band near the Dirac point. When the Debye frequency ω_D < μ_b, only the positive helicity branch is interacting. In this limit, one can solve Eqn.<ref> analytically to arrive at a simple expression for the magnitude of the p-wave pairing gap, ^hbd = 2 ω_DExp[-4π A^2_b/μ_b 𝒥^3,3]Note here that if μ_b<ω_D, then both the negative and the positive helicity branches of the b band fall within the energy window [μ_b - ω_D, μ_b + ω_D ]. This implies that the electronic states of both helicities that fall within this window will be interacting. The effective theory described in Eqn.<ref> does not explain the full physics in this limit.A rather interesting limit is when the chemical potential μ_b = 0. In this limit, hybrid electronic states of both the helicity branches experience attractive interaction on an equal footing. Therefore, the triplet component of the order parameter cancels out. That is, we essentially have a purely singlet-pairing superconducting phase of helical Dirac fermions. In the limit when ω_D ≪ t_d, the effective low-energy interacting Hamiltonian in this limit has the form: ℋ = ∫d^2k/(2π)^2 [A_b d^†_k,b[ s×k.ẑ]d_k,b- ∫d^2p/(2π)^2 𝒱_k,pd^†_p,b s_y d^† T_-p,b d^T_-k,bs_y d_k, b] 𝒱_k,p ≈ V^1,1_k,p/4where V^1,1_k,p is the thin film phonon-mediated interaction potential between the electronic states in the transverse bands indexed by N = 1. Its definition is given in Eqn.<ref>. The factor of 4 is because in the limit when ω_D ≪ t_d,the renormalization factor is diagonal in the spin basis with both the diagonal elements equal to 1/2. In other words, the electrons involved in the interaction are in quantum-well resonance. d_k,b = [ d_k,b,↑ d_k,b,↓ ] is the 2-component spinor representing the annihilation operator for emergent Dirac fermions of the b band in the spin basis. This effective theory has extra emergent symmetries in contrast to the finite chemical potential case. One can see that it has both the particle-hole symmetry and the Lorentz symmetry. 
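As a rough consistency check of this expression (restricted to the strictly linear regime assumed above), note that the density of states of the single helical branch at the Fermi level is 𝒩_h = ∫d^2k/(2π)^2δ(A_b|k| - μ_b) = μ_b/2π A^2_b, so the exponent in the gap formula can be rewritten as -4π A^2_b/μ_b 𝒥^3,3 = -2/𝒩_h𝒥^3,3, a BCS-like form in which the dimensionless pairing strength is 𝒩_h𝒥^3,3. As μ_b → 0 the density of states vanishes and the gap is exponentially suppressed, so exactly at the Dirac point pairing can only develop once the interaction exceeds a finite threshold.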
Since there are no Fermi electrons in this limit to induce Cooper instability, the coupling constant must be greater than a critical value for the superconducting phase transition to happen<cit.>. The critical value of the interaction strength is given by,𝒱_c = 4π A^2_b/ω_DIf the interaction strength is tuned to the quantum critical point, the effective theory possesses emergent surface supersymmetry(SUSY). So what we have here is essentially a very practical platform to study the dynamics of the emergent supersymmetric quantum matter.§.§.§ Three Fermi surface model Now consider the case when the Fermi level is adjusted in such a way that we effectively have three Fermi surfaces. A schematic picture of such a possibility is shown in Fig.<ref>. To realize a three Fermi surface model, the effective chemical potential of the b band defined as μ_b in Eqn.<ref> has to be greater than 2t_d. In this limit, the Fermi surface closest to the Dirac point is formed by either positive or negative helicity branches of the t band depending on the fine-tuning of the chemical potential. This is indexed by 1. The second and third Fermi surfaces are formed by the negative helicity branch of the t band (band index - (t,-))and the positive helicity branch of the b band (band index - (b,+)) respectively. They are indexed as 2 and 3 respectively. Since the attractive interaction is mediated by phonons, only the electronic states lying within the energy window ±ω_D measured from the chemical potential actually experience an attractive interaction. Since we are working in the limit where ω_D≪μ, the absolute value of the chemical potential, essentially only the electrons in and around the Fermi level take part in the interaction. Also note that, since we are in the N=1 limit, only the helical hybrid fermions are present in the system. The mean field Hamiltonian then takes the form,ℋ_MF = ∫d^2k/(2π)^2[ ^hbd_k,1 e^-iϕ_ka^†_k,t,+a^†_-k,t,+- ^hbd_k,2 e^-iϕ_ka^†_k,t,-a^†_-k,t,-+ ^hbd_k,3 e^-iϕ_ka^†_k,b,+a^†_-k,b,+ + h.c ]Here we assumed that the Fermi level crosses the positive helicity branch of the top band to form the Fermi surface that is closest to the Dirac point as shown in Fig.<ref>(a). Also, the energy difference between the Dirac point of the t band and the Fermi level must be higher than the Debye frequency for the above Hamiltonian to effectively describe the pairing physics. Otherwise, the electrons in the negative helicity branch of the t band near k=0 will also be interacting. This is not taken into account in the effective Hamiltonian defined here. As long as the three hybrid Fermi surfaces do not overlap in the momentum space, the superconducting order on each of them is of p-wave symmetry. Notice that since the Fermi surface indexed by 2 is formed by the negative helicity branch of the t band, the sign of the order parameter is negative. That is, it differs from the order parameter on the positive helicity branch by a phase of π. If this Fermi surface happens to overlap with a positive helicity branch of the b band, which could happen in case the tunneling is zero or negligible, then one can find that the triplet component of the order parameter cancels out. In that case, we are left with an even-parity spin-singlet pairing phase. The superconducting gap equation satisfied by ^hbd_i's is similar to what is given in Eqn.<ref>. But since there are no thin film FSs, the RHS of Eqn.<ref> vanishes. 
So we finally obtain a simple form for the gap equation which we shall write down below for clarity,^hbd_k,i - ∑^3_j=1∫d^2p/(2π)^2𝒥^ij_k,p^hbd_p,j/2√(ξ^2_p,j + (^hbd_p,j)^2)=0where i=1,2,3. The matrix elements of Ĵ are given in Eqn.<ref>. It describes the scattering strength of Kramer's doublets from the Fermi surface indexed by i to j. So we find here that we have to effectively solve a set of 3 non-linear coupled integral equations to find the superconducting order parameters in each Fermi surface. A simple analytical solution as was done in the single Fermi surface case is difficult to realize here. §.§ Numerical results: Solving the gap equation The objective of this part of the section is to study the evolution of the superconducting order in the N=1 limit as a function of various tuning parameters. Basically, our goal is to look for various ways to enhance the superconductivity. The role of the thin film in this hybrid system is to induce an effective attractive interaction between the helical surface fermions. Therefore, a straightforward way to enhance the pairing interaction between the helical hybrid fermions will be to tune the electron-phonon coupling strength of the thin film metal. In the case of a topological insulator, it is the spin-orbit interaction that decides the Fermi velocity of the surface Dirac fermions. So understanding the evolution of the superconducting order as a function of the spin-orbit coupling strength is important.Here we begin by emphasizing again the role played by quantum-well resonance in realizing a ground state with attractively interacting helical fermions and in enhancing the superconducting order. This is a continuation of the physics discussed in section IV. There we discussed how the effective attractive interaction attained by the surface fermions through tunneling reaches its maximum when the two systems are in quantum-well resonance. We used the evolution of the Z-factors of the two hybrid bands as a function of the detuning parameter to prove this point. Having derived the pairing gap equation, we can finally study how the pairing gap on the Fermi surfaces evolves as a function of the detuning parameter. This will give a rather concrete idea of why we must tune the thin film thickness to quantum-well resonance for a given N to study the interacting physics of surface fermions.In short, we essentially write down the p-wave superconducting gap on the hybrid bands as a function of the three tuning parameters,^hbd_i≡^hbd_i(δ̃(d), λ̃^bulk, ṽ)Here λ̃^bulk is the dimensionless form of the phonon-mediated interaction strength of the 3D bulk counterpart of the metal thin film. In terms of the electron-phonon coupling strength G_fp defined in Eqn.<ref>,λ̃^bulk = m k^bulk_F/2π^2ħ^2 G^2_fp k^bulk_F is the bulk Fermi momentum of the metal for a given chemical potential. In the calculations here, we shall only tune the electron-phonon coupling strength of the metal while keeping all other parameters constant. The dimensionless detuning parameter is defined in Eqn.<ref>. ṽ here is the dimensionless form to represent the Fermi velocity of the surface fermions. For the class of topological insulators that we consider, it is proportional to the SOC strength of the TI. It has the following definition,ṽ = A_0/ħ cwhere A_0 is the SOC strength of the topological insulator and c is the speed of light. Tuning down ṽ is essentially equivalent to moving towards the flat band limit of the TI surface. 
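To illustrate how equations of this type are handled, the following minimal sketch iterates the three-Fermi-surface gap equation to self-consistency after the weak-pairing reduction discussed above (so that each momentum integral becomes 𝒩_jΔ_j sinh^-1(ω_D/Δ_j)); the Debye energy, densities of states, and coupling matrix below are illustrative placeholders, not the Pb-based parameters used for the actual results:

import numpy as np

# Fixed-point iteration for Delta_i = sum_j J[i,j] * N[j] * Delta_j * arcsinh(omega_D/Delta_j).
# All numbers are illustrative placeholders, not the material parameters used in the paper.
omega_D = 0.01                        # Debye energy (eV)
N_dos   = np.array([0.2, 0.3, 0.5])   # DOS at the three hybrid Fermi surfaces (1/eV per unit area)
J       = 0.5 * np.ones((3, 3))       # J[i,j] = V * Z_i * Z_j, pair-scattering strengths (eV x area)

Delta = np.full(3, 1e-4)              # initial guess for the three gaps (eV)
for _ in range(1000):
    Delta_new = J @ (N_dos * Delta * np.arcsinh(omega_D / Delta))
    if np.max(np.abs(Delta_new - Delta)) < 1e-12:
        break
    Delta = Delta_new

print("Delta_i/omega_D =", Delta / omega_D)   # dimensionless gaps on the three Fermi surfaces

The physical calculation proceeds in the same way, with 𝒥^i,j = V^N,NZ_iZ_j evaluated from the material parameters and with the three tuning parameters δ̃, λ̃^bulk, and ṽ entering through the couplings and the dispersions.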
Now we shall study the evolution of the superconducting order as a function of these dimensionless tuning parameters. For numerical purposes, we shall be using material parameters corresponding to Pb(lead) for the thin film except in the section where we tune the interaction strength.§.§.§ Resonance effectHere we shall study the evolution of the p-wave pairing gaps as a function of the dimensionless detuning parameter at k = 0 defined in Eqn.<ref>. The detuning parameter is varied by tuning the thin film thickness. We shall solve the gap equation both before and after the tunneling is turned on. The Fermi level is set at 0.05 eV above the Dirac point of the topological insulator. Essentially, we set the Fermi level close to the Dirac point because we are tuning the detuning parameter defined at k=0. If the Fermi level is much above or below the Dirac point, then the detuning parameter should be defined at the Fermi momentum instead of at k=0.Fig.<ref> shows the results. Here the detuning parameter is varied from -2 to 2. We have studied the evolution of the pairing gaps ^hbd_1(Red), ^hbd_2(Green) and ^hbd_3(Blue)on the three Fermi surfaces(if present) before and after the tunneling is switched on. Before the tunneling is turned on, the innermost Fermi surface is formed by the surface Dirac cone. The second and third Fermi surfaces are formed entirely by the two helicity branches of the thin film band and hence they overlap. Essentially in this limit, the TI surface is non-interacting, which means we are studying just the thin film superconductivity. The purpose is just to set a benchmark for the study of the superconducting order once the tunneling is turned on.Therefore, ^hbd_1 is always zero. And we have ^hbd_2 = ^hbd_3. The triplet component of the order parameter cancels out and we have the trivial s-wave superconducting order as expected. When the detuning parameter is increased, the thin film band starts moving up. This is because, in our convention, increasing the detuning parameter is equivalent to reducing the thin film thickness. At a particular thickness, the bottom of the band crosses the Fermi level. Beyond this point, there are no interacting Fermi electrons. Hence superconductivity vanishes as the detuning parameter is increased further. Now when the tunneling is turned on, the surface band and the thin film band get hybridized. From the figure, we understand that the pairing physics is not very different from the zero-tunneling result when |δ̃| ≫ 0. But as we fine-tune δ̃ to zero, we start seeing the effects of electronic hybridization. The electrons in the innermost Fermi surface, which essentially is the surface Dirac cone start interacting and a superconducting gap opens up. The magnitude of the gap increases as we fine-tune to δ̃ = 0 from the left side. One can identify that ^hbd_1(the red points in the plot) is the effective pairing gap on the Dirac cone. Note that the contribution to the pairing gap also comes from the scattering of Cooper pairs to the other two Fermi surfaces as well. When the detuning parameter is increased further, the bottom of the t hybrid band crosses theFermi level. This means, there is essentially a crossover from the three Fermi surface to the single Fermi surface limit. Both the 1st and the 2nd Fermi surfaces vanish beyond this limit. When the tunneling was zero, there was no superconductivity in this limit because the surface was essentially non-interacting. 
But here we see that a superconducting gap exists on the Fermi surface formed by the Dirac cone (the blue points in the plot). This is clear evidence of an effective attractive interaction between the surface Dirac fermions. We also see that the magnitude of the gap decreases as the detuning parameter is tuned away from zero. This demonstrates that quantum-well resonance is the ideal point at which to study the attractively interacting physics of surface Dirac fermions.

§.§.§ Dependence on the interaction strength

In the previous part, we established the importance of quantum-well resonance for realizing a phase of attractively interacting helical surface fermions. From here onwards, we therefore fine-tune the thickness to quantum-well resonance at the Dirac point. In this limit, the electronic states close to the Dirac point on both sides of the interface are strongly hybridized, and there is no clear distinction between the thin film and the TI surface fermions. These resonating hybrid fermions acquire the emergent spin-orbit coupling from the TI surface side and an effective attractive interaction from the thin film side. The net result is helical fermions with an effective attractive interaction between them. Here we tune the electron-phonon coupling strength G_fp of the thin film metal and study the evolution of the pairing gap on the hybrid Fermi surfaces. To represent the tuning parameter in dimensionless form, we defined the bulk coupling constant of the metal λ̃^bulk in Eqn.<ref>. We keep all other material parameters, including the Debye frequency and the effective electron mass, constant, and we use the material parameters of Pb (lead) for the numerical calculations. The cases of a single Fermi surface and of three Fermi surfaces are considered separately, and the effective chemical potential is fine-tuned further in each case to understand its significance.

Single Fermi surface

Fig.<ref> shows the results when the chemical potential is tuned to a single Fermi surface. Here we plot the magnitude of the p-wave superconducting gap in dimensionless form (with respect to the Debye frequency) at three different chemical potential values, μ̃_b = 0.25, 0.50, 0.75. The chemical potential is expressed in dimensionless form as μ̃_b = μ_b/t_d, where the tunneling strength is fixed at t_d = 0.2 eV. The chemical potential is set very close to the Dirac point because the Fermi electrons are then at quantum-well resonance; in addition, the electron band is then nearly linear, resembling a surface Dirac cone. The corresponding energy spectrum is shown in Fig.<ref>. We set the spin-orbit coupling strength to A_0 = 1.5 eVÅ. To arrive at this result, we numerically solved Eqn.<ref> self-consistently at different values of the coupling strength. As expected, we find an exponential enhancement of the superconducting gap as the coupling constant λ̃^bulk is increased. Increasing the chemical potential also enhances the superconducting gap. The results can be explained as follows: since the chemical potential is set close to the Dirac point (μ̃_b < 1), the band is nearly linear when it crosses the Fermi level. Hence the approximate analytical expression for the pairing gap magnitude derived in Eqn.<ref> works well in these cases. There we found that Δ^hbd ∝ e^-1/μ_b 𝒥^3,3, and 𝒥^3,3 is proportional to the electron-phonon coupling constant. Thus both the chemical potential and the interaction strength have a similar enhancing effect on the superconducting gap magnitude.
This is in contrast to a 2D quadratic electronic dispersion. There the density of states is independent of the chemical potential. Note here that, if μ_b ≥ t_d, then the band is no longer linear. In this case, the analytical result derived in Eqn.<ref> is no longer a good approximation. In addition, since the Fermi electrons lie away from the quantum-well resonance, the tunneling effects will be perturbative. Three Fermi surfaces Fig.<ref> shows the results when the chemical potential is tuned to three Fermi surfaces. As discussed before, the effective chemical potential, μ_b must be of the order of 2t_d or greater than that to realize a three Fermi surface model. The corresponding energy spectrum is given in Eqn.<ref> Here we studied the evolution of the p-wave pairing gaps on the three Fermi surfaces as a function of the electron-phonon coupling strength of the thin film metal at three different values of the chemical potential. Here ^hbd_i(i=1,2,3) is the SC gap magnitude on the ith Fermi surface. Here 1 is the closest and 3 is the farthest from the Dirac point. They are represented in a dimensionless form by dividing them with the Debye frequency of the thin film metal. We used the dimensionless parameter μ̃_b to represent the chemical potential. The tunneling strength and the spin-orbit coupling strength of the TI surface are all fixed with the same numerical values as in the single Fermi surface case. We numerically solved the coupled set of superconducting gap equations given in Eqn.<ref> to arrive at these results. We see a much-anticipated enhancement in the superconducting gap magnitude as the interaction strength is increased. We alsonotice that the magnitude of the superconducting gap is substantially larger compared to the single Fermi surface case for a given strength of interaction. This is because there is a larger number of Fermi electrons involved in the interaction for the three Fermi surface cases, leading to an enhancement in the superconducting order.§.§.§ Dependence on the spin-orbit coupling strength Here we shall study the evolution of the superconducting order on the helical hybrid bands as a function of the spin-orbit coupling strength of the TI surface. As we did in the previous part, the Dirac point of the TI surface is fixed at quantum-well resonance with the N=1 transverse band of the thin film. The bulk interaction strength is fixed at λ̃^bulk = 0.39. As before, the tunneling strength is fixed at t_d = 0.2 eV. The spin-orbit coupling strength is expressed in a dimensionless form given by ṽ = A_0/ħ c. The logic here is that for a given SOC strength A_0, the Dirac velocity of the surface fermions is given by v = A_0/ħ. So tuning the SOC strength is equivalent to tuning the Dirac velocity of the surface fermions.We study the SOC dependence for the two cases separately: when the Fermi level is set to a single Fermi surface and when the Fermi level is set to three Fermi surfaces. Even though we expect a monotonic increase in the superconducting gap as the SOC strength is decreased due to the obvious increase in the density of states, we shall find here that it is not the case. The change in the hybrid band structure has huge consequences on the renormalization factors Z_iwhich substantially affects the pairing interaction. Single Fermi surface Here we shall study the evolution of the pairing gap as a function of the spin-orbit coupling parametrized by ṽ at different values of μ̃_b. 
Since the magnitude of the superconducting gap in our case is mostly decided by the density of states at the Fermi level and the renormalization factor Z_3, we have plotted both of them as a function of ṽ. This helps us better understand the behavior of Δ^hbd as ṽ is tuned. The density of states at the Fermi level has the following definition, 𝒩^hbd = ∫ d^2k/(2π)^2 δ(ϵ^hbd_k,b,+ - μ). By reducing ṽ, we expect the density of states at the Fermi level to increase, thus enhancing the superconductivity. But here we shall find that this is not always the case, as is evident from Fig.<ref>. Here we plotted Δ^hbd as a function of ṽ at two different values of the dimensionless chemical potential μ̃_b. We find that the pairing gap increases when ṽ is reduced, reaches a peak, and then decreases to zero in the flat band limit when the chemical potential is μ̃_b = 0.50 or μ̃_b = 0.75. But when the chemical potential is very low (μ̃_b = 0.25), the peak is reached only when ṽ ≈ 0. This rather surprising result has to do with the renormalization factor in the interaction constant. It essentially gives the weight of the given electronic state on the thin film side of the interface. Its definition is given in Eqn.<ref>. Here Z_3 is defined as the renormalization factor of the electrons on the Fermi surface. Since the Dirac point is in resonance with the thin film transverse band, Z_3 is exactly 1/2 at k = 0. But if the Fermi momentum is much greater than zero, then the renormalization factor changes from 1/2. This is equivalent to detuning away from resonance. If the hybrid band is adiabatically connected to the thin film band at large k, then Z_3 → 1 at large Fermi momentum. On the other hand, if the hybrid band is connected to the surface Dirac cone, then Z_3 → 0 at large Fermi momentum. This change in the renormalization factor can substantially affect the magnitude of the SC gap. So what we observe here is essentially an interplay between the density of states at the Fermi level and the renormalization factor of the electronic states on the thin film side. The density of states increases with decreasing ṽ in a monotonic fashion for any value of μ̃_b. This is evident from the density of states plot in Fig.<ref>(b). The density of states increases in a power-law fashion in both cases of the chemical potential as ṽ is lowered. On the other hand, the renormalization factor Z_3 decreases as ṽ is lowered (see Fig.<ref>(c)). This can be explained in the following way: here the Fermi level crosses the positive helicity branch of the bottom band (band index (b,+)). Consider the large ṽ limit, which is defined as the limit when Z_3(ṽ) > 1/2. In this limit, the electrons in this band are adiabatically connected to the thin film band at large k, where they are out of resonance. So if the Fermi level crosses this band at large k, then Z_3 ≈ 1. Also, notice that the range of momentum states around the Dirac point which experience strong hybridization decreases as ṽ is increased. As a result of these two factors, one can see why Z_3 increases when ṽ is increased. On the other hand, in the limit of ṽ when Z_3(ṽ) < 1/2, the hybrid band under consideration (band index (b,+)) is adiabatically connected to the non-interacting surface Dirac cone. This is the reason why Z_3 → 0 as ṽ → 0. At Z_3(ṽ) = 1/2, the electrons in the Fermi surface are in quantum-well resonance. The variation in Z_3 will be more substantial for cases with higher chemical potential than those with lower ones.
Due to the higher chemical potential, the Fermi electrons are detuned away from resonance and hence the Z_3 factor will be different from 1/2. This is the reason why we see a peak in the pairing gap for μ̃_b = 0.50, 0.75(fig.<ref>)(a). On the other hand,Z_3 ∼ 1/2 for μ̃_b = 0.25 at all values of ṽ, implying that the electrons lying in the Fermi surface are in quantum-well resonance throughout. As a result, the monotonic behavior of the density of states 𝒩^hbd is also reflected in the evolution of the pairing gap. Three Fermi surfacesHere we study the evolution of the p-wave pairing gaps on the three FSs as a function of ṽ. Just like in the previous case of a single FS, the tunneling strength and the thin film's material parameters are kept fixed.Note that since the tunneling results in a 2-level splitting of the top and bottom bands at k=0 by a factor of 2t_d, the effective chemical potential μ_b(defined in Eqn.<ref>) should be of the order of 2t_d or greater than that to realize a three Fermi surface model. In other words, the dimensionless parameter μ̃_b≥ 2. See the energy spectrum in fig.<ref> for details. In our calculations, we fixthe effective chemical potential at μ̃_b = 2.25. We fix the tunneling strength at t_d = 0.2 eV. The results are shown in Fig.<ref>(a). Here we plotted the p-wave superconducting gaps on the three FSs as a function of the SOC strength of the TI surface, represented in a dimensionless form as ṽ (defined in Eqn.<ref>). As before, we numerically solved the self-consistent superconducting gap equations defined in Eqn.<ref> to calculate the pairing amplitudes on the three Fermi surfaces. The pairing gaps have been represented in a dimensionless form by dividing it with the Debye frequency of the thin film metal. In Fig.<ref>(b), we plotted the density of states at the Fermi level for each Fermi surface as a function of ṽ. The definitions are given by,𝒩^hbd_1 = ∫d^2k/(2π)^2δ(ϵ^hbd_k,t,+ - μ) 𝒩^hbd_2 = ∫d^2k/(2π)^2δ(ϵ^hbd_k,t,- - μ) 𝒩^hbd_3 = ∫d^2k/(2π)^2δ(ϵ^hbd_k,b,+ - μ)where 𝒩^hbd_i(i=1,2,3) implies the density of states at the Fermi surface indexed by i with i=1 being the closest to the Dirac point. In Fig.<ref>(c), we plotted the renormalization factor Z_i(i=1,2,3) of the three Fermi surfaces. We studied the variation of the renormalization factors of the Fermi electrons on each Fermi surface as a function of ṽ. Similar to what we saw in the single Fermi surface case, the magnitude of the SC gaps on the three Fermi surfaces is determined by the interplay of the electron density of states at the Fermi level and the renormalization factors Z_i. One can notice here by observing the Figs.<ref>(a) and <ref>(c) that it is the Z-factors in three FSs that play the dominant role here. To realize a three-Fermi surface model, we require μ̃_b ≥ 2. Thus the Fermi momentum of the 2nd and 3rd FSs are already much greater than zero. Thus the tunneling effect on these Fermi electrons becomes lesser and lesser significant as the spin-orbit coupling strength is tuned up, no matter what the absolute value of the tunneling strength is. In addition, we also notice that the two Fermi surfaces get closer with increasing ṽ.This is also reflected in the magnitude of the pairing gap. We find here that |^hbd_2-^hbd_3| → 0 as ṽ→ 1. One can notice here that the triplet component of the pairing amplitude, which is proportional to the difference in the pairing amplitude on the positive and negative helicity branches for a given k, vanishes as a result. 
Thus as ṽ→ 1, the tunneling effect on the two Fermi surfaces is negligible, effectively leading to a trivial singlet pairing order on the two Fermi surfaces which essentially overlaps. On the other hand, the electrons on the 1st Fermi surface have their Z-factor nearly equal to 1/2, implying the electronic states are near resonance even if we increase ṽ. This is because the Fermi momentum is very close to zero. But notice here that the density of states 𝒩_1 is nearly zero as ṽ is increased. This implies that the superconducting gap is dominated by the scattering of Cooper pairs from the other two Fermi surfaces, rather than the intra-band scattering.When ṽ is decreased, we are effectively moving toward the flat band limit of the TI surface. The density of states at each hybrid Fermi surface shows a monotonic increase as expected. However, this is not reflected in the SC gap magnitude. We find here that the pairing amplitude on the third Fermi surface vanishes in the limit ṽ→ 0. On the other hand, the pairing amplitudes on the first and the second Fermi surfaces converge. That is, we observe that |^hbd_1-^hbd_2| → 0 as ṽ→ 0. This implies that the two Fermi surfaces overlap to form the trivial thin QW band and the superconductivity on them will turn out to be of the trivial s-wave order. Since the superconductivity on the third Fermi surface vanishes as ṽ→ 0, the topological superconductivity is absent in the flat band limit.So in conclusion, we explored the evolution of the pairing gaps as a function of the SOC strength on the three Fermi surfaces at a fixed chemical potential and tunneling strength. We found that in the limit of large ṽ(ṽ→ 1), the second and the third Fermi surfaces overlap and the pairing on them is of spin-singlet order. The SC pairing on the innermost Fermi surface still maintains the p-wave character. Thus the topological character is still maintained. In the limit when ṽ→ 0, we found that the electrons in the 3rd Fermi surface lie entirely on the TI surface side. Hence they are effectively non-interacting. The first and the second Fermi surfaces overlap and we effectively have singlet pairing superconductivity on them. Hence in the flat band limit, the hybrid is no longer topological. § THE LARGE N LIMITHere we consider the situation when the thin film band which is in quantum-well resonance with the surface Dirac point has its band index N very much greater than one. Physically, this limit can be realized by increasing the thickness of the thin film. This is because, the energy difference between the successive quantum well bands, |ϵ_k,n - ϵ_k,n-1| ∝ 1/d^2. In this situation, given that the Fermi level is adjusted close to the Dirac point, there will be N - 1 off-resonance degenerate thin film bands crossing the Fermi level. Hence after hybridization, we shall have 2N - 2 off-resonance Fermi surfaces plus one or three hybrid Fermi surfaces.When N ≫ 1, we anticipate that the dominant contribution to the superconducting gap on the hybrid bands is coming from the scattering of the singlet pair of electrons from the trivial thin film Fermi surfaces. The pairing between the helical fermions of the hybrid bands will only have a negligible effect on the pairing gap on off-resonance thin film bands in this limit. Effectively, one can describe this limit as equivalent to an external s-wave pairing field acting on the hybrid bands. 
So this is similar to the well-known superconducting proximity effect but in momentum space. In the first part, we shall derive an analytical expression for the pairing gap on the hybrid Fermi surface(s) by employing the large N approximation. Using this, we essentially study how far the interaction between the hybrid fermions can enhance the superconducting gap on the hybrid Fermi surface. In the last part of this section, we show that the momentum space proximity effect smoothly transforms into the real space proximity effect in the perturbative limit of tunneling. The surface interaction only gives a higher-order correction to the proximity-induced superconducting gap. §.§ Momentum space proximity effect Consider the case when the Fermi level is adjusted such that it crosses just a single hybrid Fermi surface. So we have 2N - 2 off-resonance Fermi surfaces and one hybrid Fermi surface. The exact gap equation in the limit when ω_D ≪ μ is given in Eqns.<ref>, <ref>. In the large N limit, we can make substantial simplifications to arrive at an analytical expression. Recall that in all our calculations, we considered the attractive interaction in the thin film to be mediated by confined phonons, as explained in section <ref>A. But as N → ∞, which is attained by increasing the film thickness, it is a good approximation to replace the confined phonons with the bulk phonons. This essentially makes the interaction potential V^n,n'_k,p isotropic. In the limit when the thickness d → ∞, the interaction potential defined in Eqn.<ref> attains the following isotropic form, V^n,n'_k,p ≈ (G^2_fp/d)(1 + δ_n,n'/2) θ(ω_D - ξ^tf_k) θ(ω_D - ξ^tf_p), where δ_n,n' is the Kronecker delta. Since the interaction potential is isotropic, the superconducting gap will also turn out to be the same on all the thin film QW bands. Now we shall plug this back into Eqn.<ref>. Also, in the large N limit, scattering of Cooper pairs from the hybrid Fermi surface will have only a negligible effect on the s-wave thin film superconducting gap. This means the second term on the LHS of Eqn.<ref> is neglected. With all these approximations, we obtain the following simple analytical form for the thin film s-wave superconducting gap, Δ^tf ≈ 2ω_D exp[-d/(G^2_fp 𝒩^tf (N - 1/2))], where Δ^tf_n = Δ^tf_n' = Δ^tf, ∀ n, n' ≤ N. Here we used Δ^tf for the s-wave superconducting gap on the thin film bands. 𝒩^tf = m/(2πħ^2) is the density of states at the Fermi level of a thin film transverse band, given that the electronic dispersion is quadratic. Now let us plug this back into the gap equation for the magnitude of the effective p-wave superconducting order parameter on the hybrid Fermi surface. After doing some algebra, we get, Δ^hbd = Z_3 Δ^tf / (1 - λ̃^hbd ln(2ω_D/Δ^hbd)), where λ̃^hbd = 𝒥^3,3 𝒩^hbd. Here λ̃^hbd is the dimensionless coupling strength of the interaction between the helical hybrid fermions. 𝒩^hbd is the density of states at the hybrid Fermi surface. 𝒥^3,3, defined in Eqn.<ref>, is the renormalized interaction potential between the hybrid fermions. Z_3 in the numerator is the renormalization factor of the hybrid Fermi electrons (defined in Eqns.<ref>, <ref>). This factor comes from the scattering matrix element 𝒦^n,3 that determines the scattering of singlet pairs of electrons from the off-resonance thin film Fermi surfaces to the hybrid Fermi surface. Let us analyze the large N result given in Eqn.<ref> more carefully. The numerator and the denominator come from different sources.
The numerator is essentially the contribution to the superconducting gap due to the scattering of singlet pairs of electrons from the off-resonance thin film Fermi surfaces. The denominator is due to the attractive interaction between the helical hybrid fermions. Hence it is this term that actually results in the Cooper instability on the hybrid Fermi surface. The numerator could open up a gap but does not lead to an actual Cooper instability. The numerator in the above expression is analogous to the proximity-induced superconductivity observed in several TI-SC heterostructures <cit.>. In the proximity effect, the superconducting gap opens up on the Dirac cone due to the tunneling of Cooper pairs across the junction. The difference here is that the coefficient Z_3 is nearly equal to 1/2. In fact, as we shall demonstrate soon, the numerator turns out to be the proximity-induced superconducting gap in the perturbative limit of tunneling. The difference we notice in the resonance regime is that we observe an enhancement in the superconducting gap due to the attractive interaction between the helical hybrid fermions. The amount of enhancement is determined by the coupling strength λ̃^hbd. Fig.<ref> below illustrates this enhancement effect on the p-wave pairing gap due to the interaction between the helical fermions. To do this, we defined the following functions, y_hbd(x) = 1 - λ̃^hbd ln(2/x) and y_prxmt(x) = Z_3 Δ̃^tf/x. Here we replaced Δ^hbd/ω_D in Eqn.<ref> by a variable x, and Δ^tf is also represented in a dimensionless form as Δ̃^tf = Δ^tf/ω_D. y_hbd(x) is the contribution to the pairing gap due to the interaction between hybrid fermions. y_prxmt(x) is the contribution due to the momentum-space proximity effect. The actual value of x is found by solving the equation y_hbd(x) = y_prxmt(x). We shall call the actual solution x_0. One can call the solution of the equation y_prxmt(x) = 1 the proximity limit of the superconductivity. This would have been the actual solution if the coupling constant λ̃^hbd = 0. Then we plotted the function y_hbd(x) at different values of the coupling constant λ̃^hbd in Fig.<ref>. Here we find that as the coupling constant is increased, the crossing point moves farther away from the proximity limit. This shows strong evidence of enhancement in the superconducting order due to the interaction between hybridized fermions. To further emphasize this enhancement effect due to the interaction between the hybrid fermions, we solved the equation y_hbd(x) = y_prxmt(x) and plotted the resulting superconducting order parameter magnitude Δ^hbd as a function of the hybrid coupling constant λ̃^hbd. The results are shown in Fig.<ref>. Here the red dashed lines are the proximity limit of the superconductivity obtained by solving y_prxmt = 1, while the black dashed lines give the BCS limit of the hybrid FS given by y_hbd(x) = 0. The enhancement due to the surface interaction exists even in the weakly interacting limit. As λ̃^hbd approaches unity, we find that the order parameter attains an exponential form. But there are practical limitations in enhancing λ̃^hbd to the strongly interacting limit. The interaction potential 𝒥^3,3 is predetermined by the bulk coupling constant of the thin film. At resonance, it is of the form 𝒥^3,3 = Z^2_3 V^N,N ≈ V^N,N/4, which means it is always less than 𝒦^n,3 for any n. So the only tunable parameter is the density of states at the Fermi level, given by 𝒩^hbd.
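A compact numerical illustration of this crossing construction is given below in Python. It locates the solution x_0 of y_hbd(x) = y_prxmt(x) for a few values of λ̃^hbd and compares it with the proximity limit obtained when λ̃^hbd = 0; the values assumed for Z_3 and Δ̃^tf are illustrative placeholders rather than the parameters of the figures.

import numpy as np
from scipy.optimize import brentq

# Illustrative parameters (assumed): resonance value of the renormalization factor
# and a dimensionless thin-film gap Delta^tf / omega_D.
Z3  = 0.5
dtf = 0.05

def y_hbd(x, lam_hbd):
    return 1.0 - lam_hbd * np.log(2.0 / x)

def y_prxmt(x):
    return Z3 * dtf / x

def crossing(lam_hbd):
    # Solve y_hbd(x) = y_prxmt(x) for x = Delta^hbd / omega_D; the difference is
    # monotonically increasing in x, so the root inside the bracket is unique.
    return brentq(lambda x: y_hbd(x, lam_hbd) - y_prxmt(x), 1e-6, 2.0)

proximity_limit = Z3 * dtf   # solution of y_prxmt(x) = 1, i.e. the lam_hbd = 0 case
for lam in (0.0, 0.1, 0.2, 0.3):
    print(f"lam_hbd = {lam:.1f}:  Delta^hbd / omega_D = {crossing(lam):.4f}"
          f"   (proximity limit {proximity_limit:.4f})")

Already at λ̃^hbd = 0.2-0.3 the crossing point lies severalfold above the proximity limit, while the λ̃^hbd = 0 case reproduces it exactly.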
If the energy dispersion of the hybrid band is linear when it crosses the Fermi level, then 𝒩^hbd = μ_b/2π A^2_b(Refer to Eqn.<ref>). Ideally, one could tune down the SOC strength of the TI surface to enhance the surface interaction. But as we discussed in section <ref>B, reducing the SOC strength will detune the Fermi electrons away from resonance for a fixed chemical potential, driving the Fermi surface back to the perturbative limit of tunneling. In short, what we like to convey here is that there are practical limitations in increasing the coupling strength λ̃^hbd. So in the large N limit, the dominant contribution to the superconducting gap on the hybrid Fermi surface comes from the momentum space proximity effect due to the off-resonance thin film bands. There is an enhancement due to the Cooper instability on the hybrid Fermi surface, but that is not very substantial compared to the proximity effect.§.§ Perturbative limit of tunneling: Connection to the Fu-Kane model Here we shall consider the perturbative limit of tunneling by detuning away from the quantum-well resonance of the TI-thin film hybrid. Our objective here is to show that the momentum space proximity effect discussed in the previous section transforms into the real-space superconducting proximity effect in the perturbative limit of tunneling. The perturbative regime is characterized by the limit δ̃≫ 0. Here δ̃ is the dimensionless detuning parameter at k=0 defined in Eqn.<ref>. So for convenience, we shall define a new parameter to study the perturbative limit given by,t̃ = 1/δ̃where we can call t̃ as the dimensionless tunneling strength. This quantity essentially gives the probability amplitude of an electronic state in the thin film side to tunnel to the TI surface and vice versa.In the perturbative regime, the single-particle hybridization effects are negligible. This implies that we should treat the surface fermions and the thin film fermions separately. This is evident from the discussions we had in section IV regarding the Z-effect. There we saw that on tuning δ̃→ -∞, the top hybrid band transforms to the surface Dirac cone and the bottom hybrid band transforms to the thin film band. Correspondingly Z^b approaches unity while Z^t approaches zero. It happens the other way when δ̃→ +∞. Now we shall see how the expression for ^hbd derived in the large N limit at quantum-well resonance(see Eqn.<ref>) changes when detuned to the perturbative limit. We shall be studying the perturbative limit for the case when δ̃≥ 0. But the qualitative conclusions do not change when δ̃≤ 0 also. If the Fermi momentum of the surface Dirac cone is very small, then Z_3 is essentially equal to Z^b defined in Eqn.<ref>. For clarity, let us rewrite the expression again here. When Fermi momentum of the surface Dirac cone k_F ≈ 0,Z_3 = Z^b(δ̃) = 1/2( 1 - δ̃/√(1 + δ̃^2))Now expanding Z_3 in powers of t̃, we arrive at,Z_3 = t̃^2 + 𝒪(t̃^4)Thus Z_3 scales as t̃^2 in the perturbative limit of tunneling. Recall that the coupling strength λ̃^hbd determines the interaction between the surface fermions. Since there is no hybridization in this limit, let us call λ̃^hbd as λ̃^surf. This is to emphasize that the coupling constant determines the attractive interaction strength between the surface fermions. Since the interaction potential is proportional to the square of the Z factor, we see that in the perturbative limit, λ̃^surf = αt̃^4where α = V^N,N𝒩^surf. Here N is the index of the thin film band that is closest to the TI surface. 
𝒩^surf is the density of states at the Fermi level of the surface Dirac cone. Plugging this back into Eqn.<ref>, the expression for the superconducting gap at the surface Dirac cone, when expanded in powers of t̃, has the form, Δ^surf ≈ t̃^2 Δ^tf [1 + α t̃^4 ln(2ω_D/(t̃^2 Δ^tf)) + ....]. It is straightforward to see that the first term is exactly the gap opening on the Dirac cone due to the superconducting proximity effect. Since the first term is proportional to the square of the tunneling strength, it has the dominant effect on the SC gap magnitude on the surface. The second term is the lowest-order correction to the gap magnitude due to a possible Cooper instability on the TI surface. We can see here that it has only a negligible contribution to the SC gap opening in the weak tunneling limit. In conclusion, by tuning our effective theory to the perturbative limit of tunneling, we could make connections to the Fu-Kane proposal. The momentum-space proximity effect we discovered in the large N limit at resonance transforms smoothly into the real-space proximity effect in the perturbative limit of tunneling. We also found that even in the perturbative limit, there is still an effective attractive interaction between surface fermions mediated by the thin film phonons. But this effect is so weak that the dominant contribution to the superconducting gap at the TI surface comes from the proximity effect. § GENERAL N DEPENDENCE In the previous sections, we studied the superconducting phase of the TI-thin film hybrid in the two extreme limits of N: the N = 1 limit and the large N limit. Here we shall probe the superconducting order parameter on the hybrid Fermi surfaces as a function of N. Tuning N is implemented by increasing the thin film thickness. For each N, the thickness is further fine-tuned so that the Dirac point of the TI surface is at quantum-well resonance with the Nth band of the thin film. So essentially we are studying the thickness dependence of the superconducting order parameter when the hybrid is fine-tuned to quantum-well resonance. Given that the hybrid is at quantum-well resonance for a given N, the following three quantities play a significant role as N is tuned: the thin film interaction potential matrix V^n,n' (n, n' are thin film band indices), the number of off-resonance thin film Fermi surfaces (equal to 2N - 2 for a given N) and the effective tunneling strength t_d. Recall from Eqn.<ref> that the thin film interaction potential scales as 1/d as a function of thickness. So even for a fixed bulk coupling constant λ̃^bulk, the interaction potential in the thin film decreases as a consequence of the electron confinement. But this is compensated by the increase in the number of bands that cross the Fermi level as the thickness is tuned. This results in a jump in the superconducting order parameter each time a new band crosses the Fermi level. These two features have been studied extensively in the context of thin film superconductivity in previous works <cit.>. Recall from Eqn.<ref> that the electron confinement in the thin film leads to a 1/√(d) scaling of the tunneling strength. Thus, the effect of tunneling decreases with increasing thickness. Even though we would still see a splitting of the energy state at the Dirac point, the magnitude of the splitting substantially decreases at large N. Hence the evolution of the superconducting order parameters on the hybrid Fermi surface(s) as a function of N will be a result of the interplay of these three factors.
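A rough preview of this interplay can be obtained from the large N expressions of the previous section. The Python sketch below combines the thin film gap Δ^tf ≈ 2ω_D exp[-d/(G^2_fp 𝒩^tf (N - 1/2))] with the hybrid-gap relation Δ^hbd(1 - λ̃^hbd ln(2ω_D/Δ^hbd)) = Z_3 Δ^tf, retaining two of the three ingredients (the 1/d decay of the interaction potential and the growing number of off-resonance bands), holding Z_3 = 1/2 at resonance and approximating the resonance thickness by d ∝ N; all prefactors are assumed illustrative values, not the Pb parameters used below.

import numpy as np
from scipy.optimize import brentq

# Assumed illustrative prefactors (dimensionless; omega_D = 1 sets the energy unit).
omega_D = 1.0
g    = 2.0     # effective bulk pairing strength, so the per-band coupling is g / d
d1   = 1.0     # thickness of the N = 1 film; resonance is approximated by d = d1 * N
Nhbd = 0.5     # density of states on the hybrid Fermi surface (held fixed)
Z3   = 0.5     # renormalization factor at quantum-well resonance

def thin_film_gap(N):
    # Delta^tf ~ 2 omega_D exp[-d / (G^2_fp N^tf (N - 1/2))], with G^2_fp N^tf -> g.
    return 2.0 * omega_D * np.exp(-d1 * N / (g * (N - 0.5)))

def hybrid_gap(N):
    # Solve Delta^hbd (1 - lam_hbd ln(2 omega_D / Delta^hbd)) = Z3 Delta^tf,
    # with J^{3,3} ~ V^{N,N} / 4 and V^{N,N} ~ g / d.
    lam_hbd = 0.25 * (g / (d1 * N)) * Nhbd
    rhs = Z3 * thin_film_gap(N)
    f = lambda x: x * (1.0 - lam_hbd * np.log(2.0 * omega_D / x)) - rhs
    return brentq(f, 1e-10, 2.0 * omega_D)

for N in (1, 2, 4, 8, 16):
    print(f"N = {N:2d}:  Delta^tf = {thin_film_gap(N):.4f},  Delta^hbd = {hybrid_gap(N):.4f}")

In this toy model the hybrid gap grows modestly as the off-resonance bands appear and then saturates, qualitatively anticipating the single hybrid Fermi surface results that follow.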
We shall study the N dependence for the single hybrid Fermi surface and three hybrid Fermi surfaces separately. For numerical calculations, we used the material parameters of Pb for the thin film. The spin-orbit coupling strength of the TI surface is fixed at A_0 = 1.5 eVÅ. §.§.§ Single hybrid Fermi surface Fig.<ref> shows the results when the Fermi level is tuned to one hybrid Fermi surface. Here the dimensionless effective chemical potential μ̃_b (see Eqn.<ref>) is fixed at μ̃_b = 0.25. Note that fixing μ̃_b requires fine-tuning the Fermi level every time N is increased. This is because the tunneling strength changes with thickness and μ̃_b = μ_b/t_d. So since we keep μ̃_b fixed, the absolute value of the chemical potential is not constant and changes with N. The p-wave superconducting gap on the hybrid Fermi surface for a given N is found by solving the coupled self-consistent gap equations given in Eqns.<ref>, <ref> numerically. We calculated the SC order parameter value for N values ranging from 1 to 10 by fine-tuning the thickness to quantum-well resonance for each N. Here Δ^tf0 (grey) is the s-wave superconducting order parameter on the Nth transverse band of the thin film before the tunneling was turned on. This can be found easily using the same set of coupled equations by simply setting the tunneling strength to zero. Here we find an enhancement in the gap magnitude as N is increased from one. But from N=3 onwards, we find that the order parameter saturates to a constant value, which is a fraction of the thin film gap magnitude. This implies that the superconducting order on the hybrid Fermi surface approaches the large N limit right from N=2 onwards. From our discussions in the previous section on the large N limit, we can conclude that the superconducting order from N=2 onwards is dominated by the scattering of singlet pairs of electrons from the off-resonance thin film bands. So to conclude, at intermediate N we find an enhancement in the pairing gap due to the off-resonance thin film Fermi surfaces, which start appearing as N is increased from one. At large N, the superconducting gap saturates to a constant value and is fixed by the thin film superconducting gap through the momentum space proximity effect. §.§.§ Three hybrid Fermi surfaces Fig.<ref> shows the results when the Fermi level is tuned to the three hybrid Fermi surface limit. From our previous discussion of the three hybrid Fermi surface model in the N=1 limit, we understand that μ̃_b ≥ 2 is required to realize this model. Hence we set μ̃_b = 2.25 for all N. We solve the coupled self-consistent equations given in Eqns.<ref>, <ref> numerically for a given N at quantum-well resonance. Δ^hbd_i gives the magnitude of the p-wave superconducting order parameter on the ith hybrid Fermi surface, with i=1 being the closest one to the Dirac point. Unlike the single Fermi surface case, here the three superconducting order parameters decrease with increasing N and then saturate to a constant value. Since we have three hybrid Fermi surfaces in the N=1 limit, the density of states is already high compared to the single Fermi surface case. So in this case, it is the 1/d scaling of the interaction potential, rather than the increase in the number of off-resonance bands, that has the dominant effect on the superconducting order in the intermediate N limit. As N is increased, we observe that the superconducting gaps on the second and the third Fermi surfaces start converging to the thin film gap value. This can be attributed to the 1/√(d) scaling of the tunneling strength.
The tunneling gets weaker as N is increased so that the electrons lying away from k=0 experience only a perturbative effect. This is evident from the energy spectrum of the hybrid bands in the N=1 and the N=10 limits shown in Fig.<ref>.As a result, the second and the third Fermi surfaces overlap and become degenerate. So the triplet component of the order parameter in the Zeeman basis cancels out and we are left with a trivial s-wave superconducting order on these two Fermi surfaces. In short, the two Fermi surfaces essentially became off-resonance. But the pairing gap on the first Fermi surface is still of p-wave symmetry. Hence the hybrid is still in the topological phase.So to conclude, the superconducting order parameter on the three hybrid Fermi surfaces decreases with increasing N at intermediate values of N. This is a result of the 1/d scaling of the interaction potential. As N is increased further, it is only the Fermi surface closest to the Dirac point that exhibits topological superconductivity. The other two Fermi surfaces which turn out to be at the off-resonance overlap and hence the superconducting order on them becomes trivial s-wave-like. § CONCLUSIONIn this paper, we proposed a TI-thin film hybrid as a practical platform to realize a system with attractively interacting surface fermions. By depositing the thin film on top of the TI surface, we essentially allowed the surface electrons to be exported to the interacting thin film. We found that for a given thin film and the topological insulator, when the surface fermions resonate with the quantum-well states of the thin film, the interaction between surface fermions is maximally enhanced. Then we studied the superconductivity of these resonating hybrid states in the N=1 limit. In this limit, we effectively have a four-band model of interacting helical hybrid fermions. By fine-tuning the Fermi level in this limit, we showed that it is possible to construct an effective low-energy theory of a single flavor of 2-component Dirac fermions subject to attractive interaction, whose quantum critical point possesses emergent supersymmetry(SUSY). Then we studied the evolution of the superconducting gapas a function of the interaction strength of the thin film and the effective speed of light of the surface fermions. We find an enhancement of the superconducting gap when the interaction strength is increased. On the other hand, the evolution of the superconducting gap as the TI surface is tuned to the flat band limit is rather non-monotonic. We showed that when the Fermi level is tuned to the single Fermi surface limit, as a result of the interplay between the density of states at the Fermi level and the renormalization factor in the interaction strength Z_3, the superconducting gap shows a peak at an intermediate value of ṽ and then dies off to zero in the flat band limit. But if the effective chemical potential μ̃_b ≈ 0, the peak is seen in the flat band limit.We also showed that in the large-N limit, the superconductivity of the resonating hybrid fermions is dominated by the scattering of the singlet pair of electrons from the off-resonance thin film bands. This effect is similar to the superconducting proximity effect but in the momentum space. However, interaction among the surface fermions can further enhance the superconducting gap.In the strongly interacting limit of the surface, the enhancement effect can be very significant. 
We also studied the general N dependence of the superconducting gap on the resonating helical hybrid bands. We found that when the Fermi level is tuned to three hybrid Fermi surfaces, the dominating effect is the 1/d scaling of the thin film interaction potential. The consequence of this scaling relation is that at resonance, the attractive interaction between the surface fermions is also at its maximum when N = 1. Apart from the theoretical interest in realizing a ground state of attractively interacting surface fermions, the proposed model also has practical applications in the context of Majorana-based quantum computation. Given that at resonance, the topological superconductivity is observed in the thin film side of the interface also, enhances the feasibility of experimental detection<cit.>. Moreover, the amplitude of the superconducting order can be systematically adjusted by manipulating either the material's intrinsic properties or the geometric dimensions, as thoroughly discussed within the confines of this article. Such findings could pave the way for tangible advancements in quantum information technologies. §First, let us project the Hamiltonian to the d^†_k,N,t(b)|0⟩ states.This is made possible by the unitary transformation d_k = U_kΓ_k, N given in Eqn.<ref>. Here the 2-component thin film spinor c_k, N can be projected out of the 4-component Γ using the relation c_k,N = 1+ σ_z/2Γ_k, N. Putting these two relations together, we get a relation connecting the c basis with the d basis. Then the singlet pair creation operator in the thin film basis c^†_k, Ns_yc^† T_-k, N transforms as: c^†_k,N(-is_y)c^† T_-k,N =d^†_k U_k1+ σ_z/2(-is_y)1+ σ_z/2U^T_-kd^† T_-k= ([ d^†_k,t d^†_k,b ]) ([cos^2θ_k/2 (-is_y)- cosθ_k/2sinθ_k/2 (-is_y);; - sinθ_k/2cosθ_k/2(-is_y)sin^2θ_k/2 (-is_y) ])([ d^† T_-k,t; d^† T_-k,b ]) where d^†_k,t(b) = ([ d^†_k,t(b), ↑ d^†_k, t(b),↓ ]) are the 2-component spinors in the spin-1/2 space representing the creation operators of the top(bottom) band. cosθ_k/2 and sinθ_k/2 are nothing but the projection of the 'top' and 'bottom hybrid states into the thin film state. That is,cosθ_k/2 = ⟨0|c_k,Nd^†_k,t|0⟩sinθ_k/2 = ⟨0|c_k,Nd^†_k,b|0⟩ Remember that both these matrices have off-diagonal elements in the laboratory spin basis due to the induced spin-orbit coupling on these bands. The exact expression of cosθ_k is given in Eqn.<ref>. The off-diagonal elements in the above matrix suggest the possibility of inter-band pairing. Since we are only interested in the weak pairing limit where only the pairing between the Fermi electrons is considered, the inter-band pairing does not occur in this limit. The weak-pairing approximation allows us to treat the pair creation operators for the top and bottom bands separately. Let us define P̂_t and P̂_b as the pair creation operators for the top and bottom bands respectively. We have, P̂_k,t =d^†_k,tcos^2θ_k/2 (-is_y) d^† T_-k,t P̂_k,b =d^†_k,bsin^2θ_k/2 (-is_y) d^† T_-k,b Due to the induced helical spin structure of the hybrid bands, the corresponding single-particle Hamiltonian is diagonal in the helicity basis. As we said before, in the weak-pairing limit, the study of interaction will be easier if we project the interaction Hamiltonian also into the helicity basis. 
To implement this, let us write down the unitary matrix in the spin-1/2 space that can rotate the coordinates from the laboratory spin basis to the helicity basis,d^†_k,t(b) = a^†_k,t(b)Π^†_k, Π_k = 1/√(2)([ 1 1;e^iϕ_k -e^iϕ_k ]) Now we shall plug this back into the set of pair creation operators defined above. Here we observe that the matrices cos^2 θ_k/2 and sin^2 θ_k/2 are diagonal in the helicity basis. This is because the only way an off-diagonal term can appear in these matrices is through the spin-orbit coupling term of the TI surface. With this information and after doing some algebra, we get, P̂_k,t = ([ a^†_k,t,+ a^†_k,t,- ]) ([ Z^t_k,+ 0; 0 - Z^t_k,- ])([ e^-iϕ_k a^†_-k,t,+; e^-iϕ_k a^†_-k,t,- ]) P̂_k,b = ([ a^†_k,b,+ a^†_k,b,- ]) ([ Z^b_k,+ 0; 0 - Z^b_k,- ])([ e^-iϕ_k a^†_-k,b,+; e^-iϕ_k a^†_-k,b,- ]) Z^t_k,± = 1/2( 1 + δ_k,±/√(δ^2_k,± + t^2/d))Z^b_k,± = 1/2( 1 - δ_k,±/√(δ^2_k,± + t^2/d)) δ_k,± = 1/2(ϵ^tf_k,N - ϵ^surf_k,±).*
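As a consistency check on these definitions, the Python sketch below evaluates Z^t_k,± and Z^b_k,± for an assumed quadratic thin film subband whose bottom is tuned to the Dirac point and a linear surface dispersion; the band parameters are placeholders rather than the Pb/TI values of the main text. It confirms that Z^t_k,± + Z^b_k,± = 1 for each helicity and that both factors equal 1/2 wherever the detuning δ_k,± vanishes, i.e. at quantum-well resonance.

import numpy as np

# Placeholder band parameters (illustrative only).
hbar2_2m = 3.81    # eV A^2, hbar^2 / (2m) of the thin film subband
eps_N0   = 0.0     # subband bottom tuned to the Dirac point, so delta = 0 at k = 0
A0       = 1.5     # eV A, surface spin-orbit coupling strength
t_eff    = 0.05    # eV, effective tunneling t / sqrt(d)

k = np.linspace(0.0, 0.2, 5)   # momenta in 1/A

def detuning(k, helicity):
    # delta_{k,±} = (eps^tf_{k,N} - eps^surf_{k,±}) / 2
    eps_tf   = eps_N0 + hbar2_2m * k**2
    eps_surf = helicity * A0 * k
    return 0.5 * (eps_tf - eps_surf)

def Z_factors(k, helicity):
    d = detuning(k, helicity)
    root = np.sqrt(d**2 + t_eff**2)
    Zt = 0.5 * (1.0 + d / root)   # weight of the top hybrid band on the thin film side
    Zb = 0.5 * (1.0 - d / root)   # weight of the bottom hybrid band on the thin film side
    return Zt, Zb

for h, label in ((+1, "+"), (-1, "-")):
    Zt, Zb = Z_factors(k, h)
    print(f"helicity {label}:  Z^t = {np.round(Zt, 3)}  Z^b = {np.round(Zb, 3)}  sum = {np.round(Zt + Zb, 3)}")

At k = 0 both factors are 1/2, the quantum-well resonance condition used throughout, while away from k = 0 the weight shifts towards whichever side the hybrid state is adiabatically connected to.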
http://arxiv.org/abs/2310.17847v1
{ "authors": [ "Saran Vijayan", "Fei Zhou" ], "categories": [ "cond-mat.supr-con", "cond-mat.str-el" ], "primary_category": "cond-mat.supr-con", "published": "20231027015646", "title": "Realizing attractive interacting topological surface fermions: A resonating TI- thin film hybrid platform" }
1Department of Mathematics, University of Manchester, Oxford Road, Manchester M13 9PL, UK 2Department of Mechanical Engineering, University of California, Santa Barbara, CA 93106, USA 3Andlinger Center for Energy and the Environment, Princeton University, Princeton, NJ 08544, USA =============================================================================================================== Recognising that surfactants may impede the drag reduction resulting from superhydrophobic surfaces (SHSs), and that surfactant concentrations can fluctuate in space and time, we model the unsteady transport of soluble surfactant in a channel bounded by two SHSs. The flow is laminar, pressure-driven, and the SHSs are periodic in the streamwise and spanwise directions. We assume that the channel length is much longer than the streamwise period, the streamwise period is much longer than the channel height and spanwise period, and bulk diffusion is sufficiently strong for cross-channel concentration gradients to be small. By combining long-wave and homogenisation theories, we derive an unsteady advection–diffusion equation for surfactant flux transport over the length of the channel, which is coupled to a quasi-steady advection–diffusion equation for surfactant transport over individual plastrons. As diffusion over the length of the channel is typically small, the leading-order surfactant flux is governed by a nonlinear advection equation that we solve using the method of characteristics. We predict the propagation speed of a bolus of surfactant and describe its nonlinear evolution via interaction with the SHS. The propagation speed can fall significantly below the average streamwise velocity as the surfactant adsorbs and rigidifies the plastrons. Smaller concentrations of surfactant are therefore advected faster than larger ones, so that wave-steepening effects can lead to shock formation in the surfactant-flux distribution. These findings reveal the spatio–temporal evolution of the slip velocity and enable prediction of the dynamic drag reduction and effective slip length in microchannel applications. Marangoni convection, drag reduction, microfluidics § INTRODUCTION Surfactants are chemical compounds that are advected and diffuse throughout a fluid, where they adsorb onto liquid–liquid or liquid–gas interfaces <cit.>. They have been shown to impair the effective slip length and drag reduction in superhydrophobic microchannels <cit.>. Superhydrophobic surfaces (SHSs) use chemically-coated microscopic structures to suspend a fluid over a series of gas pockets <cit.>. The combination of no-slip structures and shear-free liquid–gas interfaces generates the drag reduction for microchannel flows. Hence, SHSs have been considered for applications in biofluidics <cit.>, heat transfer <cit.> and marine hydrodynamics <cit.>, both in laminar and turbulent flows. Field studies have shown that surfactant is present in the ocean and that the surfactant concentration can vary significantly in space and time <cit.>. Traces of surfactant have been measured in rivers, estuaries and fog <cit.>. They are also present in most industrial and laboratory environments <cit.>. Surfactants that have been absorbed onto the liquid–gas interfaces of SHSs are advected downstream by the flow and accumulate at stagnation points (i.e.
liquid–solid contact lines), generating an adverse Marangoni force that may negate any drag-reducing effects in laminar <cit.> or turbulent <cit.> flows. In order to better understand how environmental surfactants may compromise drag reduction by SHSs in applications, this paper addresses unsteady surfactant transport in a laminar pressure-driven channel flow bounded between streamwise- and spanwise-periodic SHSs, investigating how surfactant is advected and diffuses over length scales and time scales that are large compared to the dimensions of the SHS texture.Experimental studies first suggested that naturally-occurring surfactants could affect channel flows bounded by SHSs comprising spanwiseridges <cit.>, as well as finite-length streamwise ridges <cit.>.They found that the flow rate and wall shear stress closely resembled a channel with solid walls, and thus their SHSs offered only a modest drag reduction <cit.>. <cit.> showed that experimentally-measured slip lengths on SHSs consisting of pillars were much smaller than predicted by surfactant-free simulations; this was true whether surfactant was explicitly added or not, suggesting that naturally-occurring surfactants played a key role. As noted earlier, a requirement for surfactant effects to manifest on SHSs is the presence of stagnation points perpendicular to the flow, at which surfactant can accumulate to generate a surface tension gradient.<cit.> showed that surface tension gradients emerged in their experiments for finite streamwise ridges, increasing the drag compared to those configurations with concentric ridges that lack stagnation points perpendicular to the flow.To investigate the effect of weak surfactant concentrations, <cit.> introduced simulations inclusive of surfactant dynamics; they showed that a plastron interface could be immobilised by concentrations below levels commonly occurring in the environment and in engineered systems.For example, <cit.> showed that polydimethylsiloxane (PDMS) surfaces can release uncrosslinked oligomers that behave as surfactants.The simulations of <cit.> also predicted that surfactant impairment would decrease as the streamwise plastron interface length increased; this was confirmed by their experiments <cit.>.These experimentsalso showed that, if the driving pressure was suddenly removed, a reverse flow established at the interface, decaying with time as 1/t at intermediate times.This time scaling was predicted by a similarity solution driven by surfactant relaxation, assuming advection-dominated flow.In contrast to these plastron-scale findings, there is presently no theory that includes the combined effects of solubility, advection and diffusion, that describes inhomogeneous surfactant transport acrossmultipleplastrons, or that can model the effects of unsteady surfactant concentration at the inflow.Steady scaling theories were constructed for a pressure-driven channel flow with two-dimensional gratings <cit.>, as well as for long gratings with finite spanwise extent, assuming spatially periodic flow <cit.>. Both theories are in agreement with the slip velocity and drag predicted in full numerical simulations.<cit.> further validated their theory by performing experiments with SHS gratings of various lengths, finding that surfactant effects decrease with the square of the interface length. 
These theories assume that the surfactant concentration is small (as may be expected when surfactant is not explicitly added) and that the shear stressis approximately uniform at the liquid-gas interface.They do not consider the stagnant cap regime, first exhibited in air bubbles rising in surfactant-contaminated water <cit.>.To examine effects of non-uniform shear stresses at the liquid–gas interface, <cit.> considered gratings of finite spanwise extent, and assumed that bulk diffusion was strong enough to suppress cross-channel concentration gradients, allowing systematic asymptotic approximations to be developed.Several dimensionless groups were identified by <cit.> and <cit.> that influence the drag in superhydrophobic channels. <cit.> showed that surfactant impairment in their simulations and experiments was well predicted by a single dimensionless group, when the surfactant properties, SHS dimensions and flow velocities are constrained within physically realizable ranges. Using these physical constraints, scaling analysis provided the dimensionless group as the ratio between the streamwise length of the interface and a surfactant-determined lengthscale, labelled “mobilization length”. Without constraints on surfactant or flow properties, <cit.> found several other relevant dimensionless groups by calculating asymptotic solutions for the concentration field and drag across the whole parameter space; these depend on a velocity scale generated by interfacial Marangoni effects, the surfactant diffusivity and the flow rate. Another dimensionless group found by <cit.> can be used to predict whether the surfactant concentration field is in the stagnant-cap regime. Both <cit.> and <cit.> assumed that the velocity and bulk concentration fields are steady and spatially periodic. That is, they did not allow surfactant to enter the channel with a non-uniform distribution that varies in space and time <cit.> over multiple periods. To address this case, we use multiscale homogenisation techniques <cit.> to study the time- and space-varying effects of surfactant over the whole SHS, without needing to numerically resolve the small details over each texture period, which can be computationally very expensive.We show how a time-dependent one-dimensional asymptotic theory, derived from the three-dimensional Stokes and surfactant transport equations, can be adapted to describe the unsteady evolution of slip and drag in a laminar pressure-driven channel flow with streamwise- and spanwise-periodic grooves, allowing for time-dependent distributions of surfactant flux at the channel inlet. The problem exhibits multiple length and time scales, which we exploit to derive and solve a quasi-steady advection–diffusion problem for surfactant concentration over moderate length scales (i.e. the streamwise period of the SHS) and an unsteady advection–diffusion problem for surfactant flux over long length scales (i.e. the streamwise length of the channel), whilst assuming that bulk diffusion is strong enough for cross-channel concentration gradients to be small <cit.>.The surfactant concentration transport equations are nonlinear and of mixed hyperbolic-parabolic type; the unsteady evolution of the surfactant flux over the length of the channel is predominantly hyperbolic, allowing the formation of shocks. 
The problem possesses a number of distinct asymptotic regimes, which we exploit to reveal how the shocks forming in the space- and time-dependent surfactant-flux distribution affect the slip length and drag reduction.The slip length and drag reduction are key quantities of interest for practical applications that can be shown to satisfy their own unsteady partial differential equations over long length scales. We predict the propagation speed of a disturbance to the surfactant flux and investigate how excess surfactant can be advected out of the channel to maximise the time-averaged drag reduction for laminar microchannel applications. Furthermore, by investigating the unsteady transport of surfactant in laminar channel flows, the theory developed here is a step towards describing the unsteady transport of surfactant in unsteady turbulent boundary-layer flows over SHSs, for applications in marine hydrodynamics, for instance. The paper is arranged as follows.In <ref>, the problem is formulated and homogenised to derive an unsteady advection–diffusion equation for surfactant flux transport through the channel.At leading order, we derive a purely advective transport equation for the surfactant flux, valid at the channel scale. In <ref>, results are presented for the surfactant flux, drag reduction, propagation speed, slip velocity and concentration field.We describe the parameter space and identify regions of high and low drag reduction.We detail results for two (bell-shaped) canonical distributions of surfactant flux.In particular, one profile induces a transition between the high and low drag reduction regions of the parameter space, giving rise to shock formation.We study these cases using both theoretical and numerical methods, providing closed-form asymptotic predictions of drag reduction. In <ref>, we summarise and discuss our main results.We provide a table with closed-form asymptotic predictions for the flux propagation speed in the different parts of the parameter space.These predictions are expressed both using relevant non-dimensional and dimensional parameters, and are intended as a useful guide for applications. § FORMULATION §.§ Governing equations We consider a laminar pressure-driven fluid flow, contaminated with soluble surfactant, in a channel bounded between two SHSs that are periodic in the streamwise and spanwise directions, as illustrated in figure <ref>. We use hats to indicate dimensional quantities. The streamwise, wall-normal and spanwise directions are denoted by x̂-, ŷ- and ẑ-coordinates, where x̂ = (x̂, ŷ, ẑ) is the space vector and t̂ is time. Assuming that the fluid is incompressible and Newtonian, we define the velocity vector û=(û(x̂, t̂), v̂(x̂, t̂), ŵ(x̂, t̂)), pressure field p̂(x̂, t̂), bulk surfactant concentration field ĉ(x̂, t̂) and interfacial surfactant concentration field Γ̂(x̂, ẑ, t̂). The streamwise length of the channel is 2 L̂_x and the periodic cell has streamwise (spanwise) period length 2 P̂_x (2 P̂_z), liquid–gas interface length (width) 2 ϕ_x P̂_x (2 ϕ_z P̂_z) and gas fraction ϕ_x (ϕ_z). The channel height is 2Ĥ.The SHSs are made up of 2N+1 periodic cells in the streamwise direction.For n ∈{-N, ..., N}, the nth periodic cell is split into two subdomains along the streamwise direction, similarly to <cit.>,𝒟̂^n_1= {x̂- 2nP̂_x∈ [-ϕ_xP̂_x, ϕ_xP̂_x]}×{ŷ∈ [0,2 Ĥ]}×{ẑ∈ [- P̂_z,P̂_z]},𝒟̂^n_2= {x̂- 2nP̂_x∈ [ϕ_x P̂_x,(2 - ϕ_x)P̂_x]}×{ŷ∈ [0,2 Ĥ]}×{ẑ∈ [- P̂_z,P̂_z]}. 
At the SHS, ŷ = 0 and ŷ= 2Ĥ, we define the nth interface, ridge and solid region, asℐ̂^n= {x̂ - 2nP̂_x ∈ [-ϕ_xP̂_x,ϕ_xP̂_x]}×{ẑ∈ [- ϕ_z P̂_z,ϕ_z P̂_z]},ℛ̂^n= {x̂- 2nP̂_x ∈ [-ϕ_xP̂_x, ϕ_xP̂_x]}×{ẑ∈ [- P̂_z, -ϕ_z P̂_z]∪ [ϕ_z P̂_z, P̂_z]},𝒮̂^n= {x̂- 2nP̂_x ∈ [ϕ_xP̂_x,(2 - ϕ_x)P̂_x]}×{ẑ∈ [- P̂_z,P̂_z]}. The steady equations that govern the fluid and surfactant in each periodic cell are described in detail by <cit.>.Here, we highlight differences due to the unsteady transport of surfactant, whilst allowing the concentration of surfactant to vary over the long length scale and slow time scale associated with the channel.The bulk surfactant is coupled to the steady incompressible flow through an unsteady advection–diffusion equation.In 𝒟̂_1^n and 𝒟̂_2^n,equation∇̂·û = 0, μ̂∇̂^2 û- ∇̂p̂ = 0, D̂∇̂^2 ĉ - û·∇̂ĉ- ĉ_t̂ = 0, a–cwhere μ̂ is dynamic viscosity and D̂ is the surfactant bulk diffusivity.The interfacial surfactant is coupled to the flow through an unsteady advection–diffusion equation and a linear equation of state, such that (σ̂_x̂,σ̂_ẑ) = (-ÂΓ̂_x̂, -ÂΓ̂_ẑ), where σ̂ is the surface tension and  is the surface activity <cit.>. At ŷ = 0 (ŷ= 2Ĥ) and along ℐ̂^n, the boundary and bulk–interface coupling conditions for surfactant and flow and the interfacial surfactant transport equation are equationμ̂n·∇̂û - ÂΓ̂_x̂ = 0, v̂=0, μ̂n·∇̂ŵ - ÂΓ̂_ẑ = 0, D̂n·∇̂ĉ - K̂_a ĉ + K̂_d Γ̂ =0,D̂_I ( Γ̂_x̂x̂ + Γ̂_ẑẑ ) + K̂_a ĉ - K̂_d Γ̂ -(ûΓ̂)_x̂ - (ŵΓ̂)_ẑ- Γ̂_t̂ = 0, a–e where n is the unit normal to the interface (pointing into the channel), D̂_I is the surfactant interfacial diffusivity, K̂_a is the adsorption rate and K̂_d is the desorption rate.At ŷ = 0 (ŷ = 2Ĥ) on ∂ℐ̂^n, there is no flux of surfactant equation ûΓ̂ - D̂_I Γ̂_x̂ = 0 atx̂ = ±ϕ_x P̂_x, ŵΓ̂ - D̂_I Γ̂_ẑ = 0 atẑ = ±ϕ_z P̂_z. a, bAt ŷ = 0 (ŷ = 2Ĥ) along ℛ̂^n∪𝒮̂^n, the flow and surfactant boundary conditions are equationû = 0, v̂ =0, ŵ = 0, ĉ_ŷ =0. a–dThroughout 𝒟̂^n_1 and 𝒟̂^n_2, we assume that the flow and concentration fields are periodic in the spanwise directions,q̂(x̂, ŷ, -P̂_z) = q̂(x̂, ŷ, P̂_z).Across interfaces between 𝒟̂_1^n, 𝒟̂_2^n and 𝒟̂_1^n+1 the flow and concentration fields are assumed to be continuous between subdomains,q̂(((2n + ϕ_x)P̂_x)^-, ŷ, ẑ)= q̂(((2n + ϕ_x)P̂_x)^+, ŷ, ẑ ),q̂(((2(n+1) -ϕ_x)P̂_x)^-, ŷ, ẑ)= q̂(((2(n+1) - ϕ_x )P̂_x)^+, ŷ, ẑ ), where we have defined q̂ = (û,v̂,ŵ,ĉ). We can integrate (<ref>)–(<ref>) across the channel to derive equations relating the bulk and interfacial flux of fluid and surfactant.The unsteady surfactant transport equations model how the bulk and interfacial surfactant fluxes change as surfactants adsorb and desorb at the liquid–gas interface and the concentration field evolves in time,∫_𝒜̂_nĉ_t̂ d + d/dx̂∫_𝒜̂_n (ûĉ - D̂ĉ_x̂)d - 2 ∫_ℐ̂_n (K̂_d Γ̂ - K̂_a ĉ)dẑ = 0 in𝒟_1^n,∫_ℐ̂_nΓ̂_t̂ dẑ + d/dx̂∫_ℐ̂_n (ûΓ̂ - D̂_I Γ̂_x̂)dẑ +∫_ℐ̂_n (K̂_d Γ̂ - K̂_a ĉ)dẑ = 0 in𝒟_1^n,∫_𝒜̂_nĉ_t̂ d + d/dx̂∫_𝒜̂_n (ûĉ - D̂ĉ_x̂)dÂ= 0 in𝒟_2^n. 
Here, ∫_𝒜̂_n· dÂ≡∫_ẑ=-P̂_z^P̂_z∫_ŷ=0^2Ĥ· dŷ dẑ and ∫_ℐ̂_n· dẑ≡∫_ẑ=-P̂_z^P̂_z· dẑ for x̂ - 2nP̂_x ∈ [-ϕ_xP̂_x, (2 - ϕ_x)P̂_x].For a steady flow driven in the streamwise direction, the cross-channel integrated streamwise velocity field, referred to hereafter as the flux of fluid, Q̂, is uniform along the length of the channel,Q̂ = ∫_𝒜̂_nû dÂ.In contrast, the cross-channel integrated total flux of surfactant, referred to hereafter as the flux of surfactant, K̂=K̂(x̂,t̂), can vary along the length of the channel due to unsteady effects, according to equation ∫_𝒜̂_nĉ_t̂ d + 2 ∫_ℐ̂_nΓ̂_t̂ dẑ + K̂_x̂ = 0 in𝒟̂_1^n, ∫_𝒜̂_nĉ_t̂ d + K̂_x̂ = 0 in𝒟̂_2^n, a, bwhere we have reformulated (<ref>) and definedK̂ = ∫_𝒜̂_n (ûĉ - D̂ĉ_x̂)d + 2 ∫_ℐ̂_n (ûΓ̂ - D̂_I Γ̂_x̂)dẑin𝒟̂_1^n,K̂ = ∫_𝒜̂_n (ûĉ - D̂ĉ_x̂)dÂin𝒟̂_2^n. We also define K̂_m = max(K̂(x̂, 0)) to be the maximum initial surfactant flux along the length of the channel. Defining the cross-channel-averaged pressure drop per period Δ_n p̂(t̂) ≡⟨p̂⟩((2n-ϕ_x) P̂_x) - ⟨p̂⟩((2(n+1)-ϕ_x)P̂_x) > 0 where ⟨·⟩≡∫_ẑ=-P̂_z^P̂_z∫_ŷ=0^2 Ĥ· dŷ dẑ / (4 P̂_z Ĥ) is the cross-channel average, we can define the normalised drag reduction over the nth cell asDR_n(t̂) = Δ_n p̂_I - Δ_n p̂/Δ_n p̂_I - Δ_n p̂_U,where Δ_n p̂ = Δ_n p̂_I when the liquid–gas interface is immobilised by surfactant and is no-slip (DR_n=0) and Δ_n p̂ = Δ_n p̂_U when the liquid–gas interface is unaffected by surfactant and is shear-free (DR_n=1).§.§ Non-dimensionalisation and scalingsIn table <ref>, we summarise the different length, time and velocity scales of interest in the transport problem described in (<ref>)–(<ref>) and figure <ref>, assuming that the channel has an order-one cross-channel aspect ratio Ĥ∼P̂_z, but small channel-height-to-streamwise-period ratio ϵ = Ĥ/P̂_x ≪ 1 and small streamwise-period-to-channel-length ratio ℰ = P̂_x/L̂_x ≪ 1. Defining ϵÛ = Q̂/(Ĥ^2) as a velocity scale, P̂ = μ̂Û/Ĥ as a pressure scale, Ĉ = K̂_m/Q̂ as a bulk concentration scale and Ĝ = K̂_a Ĉ/K̂_d as an interfacial concentration scale, we non-dimensionalise (<ref>)–(<ref>) using multiple timeand spatial scales: equation t = t̂/ϵP̂_x / Û,T = t̂/P̂_x / ϵÛ, τ = t̂/P̂_x / (ϵℰÛ), x_⊥ = x̂_⊥/ϵP̂_x,x = x̂/P̂_x, χ = x̂/P̂_x / ℰ,u_⊥ = û_⊥/Û,u = û/ϵÛ,p = p̂/P̂,c = ĉ/Ĉ, Γ = Γ̂/Ĝ,K = K̂/K̂_m, a–lwhere x̂_⊥≡ (ŷ,ẑ) and û_⊥≡ (v̂, ŵ). In this paper, we focus on the distinguished limit in which ℰ = λϵ^2 as ϵ→ 0, where λ is an O(1) constant (this scaling clarifies the asymptotics and is reasonable from an applications point of view). This non-dimensionalisation yields a long-wave theory with rapid cross-channel transport of surfactant over each period, which will be homogenised to describe the slow transport of surfactant over multiple periods.For n ∈{-N, ..., N}, the nth periodic cell becomes, in dimensionless form (using quantities without hats), 𝒟_1^n= {x- 2n∈ [-ϕ_x,ϕ_x]}×{y∈ [0,2]}×{z ∈ [- P_z,P_z]},𝒟_2^n= {x- 2n∈ [ϕ_x,2 - ϕ_x]}×{y∈ [0,2]}×{z ∈ [- P_z,P_z]}, where P_z = P̂_z/Ĥ.At y = 0 (y= 2), the regions of the SHS are given by ℐ^n= {x - 2n ∈ [-ϕ_x,ϕ_x]}×{z ∈ [- ϕ_z P_z,ϕ_z P_z]},ℛ^n= {x- 2n∈ [-ϕ_x,ϕ_x]}×{z ∈ [- P_z,-ϕ_z P_z]∪ [ϕ_z P_z,P_z]},𝒮^n= {x- 2n∈ [ϕ_x,2 - ϕ_x]}×{z ∈ [- P_z,P_z]}. 
The length of the channel becomes 2 L_x = 2 L̂_x / P̂_x = 2/ℰ = 2 / (λϵ^2).We then assume that the flow and surfactant variables are functions of the short length scale and rapid time scale (x_⊥ and t respectively), moderate length scale and intermediate time scale (x and T respectively) and long length scale and slow time scale (χ and τ respectively), where these six variables are treated as independent of each other. In 𝒟_1^n and 𝒟_2^n, the incompressible Stokes and surfactant transport equations in (<ref>) become ϵ^2 (u_x + λϵ^2 u_χ) + ∇_⊥·u_⊥ = 0,ϵ^2 (u_xx + 2λϵ^2 u_xχ +λ^2 ϵ^4 u_χχ) + ∇^2_⊥ u -p_x - λϵ^2 p_χ = 0,ϵ^2 (u_⊥ xx + 2λϵ^2 u_⊥ xχ +λ^2 ϵ^4 u_⊥χχ) + ∇^2_⊥u_⊥ -∇_⊥ p = 0, (ϵ^2 (c_xx + 2 λϵ^2 c_xχ +λ^2 ϵ^4 c_χχ)+ ∇^2_⊥ c)/ - ϵ^2 u (c_x + λϵ^2 c_χ) - u_⊥·∇_⊥ c- c_t - ϵ^2(c_T +λϵ^2 c_τ) = 0, with = ÛĤ/D̂ the bulk Péclet number, ∇_⊥≡ (∂_y, ∂_z) and ∇^2_⊥≡∂_yy + ∂_zz.At y = 0 (y=2) and along ℐ^n, the boundary conditions for flow and surfactant, the coupling conditions, and the interfacial surfactant transport equations in (<ref>) give n·∇ u -(Γ_x + λϵ^2 Γ_χ) = 0,v =0,n·∇ w - Γ_z = 0,n·∇ c - Da (c -Γ) = 0,(ϵ^2 (Γ_xx + 2λϵ^2 Γ_xχ +λ^2 ϵ^4 Γ_χχ) + Γ_zz )/_I- ϵ^2 (u Γ)_x - λϵ^4 (u Γ)_χ- (w Γ)_z - Γ_t - ϵ^2(Γ_T +λϵ^2 Γ_τ) - ( c - Γ) = 0, with = ÂĜ/μ̂Û the Marangoni number, = K̂_a Ĥ/D̂ the Damköhler number, _I= ĤÛ/D̂_I the interfacial Péclet number and = K̂_d Ĥ/Û the Biot number.At y = 0 (y=2) on ∂ℐ^n, the no-flux interfacial surfactant boundary conditions in (<ref>) become equationu Γ - (Γ_x + λϵ^2 Γ_χ)/_I = 0 at x = ±ϕ_x,w Γ - Γ_z/_I = 0 atz = ±ϕ_z P_z. a, bAt y = 0 (y=2)along ℛ^n∪𝒮^n, the no-flux bulk flow and bulk surfactant boundary conditions in (<ref>) give equation u = 0,v = 0,w =0,c_y =0. a–dDefining q = (u, v, w, c), across 𝒟_1^n and 𝒟_2^n, the spanwise (<ref>) and streamwise (<ref>a) continuity conditions for the flow and surfactant becomeq(x, y, -P_z, t, T, χ, τ)= q(x, y, P_z, t, T, χ, τ),q((2n + ϕ_x)^-, y, z, t, T, χ, τ)= q((2n + ϕ_x)^+, y, z, t, T, χ, τ). The streamwise flow and surfactant continuity condition for q between one cell and the next, i.e. between 𝒟_2^n and 𝒟_1^n+1, in (<ref>b) is replaced by a stronger assumption to allow the use of homogenisation theory <cit.>, namely that q is a periodic function of the moderate length scale x, such thatq(2n -ϕ_x, y, z, t, T, χ, τ) = q(2(n + 1) - ϕ_x, y, z, t, T, χ, τ).Slow variations of flow properties from cell to cell will be accommodated via dependence of the flow and surfactant variables on the long length scale χ and slow time scale τ. The bulk and interfacial surfactant fluxes (<ref>) satisfy ∫_𝒜_n (c_t + ϵ^2(c_T + λϵ^2 c_τ)) dA + ϵ^2 d/dx∫_𝒜_n( u c -c_ x/ -λϵ^2 c_χ/)d A + λϵ^4 d/dχ∫_𝒜_n( u c -c_ x/ - λϵ^2c_χ/)d A - 2 /∫_ℐ_n ( Γ -c)d z = 0 in𝒟_1,∫_ℐ_n (Γ_t + ϵ^2(Γ_T + λϵ^2 Γ_τ))d z+ ϵ^2 d/d x ∫_ℐ_n (u Γ - Γ_x/_I- λϵ^2 Γ_χ/_I)d z+ λϵ^4 d/dχ∫_ℐ_n (u Γ - Γ_x/_I- λϵ^2 Γ_χ/_I)d z + ∫_ℐ_n( Γ -c) d z = 0 in𝒟_1,∫_𝒜_n (c_t + ϵ^2(c_T + λϵ^2 c_τ)) dA + ϵ^2 d/dx∫_𝒜_n( u c -c_ x/ - λϵ^2c_χ/)d A + λϵ^4 d/dχ∫_𝒜_n( u c -c_ x/ - λϵ^2 c_χ/)d A = 0in𝒟_2, where ∫_𝒜_n· dA ≡∫_z=-P_z^P_z∫_y=0^2H· d yd z and ∫_ℐ_n· dz ≡∫_z=-P_z^P_z· d z for x -2n ∈ [ϕ_x, 2 - ϕ_x]. 
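To fix ideas, the short sketch below evaluates the aspect ratios ϵ and λ and the dimensionless groups defined above (the bulk and interfacial Péclet numbers, the Marangoni, Damköhler and Biot numbers) from a set of dimensional inputs. Every numerical value is an illustrative placeholder, loosely motivated by microchannel scales, and is not a parameter taken from this study.

```python
# Sketch: evaluate the aspect ratios and dimensionless groups defined above from
# dimensional inputs.  Every numerical value below is an illustrative assumption
# (loosely motivated by microchannel scales), not data from this study.

# geometry (m)
H     = 1e-4      # channel half-height scale, \hat{H}
P_x   = 1e-3      # streamwise period, \hat{P}_x
P_z_d = 1e-4      # spanwise half-period, \hat{P}_z
L_x_d = 1e-1      # channel half-length, \hat{L}_x

# flow, fluid and surfactant properties (placeholders)
Q     = 1e-12     # volume flux, \hat{Q} (m^3/s)
mu    = 1e-3      # dynamic viscosity, \hat{mu} (Pa s)
D     = 1e-9      # bulk surfactant diffusivity, \hat{D} (m^2/s)
D_I   = 1e-9      # interfacial surfactant diffusivity, \hat{D}_I (m^2/s)
A_hat = 1e-3      # surface activity, \hat{A}
K_a   = 1e-5      # adsorption rate, \hat{K}_a (m/s)
K_d   = 1e-2      # desorption rate, \hat{K}_d (1/s)
K_m   = 1e-14     # maximum initial surfactant flux, \hat{K}_m (mol/s)

# aspect ratios and the distinguished limit E = lam * eps^2
eps = H / P_x                 # channel-height-to-period ratio
E   = P_x / L_x_d             # period-to-channel-length ratio
lam = E / eps**2              # assumed O(1) in the distinguished limit

# velocity and concentration scales of the non-dimensionalisation
U = Q / (eps * H**2)          # eps * U = Q / H^2
C = K_m / Q                   # bulk concentration scale
G = K_a * C / K_d             # interfacial concentration scale

# dimensionless groups quoted in the text
Pe   = U * H / D              # bulk Peclet number
Pe_I = U * H / D_I            # interfacial Peclet number
Ma   = A_hat * G / (mu * U)   # Marangoni number
Da   = K_a * H / D            # Damkohler number
Bi   = K_d * H / U            # Biot number

print(f"eps = {eps:.3g}, lambda = {lam:.3g}")
print(f"Pe = {Pe:.3g}, Pe_I = {Pe_I:.3g}, Ma = {Ma:.3g}, Da = {Da:.3g}, Bi = {Bi:.3g}")
```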
In 𝒟_1^n and 𝒟_2^n, the flux of fluid (<ref>) is given by∫_𝒜_n ud A = 1.The flux of surfactant, K = K(x, t, T,χ,τ), is related to changes in the bulk and surface concentration via (<ref>), which becomes∫_𝒜_n (c_t + ϵ^2(c_T + λϵ^2 c_τ))dA + 2/∫_ℐ_n (Γ_t + ϵ^2(Γ_T + λϵ^2 Γ_τ))dz+ ϵ^2(K_x + λϵ^2 K_χ) = 0 in𝒟_1^n,∫_𝒜_n (c_t + ϵ^2(c_T + λϵ^2 c_τ))d A + ϵ^2(K_x + λϵ^2 K_χ) = 0 in𝒟_2^n, where the flux of surfactant (<ref>) is given byK= ∫_𝒜_n ( u c -c_ x/ - λϵ^2 c_χ/)d A + 2/∫_ℐ_n (u Γ - Γ_x/_I - λϵ^2 Γ_χ/_I)d zin𝒟_1^n,K= ∫_𝒜_n ( u c -c_ x/ - λϵ^2c_χ/)d A in𝒟_2^n, and max(K(x, t, T,χ,τ)) = 1 at t = T = τ = 0. The normalised drag reduction (<ref>) over the nth periodic cell becomesDR_n(t, T,χ,τ) = Δ^n p_I - Δ^n p/Δ^n p_I - Δ^n p_U,where Δ^n p≡⟨ p⟩(2n-ϕ_x) - ⟨ p⟩(2(n+1)-ϕ_x) and ⟨·⟩≡∫_z=- P_z^ P_z∫_y=0^2· d y d z/ (4 P_z). §.§ Asymptotic homogenisationWe assume that ∼_I ∼∼ O(1) and ∼∼ O(ϵ^2) in the limit ϵ≪ 1, so that bulk–surface exchange is comparable to advection, diffusion and Marangoni effects in 𝒟_1^n and 𝒟_2^n for n ∈{-N, ..., N}.As discussed in <cit.>, this scaling means that we arrive at the most general form of the surfactant transport equations with moderate exchange, whereas, if we had assumed that ∼∼ O(1), then we would arrive at a sublimit with strong exchange. In the limit ϵ→ 0, we rescale = ϵ^2 ℬ and = ϵ^2 𝒟, where ℬ and 𝒟 are positive O(1) constants. We then substitute the asymptotic expansion[ u; v; w; p; c; Γ; K ] = [ u_0; v_0; w_0; p_0; c_0; Γ_0; K_0 ] + ϵ^2 [ u_1; v_1; w_1; p_1; c_1; Γ_1; K_1 ] + ϵ^4 [ u_2; v_2; w_2; p_2; c_2; Γ_2; K_2 ] + ...,into (<ref>)–(<ref>).The leading-order, first-order and second-order problems are addressed in <ref>–<ref> respectively. §.§.§ Leading-order problemIn the leading-order problem, we simplify the dependence of the velocity, pressure, and concentration field on the space and time variables in (<ref>). In 𝒟_1^n and 𝒟_2^n, streamwise gradients of the velocity and bulk concentration are small compared to cross-channel gradients.Hence, cross-channel diffusion balances advection and unsteady effects in the bulk equation, through the two-dimensional problem equation∇_⊥·u_0⊥ = 0, ∇^2_⊥u_0 -∇p_0 = 0, ∇^2_⊥ c_0/ - u_0⊥·∇_⊥ c_0- c_0t = 0. a–cAt y = 0 (y = 2) and along ℐ^n, streamwise gradients of the streamwise velocity and surface concentration are small compared to spanwise gradients.Hence, spanwise diffusion balances advection and unsteady effects in the interfacial equation, via equationn·∇ u_0 - Γ_0x = 0, v_0 =0, n·∇ w_0 - Γ_0z = 0,n·∇ c_0 = 0, Γ_0zz/_I- (w_0 Γ_0)_z - Γ_0t = 0. a–eAt y = 0 (y=2) and on ∂ℐ^n,equation u_0 Γ_0 - Γ_0x/_I = 0 at x = ±ϕ_x,w_0 Γ_0 - Γ_0z/_I = 0 at z = ±ϕ_z P_z. a, bAt y = 0 (y=2) and along ℛ^n ∪𝒮^n,equation u_0 = 0,v_0 = 0, w_0 =0,c_0y =0. a–dAs there are no streamwise gradients of u_0 in (<ref>)–(<ref>), the two-dimensional problem does not capture inner regions near x=2n ±ϕ_x, governed by the three-dimensional Stokes equations to ensure continuity of u_0 across domains 𝒟_1^n and 𝒟_2^n.In 𝒟_1^n and 𝒟_2^n, the surfactant field evolves faster in time t than any changes to the flux of surfactant and bulk–surface exchange at leading-order, so that (<ref>)–(<ref>) giveequation ∫_𝒜_n c_0t dA = 0, ∫_ℐ_n Γ_0t d z= 0, ∫_𝒜_n u_0d A = 1. a–cAccording to (<ref>), the flux of surfactant, K_0 = K_0(x, t, T, χ,τ), is given by,K_0= ∫_𝒜_n( u_0 c_0 -c_0 x/)d A+ 2𝒟/ℬ∫_ℐ_n (u_0 Γ_0 - Γ_0x/_I)d zin𝒟_1^n, K_0= ∫_𝒜_n ( u_0 c_0 -c_0 x/)d A in𝒟_2^n. 
The leading-order solution can be expected to decay exponentially fast in time t to Γ_0=Γ_0(x, T, χ,τ), c_0 = c_0(x, T, χ,τ), p_0 = p_0(x, T, χ,τ), v_0=w_0=0 and K_0 = K_0(x, T,χ,τ) <cit.>, satisfying (<ref>). That is, at moderate (T) and slow (τ) time scales, the concentration field does not vary in the spanwise direction, and there are no concentration gradients to generate velocities in the cross-plane. Using linear superposition, we can decompose u_0 into a contribution from the streamwise pressure gradient p_0x which drives the flow and the streamwise interfacial concentration gradient Γ_0x which inhibits it owing to adverse Marangoni forces, via equation u_0 = Ũ p_0x + U̅Γ_0xin𝒟_1^n and u_0 = Ŭ p_0xin𝒟^n_2, a, bwhere the steady velocity profiles Ũ(y, z), U̅(y, z) and Ŭ(y, z) are described in <cit.> and satisfy a Poisson-type problem in the cross-section. Substituting (<ref>) into (<ref>c), we obtain relations between the volume flux, pressure gradient and interfacial surfactant gradient in 𝒟_1 and 𝒟_2, equationQ̃ p_0x + Q̅Γ_0x =1,q = q̃ p_0x + q̅Γ_0xin𝒟_1^n, Q̆ p_0x = 1 in𝒟_2^n, a–cwhere the fluxes Q̃, Q̅, Q̆, q̃, q̅ and q are described by <cit.>.§.§.§ First-order problemIn the first-order problem, we relate the surfactant concentration over individual plastrons to quantities that vary over the long length scale and slow time scale associated with the channel.Solvability conditions are imposed on the first-order problem to constrain u_0, c_0 and Γ_0.These conditions are provided by the conservation arguments that result in the surfactant transport equations at O(ϵ^2). Hence, (<ref>) gives∫_𝒜_n c_1t dA= - ∫_𝒜_n c_0T dA -d/dx∫_𝒜_n( u_0 c_0 -c_0 x/)d A+ 2 𝒟/∫_ℐ_n( Γ_0 -c_0)d zin𝒟_1,∫_ℐ_n Γ_1t d z= - ∫_ℐ_n Γ_0T d z - d/d x ∫_ℐ_n (u_0Γ_0 - Γ_0x/_I)d z-ℬ∫_ℐ_n( Γ_0 -c_0) d zin𝒟_1,∫_𝒜_n c_1t dA= - ∫_𝒜_n c_0T dA -d/dx∫_𝒜_n( u_0 c_0 -c_0 x/)d A in𝒟_2, and (<ref>) becomesK_1= ∫_𝒜_n ( u_0 c_1 + u_1 c_0 -c_1 x/ -λc_0 χ/)d A+ 2𝒟/ℬ∫_ℐ_n (u_0 Γ_1 + u_1Γ_0 - Γ_1x/_I- λΓ_0χ/_I)d zin𝒟^n_1, K_1= ∫_𝒜_n ( u_0 c_1 + u_1 c_0 -c_1 x/ -λc_0 χ/)d A in𝒟^n_2. To avoid secular growth of the net mass of surfactant in (<ref>), we require that the right-hand sides of (<ref>) are zero. Hence, the bulk and interfacial concentrations evolve over moderate time scales according to∫_𝒜_n c_0T dA + d/dx∫_𝒜_n( u_0 c_0 -c_0 x/)d A - 2 𝒟/∫_ℐ_n( Γ_0 -c_0)d z = 0 in𝒟_1^n,∫_𝒜_nΓ_0T dA + d/d x ∫_ℐ_n (u_0Γ_0 - Γ_0x/_I)d z +ℬ∫_ℐ ( Γ_0 -c_0) d z = 0 in𝒟_1^n,∫_𝒜_n c_0T dA + d/dx∫_𝒜_n( u_0 c_0 -c_0 x/)d A =0 in𝒟_2^n. Substituting the velocity and flux conditions (<ref>)–(<ref>) into the surfactant transport equations (<ref>) gives us ODEs that govern the unsteady advection, diffusion and exchange of surfactant over one period, θ c_0T + c_0x - α c_0xx - ν(Γ_0 - c_0)= 0 in𝒟_1^n,σΓ_0T + βΓ_0x - γ (Γ_0Γ_0x)_x - δΓ_0xx - ν(c_0 - Γ_0)= 0 in𝒟_1^n,θ c_0T + c_0x - α c_0xx = 0 in𝒟_2^n. We describe (<ref>) as the unsteady moderate-exchange equations.The steady-state problem was solved in <cit.>, where the transport coefficients α, β, γ, δ and ν (specified below) were defined in terms of physical and geometrical parameters and the fluxes Q̃, Q̅, q̃ and q̅. Combining (<ref>) with (<ref>) gives a set of constraints on the total surfactant flux over one period, θ c_0T + σΓ_0T + K_0x = 0,where K_0 = c_0 - α c_0x + βΓ_0 - γΓ_0Γ_0x - δΓ_0x in 𝒟_1^n,θ c_0T + K_0x = 0,whereK_0 = c_0 - α c_0x in 𝒟_2^n. 
We solve (<ref>)–(<ref>) subject to boundary conditions which enforce continuity and periodicity, of both the surfactant concentration and flux, between subdomainsc_0(2n+ϕ_x^-, T, χ, τ)= c_0(2n+ϕ_x^+, T, χ, τ),c_0(2n-ϕ_x, T, χ, τ)= c_0(2(n+1) - ϕ_x, T, χ, τ), K_0(2n+ϕ_x^-, T, χ, τ)= K_0(2n +ϕ_x^+, T, χ, τ), K_0(2n -ϕ_x, T, χ, τ)= K_0(2(n+1) - ϕ_x, T, χ, τ),[βΓ_0 - γΓ_0 Γ_0x - δΓ_0x](2n ±ϕ_x, T, χ, τ)= 0. In (<ref>)–(<ref>), we have introduced the following transport coefficients: α = 4 P_z/ (bulk diffusion);β = 2 𝒟q̃/ℬ Q̃ (partition coefficient); γ =2 𝒟(q̃Q̅/Q̃ - q̅)/ℬ (surfactant strength);δ = 4 ϕ_z P_z 𝒟/ℬ _I (surface diffusion); ν = 4ϕ_z P_z 𝒟/ (exchange strength); θ = 4 P_z (bulk capacitance); σ = 4 ϕ_z P_z 𝒟/ℬ(surface capacitance).The bulk (surface) diffusion coefficient α > 0 (δ>0) compares the strength of bulk (interfacial) streamwise diffusion to advection. The partition coefficient β > 0 characterises the distribution of the surfactant flux, where for β≫ 1 (β≪ 1) the interfacial (bulk) surfactant flux dominates. The surfactant strength γ > 0 characterises the impact of Marangoni stresses on the interfacial surfactant flux. The exchange strength ν > 0 compares the rate of adsorption to advection. The remaining parameters, θ and σ, are associated with time-dependent variations and were not reported in <cit.>. The bulk capacitance coefficient θ>0 characterises the transverse aspect ratio of the channel and specifies the bulk response to time-dependent changes in the surfactant flux.The surface capacitance σ>0 is the rescaled (by 4ϕ_z P_z) surfactant depletion depth L_d = 𝒟/(ℬ); σ captures the manner in which solubility regulates the surface response to gradients in the surfactant flux.The dependence of the transport coefficients on dimensional parameters will be discussed later in <ref>. We solve (<ref>)–(<ref>) subject to the initial condition c_0(x, 0,χ, 0) = 1 to illustrate the dependence of the bulk concentration field on the intermediate time scale T in figure <ref>(a); convergence to a steady state for different values of θ = σ is illustrated using c_0(ϕ_x, T,χ,τ) in figure <ref>(b). The initially uniform concentration falls to an equilibrium state, periodic over the unit cell, in which the positive gradient (c_0x> 0) in 𝒟_1 generates an interfacial stress opposing the mean flow. The time taken to reach a steady state increases with θ and σ.However, as our primary objective is to investigate surfactant transport over the full length of the channel, we assume that the leading-order solution is close to equilibrium and the concentration field no longer depends on the intermediate time T, i.e. Γ_0 = Γ_0(x,χ,τ) and c_0 = c_0(x,χ,τ).As the concentration field does not depend on T, from(<ref>), we have that K_0x = 0 in 𝒟_1 and 𝒟_2, and therefore the problem in each period simplifies to finding c_0 = c_0(x; K_0) and Γ_0 = Γ_0(x; K_0) for a given surfactant flux K_0 = K_0(χ,τ). Hence, we solve the steady moderate-exchange equations from <cit.>, given byc_0x - α c_0xx - ν(Γ_0 - c_0)= 0 in𝒟_1^n,βΓ_0x - γ (Γ_0Γ_0x)_x - δΓ_0xx - ν(c_0 - Γ_0)= 0 in𝒟_1^n, c_0x - α c_0xx = 0 in𝒟_2^n,subject to the steady surfactant flux conditionsequation K_0 = c_0 - α c_0x + βΓ_0 - γΓ_0Γ_0x - δΓ_0xin𝒟_1^n,K_0 = c_0 - α c_0xin𝒟_2^n, a, band boundary conditions given in (<ref>).The solution to the surfactant concentration transport equations (<ref>, <ref>, <ref>) exhibits multiple asymptotic regimes, which are discussed in detail in <cit.>. 
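As a numerical cross-check of the steady plastron-scale problem above, the sketch below treats the strong-exchange limit in which c_0 ≈ Γ_0, so that the flux conditions reduce to the first-order relations (1+β)c_0 − γ c_0 c_0x − (α+δ)c_0x = K_0 over the interface and c_0 − α c_0x = K_0 over the solid region; periodicity is imposed by shooting on the upstream value c_0(−ϕ_x). The adsorption–desorption boundary layers near x = ±ϕ_x are not resolved by this outer approximation, and the parameter values (and the shooting bracket) are illustrative assumptions.

```python
# Sketch: steady plastron-scale surfactant profile in the strong-exchange limit
# (c0 ~ Gamma0), obtained by shooting on the upstream interface value c0(-phi_x).
# Parameter values are illustrative assumptions, not those used in this study.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# transport coefficients and geometry (assumed, O(1))
alpha, beta, gamma, delta = 1.0, 1.0, 1.0, 1.0
phi_x = 0.5          # streamwise gas fraction
K0    = 1.0          # prescribed surfactant flux over this period

def dcdx_D1(x, c):
    # flux condition on the interface D1: (1+beta)c - gamma*c*c' - (alpha+delta)c' = K0
    return ((1.0 + beta) * c - K0) / (gamma * c + alpha + delta)

def dcdx_D2(x, c):
    # flux condition over the solid region D2: c - alpha*c' = K0
    return (c - K0) / alpha

def downstream_value(cL):
    # integrate across D1 (interface) then D2 (solid) and return c0(2 - phi_x)
    s1 = solve_ivp(dcdx_D1, (-phi_x, phi_x), [cL], rtol=1e-10, atol=1e-12)
    s2 = solve_ivp(dcdx_D2, (phi_x, 2.0 - phi_x), [s1.y[0, -1]], rtol=1e-10, atol=1e-12)
    return s2.y[0, -1]

# periodicity: find cL such that c0(2 - phi_x) = c0(-phi_x); the bracket is an assumption
cL = brentq(lambda c: downstream_value(c) - c, 0.51 * K0, 0.999 * K0)

# reconstruct the interfacial profile and the concentration difference over the plastron
x1 = np.linspace(-phi_x, phi_x, 201)
s1 = solve_ivp(dcdx_D1, (-phi_x, phi_x), [cL], t_eval=x1, rtol=1e-10, atol=1e-12)
dGamma0 = s1.y[0, -1] - s1.y[0, 0]          # Gamma0(phi_x) - Gamma0(-phi_x)
print(f"c0(-phi_x) = {cL:.4f},  Delta Gamma0 = {dGamma0:.4f}")
```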
Briefly, we distinguish a strong-exchange problem (ν≫max(1,α,δ)), where the c_0 and Γ_0 fields are in equilibrium (c_0 ≈Γ_0), from a moderate-exchange problem (ν = O(1,α,δ)), where c_0 and Γ_0 are distinct. In the strong-exchange problem, we identify three primary areas of parameter space and two significant boundaries between them; these are summarised in figure <ref>(a). In the Marangoni-dominated (M) region (analysed in Appendix <ref>), the interfacial surfactant gradient immobilises the liquid–gas interface (leading to low drag reduction); in the advection-dominated (A) region (Appendix <ref>), the interfacial surfactant is swept to the downstream stagnation point of each plastron and the liquid–gas interface is mostly shear-free (high drag reduction); and in the diffusion-dominated (D) region (Appendix <ref>), the surfactant gradient is attenuated by diffusion and the liquid–gas interface is mostly shear-free (high drag reduction). Across the advection–Marangoni (AM) (Appendix <ref>) and the diffusion–Marangoni (DM) boundaries (Appendix <ref>), these effects compete to partially immobilise the liquid–gas interface (moderate drag reduction). Each of these regions has an analogue when exchange is weak (ν≪min(1,α,δ); see figure <ref>b).These sub-regions have the same leading-order physics as regions M, D and A and are referred to as Marangoni–exchange (M_E), diffusion–exchange (D_E) and advection–exchange(A_E) sub-regions.The link between surfactant flux K_0 and surfactant concentration is evident from (<ref>), by noting that K_0 can be scaled to unity under the mapping c_0→ K_0 c^*_0, Γ_0 → K_0 Γ^*_0 and γ→γ^*/K_0.Equivalently, by solving the surfactant transport equations (<ref>, <ref>) with K_0=1, we can capture variations in the surfactant flux parametrically through variations in γ. For instance, increasing K_0 from 1/2 to 1 for fixed α = 1 and γ=100 is equivalent (for the rescaled concentrations c^*_0 and Γ^*_0) to setting K_0=1 and increasing γ from 50 to 100 (illustrated by the right white line in figure <ref>a), thus moving away from the DM boundary and further into the M region. Similarly, increasing K_0 from 0.01 to 1 for fixed α = 0.1 and γ=10 has the more dramatic effect of moving from the A region (high drag reduction) to the M region (low drag reduction), by varying γ from 0.1 to 10 with K_0=1 (illustrated by the left white line in figure <ref>a). While we could reduce the number of parameters in the plastron-scale problem by using the rescaled concentrations c^*_0 and Γ^*_0 and by subsuming the parameter K_0 into the rescaled surfactant strength parameter γ^*=γ K_0, we choose to retain K_0 explicitly and use the concentrations c_0 and Γ_0, because it provides a crucial link between the plastron-scale and large channel-scale problems.We will return to these examples in <ref>.§.§.§ Second-order problemIn the second-order problem, we allow quantities such as the slip length and drag reduction to vary over the long length scale of the channel and slow time scale. Solvability conditions are imposed on the second-order problem to constrain u_1, c_1 and Γ_1 appearing in (<ref>)–(<ref>).These conditions are provided by the conservation arguments that result in the surfactant transport equations at O(ϵ^4). 
In 𝒟_1^n, (<ref>a, b) give bulk and interfacial equations, which can be combined into∫_𝒜_n (c_2t + c_1T) dA + 2𝒟/ℬ∫_ℐ_n(Γ_2t + Γ_1T)d z + K_1x = - λ( ∫_𝒜_n c_0τ dA +2𝒟/ℬ∫_ℐ_n Γ_0τ d z + K_0χ - λϵ^2(∫_𝒜_nc_0χχ/ dA +2𝒟/ℬ∫_ℐ_n Γ_0χχ/_I d z )),and in 𝒟_2, (<ref>c) gives∫_𝒜_n (c_2t+ c_1T)dA + K_1x = - λ(∫_𝒜_n c_0τ dA +K_0χ - λϵ^2 ∫_𝒜_nc_0χχ/ dA),using the definition of the leading- and first-order surfactant fluxes K_0 and K_1 given in (<ref>) and (<ref>), respectively. In (<ref>)–(<ref>), we have retained O(ϵ^2) diffusion terms with respect to χ in order to regularise any shocks that may arise because of the nonlinear dependence of c_0 and Γ_0 on K_0 (discussed further in <ref> below). To avoid secular growth of the net mass of surfactant, we require that the combined right-hand sides (i.e. source/sink terms) of the surfactant transport equations, (<ref>)–(<ref>), integrate to zero along one period. We know from <ref> that c_0 = c_0(x; K_0) and Γ_0 = Γ_0(x; K_0)where K_0 = K_0(χ,τ). As c_0 is assumed to be periodic in x, we can use the cell with n=0 as representative of all others. Hence, integrating the surfactant transport equations (<ref>)–(<ref>) over one period, using the velocity fields and fluxes from (<ref>)–(<ref>) and using the definition of the transport coefficients in (<ref>), we obtaind C_0/dτ + d A_0/dχ - d M_0/dχ - d D_0/dχ -λϵ^2 d^2 D_1/dχ^2 = 0,where C_0(K_0)=θ∫_x=-ϕ_x^2-ϕ_x c_0dx + σ∫_x=-ϕ_x^ϕ_x Γ_0 dx, (total weighted concentration), A_0(K_0)= ∫_x=-ϕ_x^2-ϕ_x c_0dx + β∫_x=ϕ_x^ϕ_x Γ_0dx (advective flux), M_0(K_0)= γ[Γ_0^2/2]_x = -ϕ_x^ϕ_x (Marangoni flux),D_0(K_0)= α[c_0]_x = -ϕ_x^2-ϕ_x+ δ[Γ_0]_x = -ϕ_x^ϕ_x (primary diffusive flux), D_1(K_0)= α∫_x=-ϕ_x^2-ϕ_x c_0dx + δ∫_x=-ϕ_x^ϕ_x Γ_0 dx, (secondary diffusive flux). We can express (<ref>) as a nonlinear advection–diffusion equation for the leading-order surfactant flux:∂ C_0/∂ K_0∂ K_0/∂τ+ (∂ A_0/∂ K_0 - ∂ M_0/∂ K_0 - ∂ D_0/∂ K_0)∂ K_0/∂χ - λϵ^2 ∂/∂χ(∂ D_1/∂ K_0∂ K_0/∂χ) = 0.Equation (<ref>) describes the spatio–temporal evolution of a disturbance to the flux of surfactant.It predicts how such disturbances are advected and spread by the flowover the long length scale (χ) and slow time scale (τ) that are characteristic of the channel flow.This equation is motivated by environmental surfactant concentrations that can vary significantly in space and time across the large length scales involved in applications <cit.>.As the initial surfactant flux evolves with respect to χ and τ, the surfactant flux transport (<ref>) is coupled to the surfactant concentration transport (<ref>, <ref>) over a given streamwise period. We illustrate the relationship between the surfactant flux and the resulting surfactant bulk concentration in figure <ref>, which shows how they both vary over 500 plastrons, in a case which is representative of the numerical simulations we perform in this study. Where K_0 is elevated, stronger adsorption can be expected to lead to interfacial rigidification, reducing the proportion of net flux K_0 carried by the interface and increasing the local drag.The impact of these changes on the evolution of the flux field is described by (<ref>). While K_0 varies smoothly with χ, the underlying concentration field has a wavy multiscale structure (inset). 
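To illustrate how the homogenised coefficients defined above are evaluated in practice, the sketch below computes C_0, A_0, M_0 and D_0 by quadrature for a prescribed plastron-scale profile and estimates the advective coefficient (A_0' − M_0' − D_0')/C_0', which sets the propagation speed of flux disturbances, by central differences in K_0. The profile used for the demonstration is the crude Marangoni-dominated approximation c_0 ≈ K_0 with Γ_0 ≈ K_0 + βx/γ on the interface, standing in for a computed solution; all parameter values are illustrative assumptions.

```python
# Sketch: quadrature of the homogenised coefficients C0, A0, M0, D0 over one period,
# and a central-difference estimate of the advective coefficient
# a(K0) = (A0' - M0' - D0') / C0' of the surfactant-flux equation.
# The plastron-scale profile is the crude region-M approximation c0 ~ K0,
# Gamma0 ~ K0 + beta*x/gamma (an assumed stand-in for a computed profile).
import numpy as np

alpha, beta, gamma, delta = 1.0, 1.0, 10.0, 1.0   # assumed transport coefficients
phi_x, phi_z, P_z = 0.5, 0.5, 1.0
L_d = 1.0                                   # assumed depletion depth D/B
theta = 4.0 * P_z                           # bulk capacitance
sigma = 4.0 * phi_z * P_z * L_d             # surface capacitance

x1 = np.linspace(-phi_x, phi_x, 401)        # interface, D1
x2 = np.linspace(phi_x, 2.0 - phi_x, 401)   # solid region, D2

def trap(y, x):
    # simple trapezoidal quadrature
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def coefficients(K0):
    c1 = np.full_like(x1, K0)               # bulk concentration over D1 (region M)
    c2 = np.full_like(x2, K0)               # bulk concentration over D2
    G1 = K0 + beta * x1 / gamma             # interfacial concentration over D1
    C0 = theta * (trap(c1, x1) + trap(c2, x2)) + sigma * trap(G1, x1)
    A0 = trap(c1, x1) + trap(c2, x2) + beta * trap(G1, x1)
    M0 = gamma * (G1[-1] ** 2 - G1[0] ** 2) / 2.0
    D0 = alpha * (c2[-1] - c1[0]) + delta * (G1[-1] - G1[0])
    return np.array([C0, A0, M0, D0])

def speed(K0, dK=1e-4):
    dC, dA, dM, dD = (coefficients(K0 + dK) - coefficients(K0 - dK)) / (2.0 * dK)
    return (dA - dM - dD) / dC

K0 = 0.5
print(f"a(K0={K0}) = {speed(K0):.4f};  region-M value 1/(theta + sigma*phi_x) = "
      f"{1.0 / (theta + sigma * phi_x):.4f}")
```

With this simple profile the finite-difference estimate reduces, as it should, to the Marangoni-dominated speed 1/(θ + σϕ_x) quoted later; a numerically computed profile (for instance from the plastron-scale solver sketched earlier) can be substituted without changing the quadrature machinery.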
We now aim to solve this coupled transport problem to compute the time- and space-varying leading-order drag reduction and slip associated for specific initial and boundary conditions of relevance to applications.We can solve (<ref>) using analytical methods and numerical techniques.Analytically, we will neglect O(ϵ^2) terms.This yields a hyperbolic problem for which the solution is found using the method of characteristics, generating simple formulae that can be used by experimentalists and practitioners. Numerically, we will retain secondary diffusion terms to regularise any shocks that may arise in hyperbolic problem and to validate the analytical results.We solve (<ref>) subject to an initial condition, K_0(χ,0), satisfying the constraint max(K_0(χ, 0)) = 1. However, bar this constraint, we can choose any initial condition for the distribution of surfactant flux in the channel. Here, we choose a Gaussian distribution, rather than a step or a ramp function, as it constitutes a classical example for which behaviours in purely advective transport systems are observed, such as: wave steepening,wave expansion or shock formation <cit.>.We solve (<ref>) subject to the initial and boundary conditions equation K_0(χ, 0) = K_b + (1 - K_b)exp(- (10 χ + 15/2)^2)),K_0(± L_x,τ) = K_b, a, bwhere L_x is sufficiently large for (<ref>b) to be approximately valid and K_b is the background surfactant flux. Taking K_b = 1/2, the distribution defined in (<ref>) is characteristic of a channel that is contaminated with a bolus of surfactant that locally doubles the background surfactant flux. In this case, the drag-reduction values remain in region M (see figure <ref>), as we will show in <ref>. We also take K_b = 0.01, representing an almost clean channel that is contaminated with a bolus of surfactant. We investigate K_b=0.01, as an alternative to K_b=1/2, as in this case values of the drag reduction can transition between region A (or D) and region M (figure <ref>).As we show in <ref>, the surfactant-flux distribution can exhibit shocks with such initial conditions. The dependence of C_0, A_0, M_0 and D_0 in (<ref>) on K_0, for different values of α, β, γ, δ, θ and σ, is illustrated in figure <ref>.We choose parameter values for α, β, γ and δ such that drag reduction values are generally at the AM boundary in the parameter space (see figure <ref>).By increasing K_0, drag reduction values transition from regions A to M when K_0 = O(2ϕ_x β/γ) (see Appendix <ref>). In figure <ref>(a, b), the relationship between C_0 and A_0 with K_0 is linear in regions M (see (<ref>a, b)) and A (see (<ref>a, b)), and nonlinear in between. In figure <ref>(c, d), the relationship between M_0 andD_0 with K_0 can be nonlinear in regions M and A, however, this nonlinearity does not affect the leading-order surfactant-flux distribution and the drag reduction in regions M and A, because γ≫max(1, α, δ) and max(α,δ,γ) ≪ 1, respectively (see Appendices <ref> and <ref>). In figure <ref>(d), when α = 100, we transition to region D because min(α, δ)≫max(1, γ) (see Appendix <ref>). 
In figure <ref>, we see that all the coefficients C_0, A_0, M_0 and D_0 have a nonlinear dependence on K_0 at the AM boundary, which will affect the leading-order surfactant-flux distribution and drag reduction.We will discuss this further in <ref>.§.§ Solving the surfactant flux evolution equationFirst, we neglect the O(ϵ^2) terms in (<ref>) and seek closed-form expressions for the surfactant-flux distribution and drag reduction using the method of characteristics.dK_0/dτ = 0 on the characteristic curves of (<ref>), which are solutions of the ODE <cit.>dχ/dτ = a(K_0(χ,τ)) = A'_0-M'_0 - D'_0/C'_0,where primes denote derivatives of the functions defined in (<ref>) with respect to K_0. The propagation speed, a, characterises how fast changes in the surfactant-flux distribution will be transported in space and time along the length of the channel. The characteristic curves χ = χ(τ) are straight lines and K_0 is constant along each characteristic. We can solve (<ref>) subject to (<ref>) provided that the characteristics do not intersect.The characteristic through (χ,τ) and (ξ, 0) has gradient (χ - ξ)/τ = dχ / dτ = a(K_0(χ,τ)) = a(K_0(ξ, 0)), and therefore,χ = ξ + a (K_0(ξ, 0)) τforξ∈ℝ,which gives ξ implicitly as a function of χ and τ, i.e. ξ =ξ(χ,τ). Hence, the solution to (<ref>) subject to (<ref>) is given by K_0(χ,τ) = K_0(ξ, 0). For ξ_1,ξ_2∈ℝ, the characteristic curves χ = ξ_1 + a(K_0(ξ_1, 0))τ and χ = ξ_2 + a(K_0(ξ_2, 0))τ intersect when τ = (ξ_2 - ξ_1)/(a(K_0(ξ_1, 0)) - a(K_0(ξ_2, 0))).In the limit ξ_2 →ξ_1, the shock time converges to τ = -1/a_ξ(K_0(ξ_1, 0)). Hence, the time τ_b when a shock first forms is given byτ_b = min_ξ∈ℝ{-1/a_ξ(K_0(ξ, 0))},and (<ref>) gives the streamwise location χ_b where a shock first forms.The integral form of (<ref>) can be used to derive the Rankine–Hugoniot condition, u_s = [[A_0 - M_0 - D_0]]/[[C_0]], where u_s is the shock speed, with the jump bracket defined as [[q]] = q(χ_s^+,τ) - q(χ_s^-,τ) and χ_s is the location of the shock <cit.>. The jump condition can then be integrated to determine the location of the shock for times τ > τ_b, for admissible shocks that satisfy the entropy condition,χ_s(τ) = u_s τ + B,where the integration constant B is determined using χ_s = χ_b at τ = τ_b from (<ref>)–(<ref>).Second, we retain O(ϵ^2) diffusive terms in (<ref>) and solve it numerically subject to the initial and boundary conditions (<ref>).We use the method of lines and a backwards-in-time and centered-in-space scheme.As discussed earlier, retaining small diffusive terms avoids numerical difficulties associated with shock formation and regularises the shocks through a small amount of diffusion.This procedure is outlined in detail in Appendix <ref>.§.§ Quantities of interest for applications As discussed in <ref>, the main quantities of interest in SHS applications, ranging from microchannel to marine transport, are the effective slip length and drag reduction. Following <cit.>, the leading-order drag reduction (over a plastron) depends on the total flux of surfactant viaDR_0 = 1 - γΔΓ_0/2ϕ_x β, whereΔΓ_0 ≡Γ_0(ϕ_x; K_0) - Γ_0(-ϕ_x; K_0). The drag reduction inherits a dependence on τ and χ from K_0; we therefore define the space- and time-averaged drag reduction as equation ⟨DR_0 ⟩_χ (τ) = 1/2L_x∫_χ = -L_x^L_xDR_0dχ, DR_0(χ) = 1/𝒯∫_τ = 0^𝒯DR_0dτ, a, bwhere [-L_x, L_x] is the length of the channel and [0,𝒯] is the time interval over which the drag reduction is measured. 
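As a concrete illustration of the characteristic construction above, the minimal sketch below evaluates the propagation speed along the Gaussian initial data and estimates the shock-formation time τ_b = min_ξ(−1/a_ξ) and the corresponding location χ_b by sampling ξ on a fine grid. For definiteness it uses the advection–Marangoni speed derived in the appendix, a_AM = 2β/(γ(σ+θ)K_0 + 2βθ(1−ϕ_x)); the capacitance values θ and σ are assumptions, so the printed numbers are illustrative rather than those of the figures.

```python
# Sketch: characteristics and shock-formation estimate for the surfactant-flux
# equation, using the Gaussian initial condition of the text and the AM-boundary
# propagation speed.  Parameter values are illustrative assumptions.
import numpy as np

beta, gamma, phi_x = 10.0, 5.0, 0.5
theta, sigma = 4.0, 2.0         # assumed bulk/surface capacitances
K_b = 0.01                      # background surfactant flux

def K_init(xi):
    # Gaussian bolus of surfactant flux, normalised so that max(K0) = 1
    return K_b + (1.0 - K_b) * np.exp(-(10.0 * xi + 7.5) ** 2)

def a(K):
    # AM-boundary propagation speed (appendix): slower where the flux is larger
    return 2.0 * beta / (gamma * (sigma + theta) * K + 2.0 * beta * theta * (1.0 - phi_x))

# sample the initial data and differentiate a(K0(xi, 0)) with respect to xi
xi = np.linspace(-1.5, 1.5, 20001)
a0 = a(K_init(xi))
a_xi = np.gradient(a0, xi)

# the shock first forms on characteristics with a_xi < 0, at tau_b = min(-1/a_xi)
mask = a_xi < 0
tau_candidates = -1.0 / a_xi[mask]
i_min = np.argmin(tau_candidates)
tau_b = tau_candidates[i_min]
chi_b = xi[mask][i_min] + a0[mask][i_min] * tau_b   # where characteristics first cross

print(f"shock first forms at tau_b = {tau_b:.3f}, chi_b = {chi_b:.3f}")
```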
Given a surfactant-flux distribution K_0, we can calculate the corresponding drag-reduction distribution DR_0 by solving the surfactant transport equations (<ref>, <ref>, <ref>) for each K_0 to get Γ_0(x; K_0), and then use (<ref>) to calculate DR_0. To evaluate the effective slip length λ_e over a plastron <cit.>, we integrate the leading-order streamwise momentum equation (<ref>b) for an equivalent channel with the mixed boundary conditions (<ref>a, <ref>a) replaced by λ_e u_0y - u_0 = 0.We obtain u_0 = Ǔ p_0x, where Ǔ = y(y-2)/2 - λ_e and p_0x is the same pressure gradient as in the SHS channel.The volume flux is Q̌ = ∫_z = - P_z^P_z∫_y = 0^2Ǔ p_0xdydz = (Q̆ - 2 P_z λ_e)p_0x, or by integrating over one period Q̌ = (Q̆ - 2 P_z λ_e) Δ p_0 / 2. Equating the volume flux of the equivalent channel with the volume flux of the SHS channel, Q̃p_0x + Q̅Γ_0x = 1, we find λ_e = DR_0(Δ p_I - Δ p_U)/P_z Δ p_I(Δ p_U DR_0+ Δ p_I (1-DR_0)),which can be used to convert results from DR_0 to λ_e. Therefore, with λ_e and DR_0 being directly related to K_0, the governing equations, (<ref>)–(<ref>), can also be rewritten as initial boundary value problems for either the effective slip length or drag reduction, that vary over the long length scale and slow time scale of the channel.§ RESULTSIn <ref>–<ref>, we investigate how the surfactant flux (K_0), drag reduction (DR_0), propagation speed (a), streamwise velocity (u_0) and surfactant concentration (c_0 and Γ_0) vary with the bulk diffusion (α), partition coefficient (β), surfactant strength (γ), surface diffusion (δ), exchange strength (ν), bulk capacitance (θ), surface capacitance (σ) and streamwise gas fraction (ϕ_x). We discuss the quantities that vary over the length of the channel (K_0, DR_0 and a) and their effect on the flow and surfactant transport (u_0, c_0 and Γ_0) over each period.Using asymptotic (detailed in Appendix <ref>) and numerical (Appendix <ref>) techniques, we evaluate the solution to (<ref>, <ref>) in the main regions and boundaries illustrated in figure <ref> and discussed in <ref>: the Marangoni-dominated (M) region (<ref>); the advection-dominated (A) region (<ref>); the diffusion-dominated (D) region (<ref>); the advection–Marangoni (AM) boundary (<ref>); and the diffusion–Marangoni (DM) boundary (<ref>).Shocks in the surfactant-flux and drag-reduction distribution can arise in regions AM and DM. Throughout <ref>, we fix the length of the channel L_x = 1, channel-height-to-streamwise-period ratio ϵ = 0.1, spanwise gas fraction ϕ_z = 0.5 and spanwise period width P_z=1. We construct asymptotic solutions for any L_x, ϕ_z and P_z in Appendix <ref> where ϵ≪ 1. §.§ Marangoni–dominated region§.§.§ Flow and surfactant flux transport at the channel scale Figure <ref>(a) shows how a bolus of surfactant flux, using initial and boundary conditions (<ref>), is advected along the length of the channel at increasing times. Taking K_b= 1/2 in (<ref>), K_0 remains sufficiently large for the flow to be everywhere in the Marangoni-dominated (M) region.As the surfactant flux and surfactant concentration increase, the leading-order drag reduction in figure <ref>(b) decreases, implying that the liquid–gas interface is more immobilised. We plot in figure <ref>(a, b)asymptotic solutions for the leading-order surfactant flux and drag reduction, (<ref>), derived in the limit where both Marangoni effects and bulk–surface exchange are strong compared to advection and diffusion (see region M in figure <ref>a). 
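Referring back to the drag-reduction and slip-length relations above, the sketch below wraps the two conversions as simple helpers: DR_0 = 1 − γΔΓ_0/(2ϕ_xβ) from an interfacial concentration difference, and λ_e from DR_0 given the immobilised and shear-free pressure drops per period. The numerical inputs are placeholders chosen only for illustration.

```python
# Sketch: convert an interfacial concentration difference into the leading-order
# drag reduction DR0, and DR0 into the effective slip length lambda_e, using the
# relations quoted in the text.  All numerical inputs are illustrative placeholders.

def drag_reduction(dGamma0, gamma, beta, phi_x):
    """DR0 = 1 - gamma*dGamma0/(2*phi_x*beta) over one plastron."""
    return 1.0 - gamma * dGamma0 / (2.0 * phi_x * beta)

def slip_length(DR0, dp_I, dp_U, P_z):
    """Effective slip length lambda_e from DR0 and the immobilised (dp_I) and
    shear-free (dp_U) pressure drops per period."""
    return DR0 * (dp_I - dp_U) / (P_z * dp_I * (dp_U * DR0 + dp_I * (1.0 - DR0)))

# placeholder values (assumptions): a partially immobilised plastron
gamma, beta, phi_x, P_z = 5.0, 10.0, 0.5, 1.0
dp_I, dp_U = 3.0, 2.0          # assumed pressure drops per period
DR0 = drag_reduction(dGamma0=1.0, gamma=gamma, beta=beta, phi_x=phi_x)
print(f"DR0 = {DR0:.3f}, lambda_e = {slip_length(DR0, dp_I, dp_U, P_z):.3f}")
```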
As the flux of surfactant is conserved in (<ref>), the space-averaged drag reduction (⟨DR_0⟩_χ) defined in (<ref>a) is constant provided the bolus of surfactant flux remains in the channel for the given time interval.However, if the bolus of surfactant flux is advected out of the channel, then the space- and time-averaged drag reduction ⟨DR_0⟩_χ defined in (<ref>) is maximised by maximising the propagation speed (a_M) in region M. In figure <ref>(b), ⟨DR_0⟩_χ=⟨DR_0 ⟩_χ = 0.06 for χ∈ [-1, 1] and τ∈ [0, 1]; the drag reduction is small as the liquid–gas interface is mostly immobilised with the background surfactant flux. In region M, the bolus of surfactant flux, (<ref>), is advected downstream by the flow with a constant propagation speed, a_M≈ 1/(θ + σϕ_x) (derived in Appendix <ref>), and therefore, the distribution of surfactant flux and drag reduction in figure <ref>(a, b) do not change shape as the bolus moves through the channel. For σ≪θ, we have that a_M≈ 1/θ, which corresponds to a dimensional speed Û_m = Q̂/ (4P̂_zĤ). This is the cross-channel-averaged bulk propagation speed and indicates bulk-dominated surfactant transport.For a fixed θ, adsorption at the liquid–gas interface causes the propagation speed of the bolus of surfactant flux to fall significantly compared to the cross-channel-averaged bulk propagation speed, reducing to a_M≈ 1 / (σϕ_x) for σ≫θ. Dimensionally, this corresponds to a reduction in the bulk propagation speed by a factor ϕ_x ϕ_z L_d and indicates surface-dominated transport, where L_d = K̂_a / (ĤK̂_d) is the normalised surfactant depletion length and ϕ_x ϕ_z is the area gas fraction of the SHS.Hence, the propagation of disturbances to the surfactant concentration field is significantly slower for more insoluble surfactants (large L_d) and when the area of adsorption, i.e. the liquid–gas interface 0<ϕ_x ϕ_z<1, is maximised.For either bulk or surface-dominated transport, the propagation speed in region M decreases with the bulk (θ=4 P_z) and surface capacitance (σ = 4 ϕ_z P_z L_d). For a fixed volume flux of fluid, an increase in the cross-channel area (spanwise surface area) will reduce the streamwise bulk (surface) velocity, implying that the bolus of surfactant flux will be advected slower throughout both 𝒟_1 and 𝒟_2. §.§.§ Flow and surfactant transport at the period scale The magnitude of the background surfactant flux, K_b= 1/2, means that the liquid–gas interface is almost immobile along the entire SHS and the leading-order bulk and interfacial concentrations (Γ_0 and c_0) in each period are linear with a shallow gradient, as shown in equation (<ref>) and figure <ref>(c).In (<ref>), we see that c_0 depends linearly on K_0 at leading-order, however, Δ c_0 does not, as the liquid–gas interface is already immobilised when K_0=O(1). As the bolus of surfactant flux passes over an individual plastron and K_0 varies from 1/2 up to 1 and back down to 1/2, the concentration rises (from times τ=0.2 to τ=0.4) and then falls (from times τ=0.4 to τ=0.7). 
We observe adsorption and desorption inside boundary layers around x = ±ϕ_x=± 0.5 where the bulk and interfacial concentrations deviate from each other, generating local surfactant gradients that reduce the streamwise slip velocity (u_0) close to the stagnation points (x=±ϕ_x=± 0.5) in figure <ref>(d).The streamwise slip velocity inherits a dependence on K_0 through c_0, as more surfactant increases the amount of immobilisation at the liquid–gas interface.Thus, the streamwise velocity at the interface falls and then rises as the bolus passes over an individual plastron (see curves from τ=0.2 to τ=0.7 in the graph in figure <ref>d).As mentioned in <ref>, the present long-wave theory does not capture inner regions close to the stagnation points where the streamwise velocity satisfies u_0 = 0, explaining the non-zero value exhibited by u_0(x,0,0) in figure <ref>(d) near x=±ϕ_x=± 0.5. As described in (<ref>a), we have instead imposed no flux of surfactant in the streamwise direction at these stagnation points.§.§ Advection and diffusion–dominated regions We also briefly discuss asymptotic results (derived in Appendix <ref> and <ref>) for the advection-dominated (A) and diffusion–dominated (D) regions (see regions A and D in figure <ref>a).When bulk–surface exchange is strong, the bolus of surfactant flux in (<ref>) propagates in a similar manner to figure <ref>(a), but is advected with speeds a_A≈ (β+1)/(ϕ_x(σ +θ)+ θ (1 - ϕ_x)(β+1)) and a_D≈ (α (1+β) + δ(1-ϕ_x))/((θ+σϕ_x)(α + δ(1-ϕ_x))) in the A and D regions, respectively, and with a new parameter dependency on the partition coefficient β = 2 L_d q̃ / Q̃. In regions A and D, the propagation speed increases with β.For β≫ 1 (β≪ 1), the flux of surfactant along the liquid–gas interface is greater (smaller) than the flux of surfactant in the bulk, which is non-dimensionalised to unity in (<ref>). Hence, as L_d grows, the localised concentration of surfactant will be advected faster along the liquid–gas interface, and therefore, the surfactant will be advected faster throughout 𝒟_1. When bulk–surface exchange is weak, e.g. regions M_E and D_E depicted in figure <ref>(b), the propagation speed is the same in all regions M, A and D (see Appendices <ref>, <ref> and <ref>), such that a_M = a_D = a_A≈ 1 / (θ + σϕ_x). §.§ Advection–Marangoni boundary§.§.§ Flow and surfactant flux transportat the channel scale In figure <ref>(a), we use asymptotic (<ref>) and numerical solutions (outlined in Appendix <ref>) to examine the spatio–temporal evolution of DR_0 along the length of the channel at the advection–Marangoni (AM) boundary, where Marangoni effects, interfacial advection and bulk–surface exchange are strong compared to diffusion (see the AM boundary in figure <ref>a).Taking K_b = 0.01, we focus on the case where γ K_0 ≤ 2ϕ_x β, which places the surfactant profile atthe plastron scale in the stagnant-cap regime <cit.>.The flow advects the bolus of surfactant flux (<ref>) through the channel with a propagation speed a_AM≈ 2 β / (γ (σ + θ) K_0 + 2 βθ (1 - ϕ_x)) that depends on the local flux K_0.As the bolus propagates in (χ,τ)-space, the wave steepens at the upstream edge (rearside of the wave) and ultimately a shock forms at some location and time along the channel, which we discuss further in <ref>. 
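To make the regime comparison concrete, the sketch below evaluates the propagation speeds quoted above, a_M, a_A and a_AM(K_0), for one assumed set of transport coefficients, and checks the limits noted in the appendix: a_AM recovers a_M as K_0 → 2ϕ_xβ/γ and, for β ≫ 1, approaches the advection-dominated value 1/(θ(1−ϕ_x)) as K_0 → 0. The capacitance values are assumptions and the numbers are illustrative only.

```python
# Sketch: compare the asymptotic propagation speeds quoted in the text for the
# Marangoni-dominated (M) and advection-dominated (A) regions and at the
# advection-Marangoni (AM) boundary.  Parameter values are illustrative assumptions.
beta, gamma, phi_x = 10.0, 5.0, 0.5
theta, sigma = 4.0, 2.0                      # assumed bulk/surface capacitances

a_M = 1.0 / (theta + sigma * phi_x)
a_A = (beta + 1.0) / (phi_x * (sigma + theta) + theta * (1.0 - phi_x) * (beta + 1.0))

def a_AM(K0):
    # AM-boundary speed, decreasing with the local surfactant flux K0
    return 2.0 * beta / (gamma * (sigma + theta) * K0 + 2.0 * beta * theta * (1.0 - phi_x))

print(f"a_M = {a_M:.3f},  a_A = {a_A:.3f}")
# a_AM recovers a_M in the formal limit K0 -> 2*phi_x*beta/gamma, and approaches the
# large-beta advection-dominated value 1/(theta*(1 - phi_x)) as K0 -> 0
for K0 in (2.0 * phi_x * beta / gamma, 0.5, 0.01):
    print(f"a_AM(K0 = {K0:.2f}) = {a_AM(K0):.3f}")
```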
At the AM boundary, the liquid–gas interface is partially immobilised and the space-averaged drag reduction in figure <ref>(a) is given by ⟨DR_0⟩_χ= ⟨DR_0 ⟩_χ = 0.95 for χ∈ [-1, 1] and τ∈ [0, 0.29], which is close to the shear-free value for reasons that we now discuss. In Appendix <ref>, we derive that DR_0 = 0.5 when γ = βϕ_x/ K_0. Hence, for β = 10, γ = 5 and ϕ_x=0.5 in figure <ref>(a), we transition from region A (as DR_0 > 0.5 for K_0 < 1) to region M only when K_0=1, and therefore, the flow is mostly dominated by advection.Figure <ref>(b) shows how the propagation speed depends on the spatio–temporal evolution of the bolus of surfactant at the AM boundary, using (<ref>). The wave steepening in the drag-reduction distribution observed in figure <ref>(a) occurs because there is wave steepening in the surfactant-flux distribution, and a small surfactant flux is advected faster than a large surfactant flux. Physically speaking, greater surfactant concentrations at a given location in the channel imply that the liquid–gas interface is more immobilised than it is for small surfactant concentrations.This implies that the streamwise slip velocity and thus the propagation speed decreases with increasing K_0.The dependence of a_AM on β, θ and σ is similar as for a_M, which is discussed in <ref>. When K_0 ≪ 1, we have that a_AM≈ 1 / (θ(1-ϕ_x)), the cross-channel-averaged bulk propagation speed enhanced by a factor proportional to the streamwise groove length 1-ϕ_x. §.§.§ Shock formation and regularisation via streamwise diffusionThe time taken for the shock to form in the surfactant-flux distribution, τ_b, increases as the partition coefficient (β) decreases and the bulk and surface capacitance parameters (θ and σ) increase.The location in space where the shock in the surfactant-flux distribution forms, χ_b, does not vary with respect to θ and σ; however, χ_b increases with decreasing β, for reasons that we now explain. Recall that, as β increases, the propagation speed increases because the interfacial surfactant flux increases and the bulk surfactant flux is fixed, and as θ and σ decrease, the propagation speed increases because the cross-sectional area reduces and the volume flux is fixed. A larger propagation speed means that the difference between the total flux of surfactant when the concentration is small and large is greater (see figure <ref>b), and therefore, the larger difference causes the wave to steepen faster and the shock forms earlier. Taking K_b= 0.01 in the initial distribution of surfactant flux prescribed in (<ref>), we can use (<ref>)–(<ref>) to calculate that a shock will form when (χ_b,τ_b) ≈ (0.34, 0.13); as illustrated by the red dot in figure <ref>(c). We can evaluate the shock speed and location χ = χ_s(τ) for times τ > τ_b using the Rankine–Hugoniot condition in (<ref>), as shown by the green curve in figure <ref>(c).The solid curves for τ = 0.17, 0.21, 0.25 and 0.29 in figure <ref>(a) are composed of the solution found using the method of characteristics combined with the above results for χ = χ_s(τ). For τ > 0.25, the minimum in the drag reduction starts to increase owing to nonlinear interaction with the SHS.Alternatively, we can solve the advection–diffusion equation in (<ref>) numerically using the method in Appendix <ref>.Diffusion over the length of the channel regularises the flow in the vicinity of the shock. 
This methodology allows us to compute the distribution of DR_0 for τ≥τ_b, as shown by the dotted curves for τ≥ 0.17 in figure <ref>(a).The solutions, inclusive of a small amount of diffusion (dotted lines), remain fairly close to the solutions without diffusion (solid lines).However, diffusion marginally reduces the maximum (minimum) amplitude of the surfactant-flux (drag-reduction) distribution and widens the surfactant-flux (drag-reduction) distribution as time progresses.§.§.§ Flow and surfactant transportat the period scale As the bolus of surfactant flux passes over a given plastron at the AM boundary, the proportion of the liquid–gas interface that is shear-free at the upstream end and no-slip at the downstream end varies; as seen from the concentration field in figure <ref>(d) and the streamwise slip velocity in figure <ref>(e).From (<ref>), the length of the shear-free region increases as K_0 decreases, β increases and γ decreases, as less surfactant is held at higher concentrations at the downstream stagnation point, x = ϕ_x = 0.5. The maximum concentration at the downstream end of the interface increases with K_0 and Δ c_0 = K_0.Therefore, DR_0 attains a minimum when K_0 is at its maximum, where the amplitude (difference between the maximum and minimum value of c_0) and length (distance between the end of the plastron and the start of no-slip region on the plastron) of the stagnant cap are greatest.At the leading edge of the bolus of surfactant flux (where the expansion wave is), large stagnant caps slowly transition into small ones over a long range in χ, and at the trailing edge of the bolus of surfactant flux (on the rear side, where the shock is), small stagnant caps rapidly transition into large ones over a shorter range in χ.§.§ Diffusion–Marangoni boundaryWe derive asymptotic solutions at the diffusion–Marangoni (DM) boundary in Appendix <ref>, where both diffusion and Marangoni effects dominate over interfacial advection (see the DM boundary in figure <ref>a).The asymptotic solution for a_DM has a complex dependence on α, β, δ, θ, σ and K_0 that is different to a_AM. Nevertheless, we find that a_DM exhibits similar trends as a_AM with respect to α, β, δ, θ, σ and K_0, and furthermore, because of its nonlinear dependence on K_0, wave-steepening effects can lead to shock formation in the surfactant-flux and drag-reduction distribution.When bulk–surface exchange is weak, e.g. at the DM_E boundary depicted in figure <ref>(b), we find that a_AM = a_DM≈ 1/(θ+σϕ_x). § DISCUSSIONSuperhydrophobic surfaces (SHSs) can be contaminated by trace amounts of surfactant, which may reduce their drag-reducing potential for applications in microchannels and marine hydrodynamics <cit.>.Field studies have shown that environmental surfactant concentrations can vary significantly in space and time <cit.>.Some of these variations can occur over length scales comparable to the size of the SHS, causing local variations in slip and drag reduction.To explore the impact of these local spatio–temporal variations in surfactant concentration, we have derived an asymptotic theory to model the unsteady transport of soluble surfactant in a laminar pressure-driven channel flow bounded between two SHSs.The SHSs are textured with grooves that are periodic in the streamwise and spanwise directions. 
Exploiting the multiple length scales in the problem, we have derived and solved a quasi-steady nonlinear advection–diffusion equation for surfactant concentration transport over moderate length scales, which is coupled to an unsteady nonlinear advection–diffusion equation for surfactant flux transport over large length scales. The governing partial differential equations for surfactant flux transport can be rewritten in terms of the slip length or drag reduction, key quantities of interest for practical applications. When there is a disturbance to the surfactant flux that varies over a large number of periods <cit.> but over a length smaller or similar to the size of the SHS, our model allows us to make predictions about the propagation speed, the shape of the disturbance, and the evolving distribution of slip length and drag reduction.Furthermore, in certain regions of parameter space, higher surfactant concentrations can lead to surface immobilisation and therefore slower surfactant flux transport, leading to wave-steepening and shock formation in the distribution of surfactant flux, slip length and drag reduction. We have investigated the transport of the surfactant flux and the corresponding reduction in drag along the channel length in distinct asymptotic regimes (figure <ref>), defined by the relative strength of the bulk diffusion (α), partition coefficient (β), surfactant strength (γ), surface diffusion (δ), exchange strength (ν) and background surfactant flux (K_b). Extending the results of <cit.>, we also identify the bulk capacitance (θ) and surface capacitance (σ) as key parameters, which quantify the bulk and surface response to time-dependent changes in the surfactant flux, respectively.The speed of propagation of disturbances to the surfactant flux across different asymptotic regimes is summarised in tables <ref> and <ref>. If a bolus of surfactant flux is injected into the channel, the propagation speed is constant in the Marangoni- (M), advection- (A) and diffusion-dominated (D) regimes, and the shape of the bolus of surfactant flux does not change appreciably along the length of the channel (figure <ref>). In region M, the interfacial concentration profile is linear, the liquid–gas interface is immobilised and there is negligible drag reduction (DR_0 ≪ 1); in region A, the interfacial concentration is constant and then increases in a downstream boundary layer, the liquid–gas interface is shear-free and there is substantial drag reduction (1-DR_0 ≪ 1); and in region D, interfacial concentration profile is uniform, the liquid–gas interface is shear-free and there is substantial drag reduction (1-DR_0 ≪ 1). However, at the advection–Marangoni (AM) and diffusion–Marangoni (DM) boundaries, the values of DR_0 can span the whole range from 0 to 1 along the surface, depending on the local surfactant flux. 
Here, a bolus of surfactant flux steepens at its upstream end (rearside of the distribution) as smaller concentrations of surfactant are advected faster than larger concentrations, because the liquid–gas interface is more mobile at lower concentrations, resulting in the formation of a shock (figure <ref>).We anticipate that the results observed here for a Gaussian initial distribution in the surfactant flux would be applicable to other distribution profiles due to the dominant advective nature of the transport at the channel scale.Increasing profiles in the surfactant flux would behave similar to the rear part of the Gaussian distribution (the increasing part) that we studied, exhibiting wave steepening and potential shocks.Decreasing profiles would behave similar to the front of the Gaussian distribution, exhibiting wave expansion. These effects arise only in the strong-exchange limit (large ν).In contrast, when exchange is weak, the propagation speed given in table <ref> is the same for all α, β, γ, δ, θ, σ and ϕ_x. As a practical illustration, we now evaluate the propagation speeds (a) presented in tables <ref> and <ref> using parameters characteristic of microchannel applications. In the analysis that follows, the transport coefficients in (<ref>) have been appropriately adjusted for the geometry employed in <cit.>, which is bounded by one SHS and solid wall rather than the two SHSs considered in <ref>. In regions M, M_E, A_E and D_E, θ≈ 4, σ≈ 20.8, and therefore, the dimensionless propagation speed is predicted to be a ≈ 0.04.Using ϵÛ = 2.4 × 10^-4/ as the velocity scale, a surfactant flux perturbation is advected out of the channel at approximately 9.6 × 10^-6/ when Marangoni effects dominate, significantly slower than the fluid itself.For regions A and D, α≈ 0.4, β≈ 3.7, δ≈ 1, and therefore, a ≈ 0.19. Thus, when advection or diffusion dominates, we find that the surfactant flux disturbance is advected out of the channel at approximately 4.5 × 10^-5/, approximately five times faster than the propagation speed in region M. The difference in propagation speed is because shear-free liquid–gas interfaces (regions A and D) that lack surfactant gradients give rise to greater streamwise velocities than immobilised liquid–gas interfaces (region M) with substantial surfactant gradients. We can then vary these parameters to maximise the propagation speed in regions M, A and D, and therefore, if we suppose a bolus of surfactant enters and leaves the channel in the measurement time interval, we can evaluate and then minimise the space and time-averaged drag reduction for microchannel applications. Our theory rests on several assumptions, which we summarise below in order to also suggest possible extensions to this study.First, the asymptotic expansion requires ϵ = Ĥ / P̂_x ≪ 1 and ℰ = P_x / L_x ≪ 1.This seems reasonable based on the microchannel configurations considered herein, e.g. Ĥ≈ 1× 10^-4, P̂_x ≈ 1 × 10^-2 and L̂_x ≈ 1 × 10^-1 in <cit.>.However, this may need to be revised in other applications, e.g. marine hydrodynamics, where the boundary layer grows over the surface of the vessel and L̂_x ≫ 1 × 10^-1.Second, we have only considered the case where diffusion is strong enough to eliminate cross-channel concentration gradients.The reader is referred to <cit.> for a discussion of the parameter regimes where cross-channel concentration gradients first become important. 
Third, several potential physical complications can arise when considering surfactant-contaminated superhydrophobic channels, such as liquid–gas interface deformation, the interaction of the interior flow with the external gas subphase <cit.> or turbulence in the outer flow field <cit.>, which have been neglected in this study. To summarise, we have shown how a disturbance to the surfactant concentration field can undergo wave-steepening as it propagates under a laminar channel flow bounded by SHSs.This nonlinear evolution is shared by the distributions of the effective slip and drag reduction in microchannel applications, emphasising the importance of treating these as dynamic quantities in time-evolving flows. § ACKNOWLEDGEMENTS We acknowledge support from CBET–EPSRC (EPSRC Ref. EP/T030739/1, NSF #2054894), as well as partial support from ARO MURI W911NF-17-1-0306. For the purpose of open access, the authors have applied a Creative Commons Attribution (CCBY) licence to any Author Accepted Manuscript version arising. F. T-C. acknowledges support from a distinguished postdoctoral fellowship from the Andlinger Center for Energy and the Environment. § DECLARATION OF INTERESTS The authors report no conflict of interest. § ASYMPTOTIC SOLUTIONS§.§ Strong Marangoni effect: region MAssume that β=O(1), γ≫max(1, α, δ) and ν≫max(1, α, δ), so that exchange is strong, c_0 = Γ_0, and expand using c_0 = c_00 + c_01 / γ + .... At O(γ), Marangoni effects are comparable to bulk advection and diffusion, and (<ref>) reduces to equation c_00 c_00x = 0 in𝒟_1,c_00 - c_00x = K_0 in𝒟_2,subject to c_00(ϕ_x^-) = c_00(ϕ_x^+),c_00(-ϕ_x) = c_00(2 - ϕ_x), a–dwhich gives c_00=K_0 in 𝒟_1∪𝒟_2. At O(1), Marangoni effects are comparable to advection and bulk diffusion, and (<ref>) gives equation β - c_01x = 0 in𝒟_1,c_01 - c_01x = 0 in𝒟_2,subject to c_01(ϕ_x^-) = c_01(ϕ_x^+),c_01(-ϕ_x) = c_01(2 - ϕ_x), a–dsuch that c_01 = β (x - ϕ_x (E + 1)/(E - 1)) in 𝒟_1 where E ≡exp(2(1-ϕ_x)/α). Similarly, at O(1/γ), (<ref>) reduces to equation K_0 c_02x = (β+1)c_01 - c_01c_01x - (α + δ)c_01xin𝒟_1,c_02 - c_02x = 0 in𝒟_2,subject to c_02(ϕ_x^-) = c_02(ϕ_x^+),c_02(-ϕ_x) = c_02(2 - ϕ_x), a–dwhich gives c_02 = β x^2/(2 K_0) + β x (ϕ_x - α - δ - 2ϕ_x E/(E - 1)))/K_0 + M_1 where M_1 is an integration constant. Hence, we have that equation c_0 = K_0 + β (x - ϕ_x (E + 1)/(E - 1))/γ + ..., Δ c_0 = 2βϕ_x/γ + .... a–cSubstituting (<ref>) into (<ref>), at leading order we have that equation C_0 ≈ 2 (θ + σϕ_x) K_0,A_0 ≈ 2 (1 + βϕ_x)K_0, M_0 ≈ 2βϕ_x K_0,D_0 ≈ 0, a–dwhere K_0 = K_0(χ,τ) and D_0 = O(1/γ).Then the advection equation, (<ref>) with λ = 0, has the solutionK_0 = K_0 (χ - a_Mτ, 0) where a_M≈1/θ + σϕ_x.Note that when σ→ 0, a_M→ 1/θ. Next, assume that ν≪min(1, α, δ) so that exchange is weak, and expand using c_0 = c_00 + c_01 / γ + ... and Γ_0 = Γ_00 + Γ_01 / γ + .... At O(γ), Marangoni effects are comparable to bulk advection and bulk diffusion, and (<ref>) reduces to equation(c_00 - α c_00x)_x = 0, Γ_00Γ_00x = 0 in𝒟_1,c_00 - α c_00x = K_0 in𝒟_2,subject to c_00(ϕ_x^-) = c_00(ϕ_x^+),c_00(-ϕ_x) = c_00(2 - ϕ_x), c_00(±ϕ_x) - α c_00x(±ϕ_x) = K_0, Γ_00(±ϕ_x) Γ_00x(±ϕ_x) = 0, a–gwhich gives Γ_00 = c_00 = K_0 as ∫_x=-ϕ_x^ϕ_x (Γ_00 - c_00)dx = 0. 
At O(1), Marangoni effects are comparable to advection and bulk diffusion, and (<ref>) gives equation(c_01 - α c_01x)_x = 0, β - Γ_01x = 0 in𝒟_1,c_01 - α c_01x = 0 in𝒟_2,subject to c_01(ϕ_x^-) = c_01(ϕ_x^+),c_01(-ϕ_x) = c_01(2 - ϕ_x), c_01(±ϕ_x) - α c_01x(±ϕ_x) = 0, β - Γ_01x(±ϕ_x) = 0, a–gsuch that Γ_01 = β x and c_01 = 0 as ∫_x=-ϕ_x^ϕ_x (Γ_01 - c_01)dx = 0. Hence, equation c_0 = K_0 + ..., Γ_0 = K_0 + β x / γ..., ΔΓ_0 = 2βϕ_x/γ + .... a, bSubstituting (<ref>) into (<ref>) we recover (<ref>) and the propagation speed in (<ref>).§.§ Strong advection: region AIn the advection–dominated (A) region, assume that β=O(1), γ≪ 1 and ν≫max(1, α, δ), such that c_0 = Γ_0 and expand using c_0 = c_00 + γ c_01 + .... At O(1), advection is comparable to diffusion, and (<ref>) reduces to equation (β+1) c_00 - (α + δ) c_00x = K_0 in𝒟_1,c_00 -α c_00x = K_0 in𝒟_2, a, bsubject to (<ref>c, d), which gives c_00=K_0/(β+1) + K_0 βexp((β+1)(x-ϕ_x)/(α+δ)) / (β+1) in 𝒟_1 and c_00 = K_0 in 𝒟_2, for max(α,δ) ≪ 1. Hence, we have that equation c_0 = K_0/β+1(1+ βexp((β+1)(x-ϕ_x)/α+δ))+ ..., Δ c_0 = K_0 β/β + 1 + .... a, b From (<ref>), c_0 is constant over the upstream end of the liquid–gas interface and increases exponentially in a boundary layer close to the downstream stagnation point. The surfactant gradient, size of the boundary layer and drag reduction increase with decreasing K_0, as the channel and liquid–gas interface becomes less contaminated with surfactant, or with increasing α and δ, as diffusion eliminates the concentration gradient. Substituting (<ref>) into (<ref>), at leading order we have that equation C_0 ≈2 ϕ _x (σ +θ) K_0 /β +1 + 2 θ (1 - ϕ _x )K_0,A_0 ≈ 2 K_0 , M_0 ≈ 0,D_0 ≈ 0, a–dwhere K_0 = K_0(χ,τ) and M_0 = D_0 = O(γ).Then the advection equation, (<ref>) with λ = 0, has the solution K_0 = K_0(x - a_Aτ, 0) where a_A≈(β+1)/ϕ_x (σ +θ )+ θ (1 - ϕ_x)(β+1).Note that when β→ 0 and σ→ 0, a_A→ 1/θ. Next, assume that ν≪min(1, α, δ) and expand using c_0 = c_00 + γ c_01 + ... and Γ_0 = Γ_00 + γΓ_01+ .... At O(1), diffusion is comparable to advection, and (<ref>) reduces to equation(c_00 - α c_00x)_x = 0, βΓ_00 - δΓ_00x + c_00 - α c_00x = K_0 in𝒟_1, c_00 - α c_00x = K_0 in𝒟_2, subject to c_00(ϕ_x^-) = c_00(ϕ_x^+),c_00(-ϕ_x) = c_00(2 - ϕ_x), c_00(±ϕ_x) - α c_00x(±ϕ_x) = K_0, βΓ_00(±ϕ_x)- δΓ_00x(±ϕ_x) = 0. a–gwhich gives c_00 = K_0 and Γ_00 = 2 ϕ_x β K_0 exp((β(ϕ_x + x))/δ)/(δ(exp((2βϕ_x)/δ) - 1)) as ∫_x=-ϕ_x^ϕ_x (Γ_00 - c_00)dx = 0. Hence, equation c_0 = K_0 + ..., Γ_0 = 2 ϕ_x β K_0 exp(β(ϕ_x +x)/δ)/δ(exp(2 βϕ_x /δ)-1)+ ..., ΔΓ_0 = 2 ϕ_x β K_0/δ + .... a, bSubstituting (<ref>) into (<ref>), we have equation C_0 ≈ 2 (θ + σϕ_x)K_0 ,A_0 ≈ 2(1 + βϕ_x)K_0, M_0 ≈ 0,D_0 ≈ 2 ϕ_x β K_0, a–dand we recover (<ref>). §.§ Strong diffusion: region DAssume that β=O(1), min(α, δ)≫max(1, γ) and ν≫max(1, α, δ) such that c_0 = Γ_0.Let δ = d α where d = O(1) and expand using c_0 = c_00 + c_01 / α + .... At O(α), diffusion dominates and (<ref>) reduces to equation c_00x = 0 in𝒟_1,c_00x = 0 in𝒟_2, a, bsubject to (<ref>c, d). At O(1), diffusion is comparable to advection, and (<ref>) gives equation (β + 1) c_00 - (1 + d) c_01x = K_0 in𝒟_1,c_00 - c_01x = K_0 in𝒟_2, a, bsubject to (<ref>c, d).Integrating (<ref>) over 𝒟_1 and 𝒟_2 gives c_00 = (K_0(α + δ(1-ϕ_x)))/(α (βϕ_x + 1) + δ (1 - ϕ_x)). Hence, we have that equation c_0 = (α + δ(1-ϕ_x))K_0/(α (βϕ_x + 1) + δ (1 - ϕ_x)) + ..., Δ c_0 = 2 βϕ_x (1-ϕ_x)K_0/α (βϕ_x + 1) + δ(1 - ϕ_x) + .... 
a–cFrom (<ref>), the surfactant concentration and drag increase linearly as the flux of surfactant increases over the SHS, and the interface becomes completely shear-free as K_0 → 0. Substituting (<ref>) into (<ref>), at leading order we have that equationC_0 ≈2 (θ + σϕ_x)(α + δ(1-ϕ_x))K_0 /α (βϕ_x + 1) + δ (1 - ϕ_x),A_0 ≈2 (1 + βϕ_x)(α + δ(1-ϕ_x))K_0 /α (βϕ_x + 1) + δ (1 - ϕ_x),M_0 ≈ 0,D_0 ≈2βϕ_x δ(1-ϕ_x)K_0/α (βϕ_x + 1) + δ(1 - ϕ_x), a–dwhere K_0 = K_0(χ,τ) and M_0 = O(1/α). Then the advection equation, (<ref>) with λ = 0, has the solutionK_0 = K_0(x - a_Dτ, 0) where a_D≈α(1 + βϕ_x) + δ(1-ϕ_x)/(θ + σϕ_x)(α + δ(1-ϕ_x)).Note that when β→ 0, δ→ 0 and σ→ 0, a_D→ 1/θ. When ν≪min(1, α, δ), the expansion and solution are the same as in region A, such that we recover (<ref>).§.§ Strong advection and strong Marangoni effect: the AM boundaryAt the advection–Marangoni boundary, assume that β = O(γ), γ≫max(α,δ), max(α,δ) ≪ 1 and ν≫max(1, α, δ), such that c_0 = Γ_0.Rescale α = a/γ, β = bγ and δ = d/γ, where a, b and d are positive O(1) constants. Expand using c_0 = c_00 + c_01/γ + .... At O(γ), Marangoni effects are comparable to advection, and (<ref>) reduces to equation c_00(b - c_00x) = 0 in𝒟_1,c_00 = K_0 in𝒟_2, a, bsubject to (<ref>c, d).For K_0/b ≤ 2ϕ_x, (<ref>) givesc_00 = 0 for all -ϕ_x≤ x≤ϕ_x-K_0/b, c_00 = b(x - ϕ_x) + K_0for allϕ_x-K_0/b ≤ x ≤ϕ_x. At O(1), Marangoni effects are comparable to advection, and (<ref>) gives equation b c_01 + c_00 - c_00c_01x - c_01c_00x= K_0 in𝒟_1,c_01 = 0 in𝒟_2, a, bsubject to (<ref>c, d).For K_0/b ≤ 2ϕ_x, (<ref>) gives c_01 = K_0/b for all -ϕ_x≤ x≤ϕ_x-K_0/b, c_01 = x - ϕ_x - K_0 log(b(x-ϕ_x)/K_0 + 1)/b for allϕ_x-K_0/b ≤ x ≤ϕ_x. The solution for K_0/b > 2ϕ_x is outlined in <cit.> and is not included here as it gives a similar propagation speed to region M. Substituting (<ref>)–(<ref>) into (<ref>), we have equation C_0 ≈(σ + θ) K_0^2/2 b + θ (2 - 2ϕ_x)K_0 ,A_0 ≈γ K_0^2/2 + 2K_0,M_0 ≈γ K_0^2/2 ,D_0 ≈ 0, a–dwhere K_0 = K_0(χ,τ).Then the advection equation, (<ref>) with λ = 0, has the solutionK_0 = K_0(χ - a_AMτ, 0) where a_AM≈2b/K_0(σ+θ)+ 2 b θ(1 - ϕ_x).As b → K_0 / (2ϕ_x), the bulk concentration c_00→ K_0 and we recover (<ref>). As b →∞, the bulk concentration c_00→ 0 everywhere except at x ≈ϕ where c_00 = K_0 and we recover (<ref>). Furthermore, as K_0 → 2ϕ_x b, the propagation speed a_AM→ a_M, and as K_0 → 0, the propagation speed a_AM→ a_A for β≫ 1. Next, assume that ν / ϵ^2 ≪min(1,α,δ) and expand using c_0 = c_00 + γ c_01 + ... and Γ_0 = Γ_00 + γΓ_01+ .... At O(γ), Marangoni effects are comparable to advection, and (<ref>) reduces to equationc_00x = 0,b Γ_00 - Γ_00Γ_00x = 0 in𝒟_1,c_00 = K_0 in𝒟_2,subject to c_00(ϕ_x^-) = c_00(ϕ_x^+),c_00(-ϕ_x) = c_00(2 - ϕ_x), c_00(±ϕ_x) = K_0, bΓ_00(±ϕ_x) - Γ_00(±ϕ_x) Γ_00x(±ϕ_x) = 0. a–gFor K_0 γ≤βϕ_x, (<ref>) givesΓ_00 = 0 for all -ϕ_x≤ x≤ϕ_x-2(ϕ_x K_0/b)^1/2,Γ_00 = b(x - ϕ_x) + 2(b ϕ_x K_0)^1/2for allϕ_x-2(ϕ_x K_0/b)^1/2≤ x ≤ϕ_x, as ∫_x = -ϕ_x^ϕ_x (Γ_00 - c_00)d x = 0 and c_00=K_0.At O(1), Marangoni effects are comparable to advection, and (<ref>) gives equationc_01x = 0,b Γ_01 - Γ_00Γ_01x - Γ_01Γ_00x = 0 in𝒟_1,c_01 = 0 in𝒟_2,subject to c_01(ϕ_x^-) = c_01(ϕ_x^+),c_01(-ϕ_x) = c_01(2 - ϕ_x),c_01(±ϕ_x) = 0,b Γ_01(±ϕ_x) - Γ_00(±ϕ_x) Γ_01x(±ϕ_x) - Γ_01(±ϕ_x) Γ_00x(±ϕ_x) = 0. 
a–gFor K_0 γ≤βϕ_x, (<ref>) gives Γ_01 = 0 for all -ϕ_x≤ x≤ϕ_x as ∫_x = -ϕ_x^ϕ_x (Γ_01 - c_01)d x = 0 and c_01=0.Substituting (<ref>) into (<ref>), we have equation C_0 ≈ 2 ( θ + σϕ_x)K_0,A_0 ≈ 2(1 + βϕ_x)K_0, M_0 ≈ 2βϕ_x K_0,D_0 ≈ 0, a–dand we recover (<ref>).§.§ Strong diffusion and strong Marangoni effect: the DM boundaryAssume that β = O(1), γ≫ 1 and ν≫max(1,α,δ), so that c_0 = Γ_0.Rescale α = 𝒜γ and δ = d γ, where 𝒜 and d are positive O(1) constants.Expand using c_0 = c_00 + c_01/γ + .... At O(γ), Marangoni effects are comparable to diffusion, and (<ref>) reduces to equation c_00 c_00x + (𝒜+d) c_00x = 0 in𝒟_1,c_00x = 0 in𝒟_2, a, bsubject to (<ref>c, d), which means that c_00 is constant.At O(1), Marangoni effects are comparable to advection and diffusion, and (<ref>) gives equation (β+1)c_00 - c_00c_01x - (𝒜+d)c_01x = K_0 in𝒟_1,c_00 - 𝒜 c_01x = K_0 in𝒟_2, a, bsubject to (<ref>c, d).Integrating (<ref>) over 𝒟_1 and 𝒟_2 gives c_00 = c_00(K_0) as the solution to the quadratic equation(c_00 - K_0)(ϕ_x - 1)(𝒜 + d + c_00) = 𝒜ϕ_x(c_00(β + 1) - K_0).We can then substitute c_00 back into (<ref>), which gives c_01 = (x((β+1)c_00 - K_0))/(𝒜 + d + c_00) + D_1 in 𝒟_1, where D_1 is an integration constant. Hence, equation c_0 = c_00(K_0) + ..., Δ c_0 = 2 ϕ_x ((β+1)c_00 - K_0))/𝒜 + d + c_00 + .... a, bAt the DM boundary, the shear stress at the liquid-gas interface is not sufficient to completely immobilise the interface and there is partial drag reduction, as seen in (<ref>).The concentration field is approximately constant with a shallow gradient in 𝒟_1 and 𝒟_2. Substituting (<ref>) into (<ref>) gives equation C_0 ≈ 2 (θ + σϕ_x)c_00,A_0 ≈ 2 (1 + βϕ_x) c_00,M_0 ≈4 ϕ_x c_00 ((β+1) c_00 - K_0)/𝒜 + d + c_00 ,D_0 ≈2 ϕ_x d ((β+1) c_00 - K_0))/𝒜 + d + c_00, a–fwhere c_00 = c_00(K_0).Then the advection equation, (<ref>) with λ = 0, has the solutionK_0 = K_0(χ - a_DMτ, 0) where a_DM(K_0) ≈A'_0 - M'_0 - D'_0/C'_0,where primes denote derivatives of the functions defined in (<ref>). As 𝒜→ 0 and d → 0, the bulk concentration c_00→ K_0 and we recover (<ref>). As 𝒜→∞ and d →∞, the bulk concentration c_00→ (α + δ (1 - ϕ_x))K_0/(α (βϕ_x+1) + δ(1 - ϕ_x)) and we recover (<ref>). Furthermore, as K_0 →∞, we need c_00 = K_0 to satisfy (<ref>) and the propagation speed a_DM→ a_M, and as K_0 → 0, we linearise (<ref>) (neglecting terms O(c_00^2, c_00 K_0)) and the propagation speed a_DM→ a_D. Next, assume that ν≪min(1, α, δ) and expand using c_0 = c_00 + c_01 / γ + ... and Γ_0 = Γ_00 + Γ_01 / γ + .... At O(γ), Marangoni effects are comparable to diffusion, and (<ref>) reduces to equationc_00xx = 0, 𝒜 c_00x + Γ_00Γ_00x + d Γ_00x = 0 in𝒟_1,c_00x = 0 in𝒟_2,subject to c_00(ϕ_x^-) = c_00(ϕ_x^+),c_00(-ϕ_x) = c_00(2 - ϕ_x), c_00x(±ϕ_x) = 0, Γ_00(±ϕ_x) Γ_00x(±ϕ_x) + d Γ_00x(±ϕ_x) = 0, a–gwhich implies that Γ_00 and c_00 are constant. At O(1), Marangoni effects are comparable to surface advection and diffusion, and (<ref>) gives equationc_01xx = 0,c_00 - 𝒜 c_01x + βΓ_00 - (Γ_00+d) Γ_01x = K_0 in𝒟_1,c_01x = 0 in𝒟_2,subject to c_01(ϕ_x^-) = c_01(ϕ_x^+),c_01(-ϕ_x) = c_01(2 - ϕ_x), c_00 - 𝒜 c_01x(±ϕ_x) = K_0, βΓ_00 - (Γ_00+d) Γ_01x(±ϕ_x) = 0. a–gsuch that c_00 = Γ_00 = K_0, Γ_01 = (β x K_0)/(d+K_0) + 𝒞 and c_01 = 𝒞, as ∫_x=-ϕ_x^ϕ_x (Γ_00 - c_00)dx = 0 and ∫_x=-ϕ_x^ϕ_x (Γ_01 - c_01)dx = 0, where 𝒞 is an integration constant. Hence, equation c_0 = K_0 + ..., Γ_0 = K_0 + ..., ΔΓ_0 = 2 ϕ_x β/γ(d+K_0) + .... 
a, bSubstituting (<ref>) into (<ref>) we have equationC_0 ≈ 2 (θ + σϕ_x) K_0, A_0 ≈ 2 (1 + βϕ_x) K_0, M_0 ≈2 ϕ_x β K_0^2/d + K_0, D_0 ≈2 ϕ_x β d K_0/d+K_0, a–dand we recover (<ref>). § NUMERICAL SOLUTION TO THE ADVECTION–DIFFUSION EQUATION In <ref>, we solve (<ref>) whilst retaining the O(ϵ^2) secondary-diffusion operator, partly for numerical convenience and partly to provide a rational regularisation of shock-like structures that may arise.The unsteady advection–diffusion equation is solved numerically using the method of lines and a backwards-in-time and centered-in-space (BTCS) scheme. At each timestep, we iterate C_0, A_0, M_0, D_0 and D_1, using c_0, Γ_0 and K_0 at the current timestep <cit.>.Discretising space such that χ_i = i Δχ for i = 0, 1, ..., N_χ = 2 L_xΔχ, where 2 L_x is length of the channel, we write (<ref>) at each χ_i for i = 1, 2, ..., N-1:(∂ C_0/∂ K_0)_i(d K_0/d t)_i = (∂ A_0/∂ K_0 - ∂ M_0/∂ K_0 - ∂ D_0/∂ K_0 - λϵ^2 ∂^2 D_1/∂ K_0^2∂ K_0/∂χ)_i(K_0,i+1 - K_0, i-1/2Δχ) - λϵ^2 (∂ D_1/∂ K_0)_i(K_0,i+1 -2K_0, i + K_0, i-1/Δχ^2) + O(Δχ^3),with periodic boundary conditions applied at interior (i=1, N-1), boundary interior (i=0, N) and ghost nodes (i=-1, N+1):K_0,0 = K_0,N, K_0,1 - K_0, -1/2Δχ = K_0,N+1 - K_0, N-1/2Δχ + O(Δχ^3).Assembling (<ref>)–(<ref>) into a matrix problem, the advection–diffusion equation in (<ref>) reduces to solving the system of ODEsdK_0/d t = A(K_0) K_0.We solve the problem in (<ref>) using an implicit Euler scheme. Hence, defining τ^n = nΔτ for n = 1, 2, ..., N, we have that(I - ΔτA(K_0^n+1))K_0^n+1 = K_0^n.Note that (<ref>) is nonlinear, therefore we initialise (<ref>) with the solution at the previous step K_0^n such that A = A(K_0^n), we then solve (<ref>) for K_0^n+1 and substitute the new solution into A = A(K_0^n+1), until K_0^n+1 varies less than some specified tolerance.jfm
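As a concrete illustration of the time-stepping procedure described above, the sketch below implements the implicit Euler step (I - Δτ A(K_0^n+1)) K_0^n+1 = K_0^n together with the fixed-point iteration on the nonlinear matrix A. It is a minimal sketch, not the solver used for the reported results: the routine assemble_A standing in for the BTCS assembly with periodic ghost nodes, the tolerance, and the iteration cap are illustrative assumptions.

```python
import numpy as np

def step_implicit_euler(K_prev, assemble_A, dt, tol=1e-10, max_iter=50):
    """One implicit Euler step of dK0/dtau = A(K0) K0 with fixed-point iteration.

    K_prev     : flux profile K_0^n on the periodic grid (1-D array of length N).
    assemble_A : placeholder callable returning the N x N matrix A(K0) built from
                 the BTCS stencil with periodic ghost nodes (assumed, not shown here).
    """
    N = K_prev.size
    I = np.eye(N)
    K_iter = K_prev.copy()                 # initialise the iteration with the previous step
    for _ in range(max_iter):
        A = assemble_A(K_iter)             # lag the nonlinearity at the current iterate
        K_next = np.linalg.solve(I - dt * A, K_prev)   # (I - dt A) K^{n+1} = K^n
        if np.max(np.abs(K_next - K_iter)) < tol:
            return K_next
        K_iter = K_next
    return K_iter

def march(K0, assemble_A, dt, n_steps):
    """March the surfactant flux forward in time; returns the full history."""
    history = [K0.copy()]
    for _ in range(n_steps):
        history.append(step_implicit_euler(history[-1], assemble_A, dt))
    return np.array(history)
```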
http://arxiv.org/abs/2310.18184v2
{ "authors": [ "Samuel D. Tomlinson", "Frédéric Gibou", "Paolo Luzzatto-Fegiz", "Fernando Temprano-Coleto", "Oliver E. Jensen", "Julien R. Landel" ], "categories": [ "physics.flu-dyn" ], "primary_category": "physics.flu-dyn", "published": "20231027145530", "title": "Unsteady evolution of slip and drag in surfactant-contaminated superhydrophobic channels" }
A general learning scheme for classical and quantum Ising machines [ ================================================================== In reinforcement learning, off-policy evaluation (ope) is the problem of estimating the expected return of an evaluation policy given a fixed dataset that was collected by running one or more different policies. One of the more empirically successful algorithms for ope has been the fitted q-evaluation (fqe) algorithm that uses temporal difference updates to learn an action-value function, which is then used to estimate the expected return of the evaluation policy. Typically, the original fixed dataset is fed directly into fqe to learn the action-value function of the evaluation policy. Instead, in this paper, we seek to enhance the data-efficiency of fqe by first transforming the fixed dataset using a learned encoder, and then feeding the transformed dataset into fqe. To learn such an encoder, we introduce an ope-tailored state-action behavioral similarity metric, and use this metric and the fixed dataset tolearn an encoder that models this metric. Theoretically, we show that this metric allows us to bound the error in the resulting ope estimate. Empirically, we show that other state-action similarity metrics lead to representations that cannot represent the action-value function of the evaluation policy, and that our state-action representation method boosts the data-efficiency of fqe and lowers ope error relative to other ope-based representation learning methods on challenging ope tasks. We also empirically show that the learned representations significantly mitigate divergence of fqe under varying distribution shifts. Our code is available here: <https://github.com/Badger-RL/ROPE>.§ INTRODUCTIONIn real life applications of reinforcement learning, practitioners often wish to assess the performance of a learned policy before allowing it to make decisions with real life consequences <cit.>. That is, they want to be able to evaluate the performance of a policy without actually deploying it. One approach of accomplishing this goal is to apply methods for off-policy evaluation (ope). ope methods evaluate the performance of a given evaluation policy using a fixed offline dataset previously collected by one or more policies that may be different from the evaluation policy.One of the core challenges in ope is that the offline datasets may have limited size. In this situation, it is often critical that ope algorithms are data-efficient. That is, they are able produce accurate estimates of the evaluation policy value even when only small amounts of data are available. In this paper, we seek to enhance the data-efficiency of ope methods through representation learning.While prior works have studied representation learning for ope, they have mostly considered representations that induce guaranteed convergent learning without considering whether data-efficiency increases <cit.>. For example, <cit.> introduce a method for learning Bellman complete representations for fqe but empirically find that having such a learned representation provides little benefit compared to fqe without the learned representation. Thus, in this work we ask the question, "can explicit representation learning lead to more data-efficient ope?"To answer this question, we take inspiration from recent advances in learning state similarity metrics for control <cit.>. These works define behavioral similarity metrics that measure the distance between two states. 
They then show that state representations can be learned such that states that are close under the metric will also have similar representations. In our work, we introduce a new ope-tailored behavioral similarity metric called Representations for Off-Policy Evaluation (rope) and show that learning rope representations can lead to more accurate ope.Specifically,rope first uses the fixed offline dataset to learn a state-action encoder based on this ope-specific state-action similarity metric, and then applies this encoder to the same dataset to produce a new representation for all state-action pairs. The transformed data is then fed into the fitted q-evaluation (fqe) algorithm <cit.> to produce an ope estimate. We theoretically show that the error between the policy value estimate with fqe + rope and the true evaluation policy value is upper-bounded in terms of how rope aggregates state-action pairs. We empirically show that rope improves the data-efficiency of fqe and leads to lower ope error compared to other ope-based representation learning baselines. Additionally, we empirically show that rope representations mitigate divergence of fqe under extreme distribution. To the best of our knowledge, our work is the first to propose an ope-specific state-action similarity metric that increases the data-efficiency of ope.§ BACKGROUND In this section, we formalize our problem setting and discuss prior work.§.§ Notation and Problem SetupWe consider an infinite-horizon Markov decision process (mdp) <cit.>, ℳ = ⟨, , ℛ, P, γ , d_0⟩, whereis the state-space,is the action-space, ℛ:×→Δ([0,∞)) is the reward function, P:×→Δ() is the transition dynamics function, γ∈[0,1) is the discount factor, and d_0∈Δ() is the initial state distribution, where Δ(X) is the set of all probability distributions over a set X. We refer to the joint state-action space as := ×. The agent acting, according to policy π, in the mdp generates a trajectory: S_0, A_0, R_0, S_1, A_1, R_1, ..., where S_0∼ d_0, A_t∼π(·|S_t),R_t∼ℛ(S_t, A_t), and S_t+1∼ P(·|S_t, A_t) for t≥0. We define r(s,a) := [ℛ(s, a)].We define the performance of policy π to be its expected discounted return, ρ(π) _[∑_t=0^∞γ^t R_t]. We then have the action-value function of a policy for a given state-action pair, q^π(s,a) = r(s,a) + γ_S' ∼ P(s,a), A' ∼π[q^π(S', A')], which gives the expected discounted return when starting in state s and then taking action a. Then ρ(π) can also be expressed as ρ(π) = _S_0 ∼ d_0, A_0 ∼π[q^π(S_0, A_0)].It is often more convenient to work with vectors instead of atomic states and actions. We use ϕ: 𝒮×𝒜→ℝ^d to denote a representation function that maps state-action pairs to vectors with some dimensionality d.§.§ Off-Policy Evaluation (OPE)In off-policy evaluation, we are given a fixed dataset of m transition tuples 𝒟 := {(s_i, a_i, s_i', r_i)}_i=1^m and an evaluation policy, . Our goal is to use 𝒟 to estimate ρ(). Crucially, 𝒟 may have been generated by a set of behavior policies that are different from , which means that simply averaging the discounted returns in 𝒟 will produce an inconsistent estimate of ρ(). We do not assume that these behavior policies are known to us, however, we do make the standard assumption that ∀ s ∈, ∀ a ∈ if (a|s) > 0 then the state-action pair (s,a) has non-zero probability of appearing in 𝒟.As done by <cit.>, we measure the accuracy of an ope estimator with the mean absolute error (mae) to be robust to outliers. Let ρ̂(, 𝒟) be the estimate returned by an ope method using 𝒟. 
The mae of this estimate is given as:mae[ρ̂] _𝒟[|ρ̂(, 𝒟) - ρ()|].While in practice ρ() is unknown, it is standard for the sake of empirical analysis <cit.> to estimate it by executing rollouts of .§.§ Fitted Q-Evaluation One of the more successful ope methods has been fitted q-evaluation (fqe) which uses batch temporal difference learning <cit.> to estimate ρ() <cit.>. fqe involves two conceptual steps: 1) repeat temporal difference policy evaluation updates to estimate q^(s,a) and then 2) estimate ρ() as the mean action-value at the initial state distribution. Formally, let the action-value function be parameterized by ξ i.e. q_ξ, then the following loss function is minimized to estimate q^:ℒ_FQE (ξ) := _(s, a, s', r)∼𝒟[(r(s,a) + γ_a' ∼(·|s') [q_ξ̅(s',a')] - q_ξ(s,a))^2]where ξ̅ is a separate copy of the parameters ξ and acts as the target function approximator <cit.> that is updated to ξ at a certain frequency. The learned q_ξ^* is then used to estimate the policy value: ρ̂() _s_0∼ d_0, a_0 ∼[q_ξ^*(s_0, a_0)]. While conceptually fqe can be implemented with many classes of function approximator to represent the q_ξ, in practice, deep neural networks are often the function approximator of choice. When using deep neural networks, fqe can be considered a policy evaluation variant of neural fitted q-iteration <cit.>.§.§ Related Work In this section, we discuss the most relevant prior literature on off-policy evaluation and representation learning. Methods for ope are generally categorized as importance-sampling based <cit.>, model-based <cit.>, value-function-based <cit.>, or hybrid <cit.>. Our work focuses on fqe, which is a representative value-function-based method, since it has been shown to have strong empirical performance <cit.>. We refer the reader to <cit.> for an in-depth survey of ope methods.Representation Learning for Off-policy Evaluation and Offline RLA handful of works have considered the interplay of representation learning with ope methods and offline RL. <cit.> benchmark a number of existing representation learning methods for offline RL and show that pre-training representation can be beneficial for offline RL. They also consider representation learning based on behavioral similarity and find that such representations do not enable successful offline RL. However, their study is focused on evaluating existing algorithms and on control. <cit.> introduced state abstraction <cit.> as an approach to lower the variance of ope estimates in importance-sampling based methods. However, their work made the strict assumption of granting access to a bisimulation abstraction in theory and relied on a hand-specified abstraction in practice. Only recently have works started to consider learning representations specifically for ope. <cit.> introduced a method for learning Bellman complete representations that enabled convergent approximation of q^ with linear function approximation. <cit.> show that using the output of the penultimate layer of 's action-value function provides realizability of q_, but is insufficient for accurate policy evaluation under extreme distribution shift. Our work explicitly focuses on boosting the data-efficiency of ope methods and lowers the error of ope estimates compared to<cit.> and <cit.>. 
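As a concrete illustration of the fqe loss ℒ_FQE introduced above, the following is a minimal PyTorch-style sketch of a single gradient step. The callable signatures, the Monte-Carlo sampling of a' ∼ π_e, the number of action samples, and the use of a plain squared error are illustrative assumptions rather than the exact implementation (in practice a Huber loss and a slowly updated target network are often used).

```python
import torch
import torch.nn.functional as F

def fqe_step(q_net, q_target, pi_e, batch, optimizer, gamma=0.99, n_samples=10):
    """One gradient step on L_FQE (sketch).

    q_net, q_target : callables q(s, a) -> value; q_target is a periodically updated copy of q_net.
    pi_e            : callable s -> a, sampling actions from the evaluation policy.
    batch           : tensors (s, a, s_next, r) drawn from the fixed dataset D.
    """
    s, a, s_next, r = batch
    with torch.no_grad():
        # Monte-Carlo estimate of E_{a' ~ pi_e}[ q_target(s', a') ]
        q_next = torch.stack(
            [q_target(s_next, pi_e(s_next)) for _ in range(n_samples)]
        ).mean(dim=0)
        target = r + gamma * q_next
    loss = F.mse_loss(q_net(s, a), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

The policy value estimate is then obtained, as above, by averaging the learned q_net over initial states with actions drawn from π_e.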
Representation Learning via Behavioral Similarity The representation learning method we introduce builds upon prior work in learning representations in which similar states share similar representations.Much of this prior work is based on the notion of a bisimulation abstraction in which two states with identical reward functions and that lead to identical groups of next states should be classified as similar <cit.>.The bisimulation metric itself is difficult to learn both computationally and statistically and so recent work has introduced various approximations <cit.>.To the best of our knowledge, all of this work has considered the online, control setting and has only focused on state representation learning.In contrast, we introduce a method for learning state-action representations for ope with a fixed dataset.One exception is the work of <cit.>, which proposes to learn state-action representations for offline policy improvement. However, as we will show in Section <ref>, the distance metric that they base their representations on is inappropriate in the ope context. § ROPE: STATE-ACTION BEHAVIORAL SIMILARITY METRIC FOR OFF-POLICY EVALUATION In this section, we introduce our primary algorithm: Representations for ope (rope), a representation learning method based on state-action behavioral similarity that is tailored to the off-policy evaluation problem. That is, using a fixed off-policy dataset 𝒟, rope learns similar representations for state-action pairs that are similar in terms of the action-value function of .Prior works on representation learning based on state behavioral similarity define a metric that relates the similarity of two states and then map similar states to similar representations <cit.>. We follow the same high-level approach except we focus instead on learning state-action representations for ope.One advantage of learning state-action representations over state representations is that we can learn a metric specifically forby directly sampling actions frominstead of using importance sampling, which can be difficult when the multiple behavior policies are unknown. Moreover, estimating the importance sampling ratio from data is known to be challenging <cit.>.Our new notion of similarity between state-action pairs is given by the recursively-defined rope distance, d_(s_1, a_1;s_2, a_2) := |r(s_1,a_1)-r(s_2,a_2)| + γ_s_1',s_2'∼ P, a_1', a_2'∼π_e[d_(s_1', a_1'; s_2', a_2')]. Intuitively, d_ measures how much two state-action pairs, (s_1,a_1) and (s_2,a_2), differ in terms of short-term reward and discounted expected distance between next state-action pairs encountered by . In order to compute d_, we define the rope operator:Given an evaluation policy , the rope operator ℱ^π_e: ℝ^𝒳×𝒳→ℝ^𝒳×𝒳 is given by:ℱ^π_e(d)(s_1, a_1; s_2, a_2) := |r(s_1, a_1) - r(s_2, a_2)|_short-term distance + γ_s_1',s_2'∼ P, a_1', a_2'∼π_e[d(s_1', a_1'; s_2', a_2')]_long-term distancewhere d:𝒳×𝒳→ℝ, s_1' ∼ P(s_1'|s_1,a_1), s_2' ∼ P(s_2'|s_2,a_2), a_1' ∼(·|s_1'), a_2' ∼(· | s_2') Given the operator, ℱ^π_e, we show that the operator is a contraction mapping, computes the rope distance, d_, and that d_ is a diffuse metric. For the background on metrics and full proofs, refer to the Appendix <ref> and <ref>. propositionpropcontractionThe operator ℱ^π_e is a contraction mapping on ℝ^𝒳×𝒳 with respect to the L^∞ norm. propositionpropfixedpointThe operator ℱ^π_e has a unique fixed point d_π_e∈ℝ^𝒳×𝒳. Let d_0∈ℝ^𝒳×𝒳, then lim_t→∞ℱ_t^π_e(d_0) = d_π_e. 
Propositions <ref> and <ref> ensure that repeatedly applying the operator on some function d:𝒳×𝒳→ℝ will make d converge to our desired distance metric, d_. An important aspect of d_ is that it is a diffuse metric:propositionpropdiffuse d_π_e is a diffuse metric. where a diffuse metric is the same as a psuedo metric (see Definition <ref> in Appendix <ref>) except that self-distances can be non-zero i.e. it may be true that d_π_e(s,a;s,a) > 0. This fact arises due to the stochasticity in the transition dynamics and action sampling from π_e. If we assume a deterministic transition function and a deterministic π_e, d_π_e will reduce to a pseudo metric, which gives zero self-distance. In practice, we use a sample approximation of the rope operator to estimate d_.Given that d_ is well-defined, we have the following theorem that shows why it is useful in the ope context: theoremthmDboundFor any evaluation policy π_e and (s_1,a_1), (s_2, a_2)∈𝒳, we have that |q^π_e(s_1,a_1) - q^π_e(s_2,a_2)|≤ d_π_e(s_1,a_1,;s_2,a_2).Given that our goal is learn representations based on d_, Theorem <ref> implies that whenever d_ considers two state-action pairs to be close or have similar representations, they will also have close action-values. In the context of ope, if the distance metric considers two state-action pairs that have different action-values to be zero distance apart/have the same representation, then fqe will have to output two different action-values for the same input representation, which inevitably means fqe must be inaccurate for at least one state-action pair.§.§ Learning State-Action Representations with ROPE In practice, our goal is to use d_ to learn a state-action representation ϕ(s,a) ∈ℝ^d such that the distances between these representations matches the distance defined by d_. To do so, we follow the approach by <cit.> and directly parameterize the value d_(s_1,a_1; s_2,a_2) as follows:d_(s_1, a_1; s_2, a_2) ≈d̃_ω(s_1, a_1; s_2, a_2) ||ϕ_ω(s_1,a_1)||_2^2 + ||ϕ_ω(s_2,a_2)||_2^2/2 + βθ(ϕ_ω(s_1, a_1), ϕ_ω(s_2, a_2))in which ϕ is parameterized by some function approximator whose parameter weights are denoted by ω, θ(·, ·) gives the angular distance between the vector arguments, and β is a parameter controlling the weight of the angular distance. We can then learn the desired ϕ_ω through a sampling-based bootstrapping procedure <cit.>. More specifically, the following loss function is minimized to learn the optimal ω^*:ℒ_ROPE (ω) := _𝒟[(|r(s_1,a_1) - r(s_2,a_2)| + γ_[d̃_ω̅(s_1',a_1'; s_2',a_2')] - d̃_ω(s_1,a_1; s_2,a_2))^2]where ω̅ is separate copy of ω and acts as a target function approximator <cit.>, which is updated to ω at a certain frequency. Once ϕ_ω^* is obtained using 𝒟, we use ϕ_ω^*with fqe to perform ope with the same data. Conceptually, the fqe procedure is unchanged except the learned action-value function now takes ϕ_ω^*(s,a) as its argument instead of the state and action directly. With rope, state-action pairs are grouped together when they have small pairwise rope distance. Thus, a given group of state-action pairs have similar state-action representations and are behaviorally similar (i.e, have similar rewards and lead to similar future states when following π_e). Consequently, these state-action pairs will have a similar action-value, which allows data samples from any member of the group to learn the group’s shared action-value as opposed to learning the action-value for each state-action pair individually. 
This generalized usage of data leads to more data-efficient learning. We refer the reader to Appendix <ref> for rope's pseudo-code.§.§ Action-Value and Policy Value Bounds We now theoretically analyze how rope state-action representations help fqe estimate ρ(). For this analysis, we focus on hard groupings where groups of similar state-action pairs are aggregated into one cluster and no generalization is performed across clusters; in practice, we learn state-action representations in which the difference between representations approximates the rope distance between state-action pairs. Furthermore, for theoretical analysis, we consider exact computation of the rope diffuse metric and of action-values using dynamic programming. First, we present the following lemma. For proofs, refer to Appendix <ref>. lemmalemmaQboundAssume the rewards ℛ:×→Δ([0,1]) then given an aggregated mdp ℳ = ⟨, ,ℛ, P, γ , d̃_0⟩ constructed by aggregating state-actions in an ϵ-neighborhood based on d_, and an encoder ϕ: → that maps state-actions in 𝒳 to these clusters, the action-value for the evaluation policyin the two mdps are bounded as:| q^(x) - q̃^(ϕ(x))| ≤2ϵ/(1 - γ) Lemma <ref> states that the error in our estimate of the true action-value function ofis upper-bounded by the clustering radius of d_, ϵ. Lemma <ref> then leads us to our main result: theoremthmJboundUnder the same conditions as Lemma <ref>, the difference between the expected fitted q-evaluation (fqe) estimate and the expected estimate of fqe+rope is bounded:| _s_0,a_0∼[q^(s_0,a_0)] - _s_0,a_0∼[q^(ϕ(s_0,a_0))]| ≤2ϵ/(1 - γ)Theorem <ref> tells us that the error in our estimate of ρ() is upper-bounded by the size of the clustering radius ϵ.The implication is that grouping state-action pairs according to the rope diffuse metric enables us to upper bound error in the ope estimate. At an extreme, if we only group state-action pairs with zero rope distance together then we obtain zero absolute error meaning that the action-value function for the aggregated mdp is able to realize the action-value function of the original mdp. § EMPIRICAL STUDYIn this section, we present an empirical study of rope designed to answer the following questions: * Does rope group state-actions that are behaviorally similar according to q^?* Does rope improve the data-efficiency of fqe and achieve lower ope error than other ope-based representation methods?* How sensitive is rope to hyperparameter tuning andextreme distribution shifts?§.§ Empirical Set-up We now describe the environments and datasets used in our experiments.Didactic Domain. We provide intuition about rope on our gridworld domain. In this tabular and deterministic environment, an agent starts from the bottom left of a 3×3 grid and moves to the terminal state at the top right. The reward function is the negative of the Manhattan distance from the top right.stochastically moves up or right from the start state and then deterministically moves towards the top right, and moves deterministically right when it is in the center. The behavior policy π_bacts uniformly at random in each state. We set γ =0.99.High-Dimensional Domains. We conduct our experiments on five domains: HumanoidStandup, Swimmer, HalfCheetah, Hopper, and Walker2D, each of which has393, 59, 23, 14, and 23 as the native state-action dimension respectively. We set γ =0.99. Datasets. We consider 12 different datasets: 3 custom datasets for HumanoidStandup, Swimmer, and HalfCheetah; and 9 d4rl datasets <cit.> for HalfCheetah, Hopper, and Walker2D. 
Each of the three custom datasets is of size 100K transition tuples with an equal split between samples generated byand a lower performing behavior policy. For the d4rl datasets, we consider three types for each domain: random, medium, medium-expert, which consists of samples from a random policy, a lower performing policy, and an equal split between a lower performing and expert evaluation policy (). Each dataset has 1M transition tuples. Note that due to known discrepancies between environment versions and state-action normalization procedures [<https://github.com/Farama-Foundation/D4RL/tree/master>], we generate our own datasets using the publicly available policies[<https://github.com/google-research/deep_ope>] instead of using the publicly available datasets. See Appendix <ref> for the details on the data generation procedure.Evaluation Protocol. Following <cit.> and to make error magnitudes more comparable across domains, we use relative mean absolute error (rmae). rmae is computed using a single dataset 𝒟 and by generating n seeds: rmae_i (ρ̂(π_e)) := |ρ(π_e) - ρ̂_̂î(π_e)|/|ρ(π_e) - ρ(π_rand) )|, whereρ̂_̂î(π_e) is computed using the i^th seedand ρ(π_rand) is the value of a random policy. We then report the Interquartile Mean (iqm) <cit.> of these n rmaes.Representation learning + OPE. Each algorithm is given access to the same fixed dataset to learn q^. The representation learning algorithms (rope and baselines) use this dataset to first pre-train a representation encoder, which is then used to transform the fixed dataset. This transformed dataset is then used to estimate q^. Vanilla fqe directly operates on the original state-action pairs.§.§ Empirical Results We now present our main empirical results.§.§.§ Designing ROPE: A State-Action Behavioral Similarity Metric for OPEThe primary consideration when designing a behavioral similarity distance function for ope, and specifically, for fqe is that the distance function should not consider two state-action pairs with different q^ values to be the same. Suppose we have a distance function d, two state-actions pairs, (s_1, a_1) and (s_2, a_2), and their corresponding q^. Then if d(s_1, a_1; s_2, a_2)= 0, it should be the case that q^(s_1, a_1)= q^(s_2, a_2). On the other hand, if d(s_1, a_1; s_2, a_2)= 0 but q^(s_1, a_1) and q^(s_2, a_2) are very different, then fqe will have to output different action-values for the same input, thus inevitably making fqe inaccurate on these state-action pairs.While there have been a variety of proposed behavioral similarity metrics for control, they do not always satisfy the above criterion for ope. We consider various state-action behavioral similarity metrics. Due to space constraints, we show results only for: on-policy mico <cit.> d_π_b(s_1, a_1; s_2, a_2) := |r(s_1,a_1)-r(s_2,a_2)| + γ_a_1',a_2'∼π_b[d_π_b((s_1', a_1'), (s_2', a_2'))], which groups state-actions that have equal q^π_b, and defer results for the random-policy metric <cit.> and policy similarity metric <cit.> to the Appendix <ref>.We visualize how these different metrics group state-action pairs in our gridworld example where a state-action is represented by a triangle in the grid (Figure <ref>). The gridworld is 3×3 grid represented by 9 squares (states), each having 4 triangles (actions). A numeric entry in a given triangle represents either: 1) the action-value of that state-action pair for(Figure <ref>) or 2) the group ID of the given state-action pair (Figures <ref> and <ref>). 
Along with the group ID, each state-action pair is color-coded indicating its group. In this tabular domain, we compute the distances using dynamic programming with expected updates.The main question we answer is: does a metric group two state-action pairs together when they have the same action-values under π_e? In Figure <ref> we see the q^π_e values for each state-action where all state-action pairs that have the same action-value are grouped together under the same color (e.g. all state-action pairs with q^π_e(·,·) = -6 belong to the same group (red)). In Figure <ref>, we see that rope's grouping is exactly aligned with the grouping in Figure <ref> i.e. state-action pairs that have the same action-values have the same group ID and color. On the other hand, from Figure <ref>, we see that on-policy mico misaligns with Figure <ref>. In Appendix <ref>, we also see similar misaligned groupings using the random-policy metric <cit.> and policy similarity metric <cit.>. The misalignment of these metrics is due to the fact that they do not group state-action pairs togethers that share q^ values. §.§.§ Deep OPE Experiments We now consider ope in challenging, high dimensional continuous state and action space domains. We compare the rmae achieved by an ope algorithm using different state-action representations as input. If algorithm A achieves lower error than algorithm B, then A is more data-efficient than B. Custom Dataset Results For the custom datasets, we consider mild distribution shift scenarios, which are typically easy for ope algorithms. In Figure <ref>, we report the rmae vs. training iterations of fqe with different state-action features fed into fqe. We consider three different state-action features: 1) rope (ours), 2) π_e-critic, which is a representation outputted by the penultimate layer of the action-value function of<cit.>, and 3) the original state-action features. Note that there is no representation learning involved for 2) and 3). We set the learning rate for all neural network training (encoder and fqe) to be the same, hyperparameter sweep rope across β and the dimension of rope's encoder output, and report the lowest rmae achieved at the end of fqe training. For hyperparameter sensitivity results, see Section <ref>. For training details, see Appendix <ref>.We find that fqe converges to an estimate of ρ() when it is fed these different state-action features. We also see that when fqe is fed features from rope it produces more data-efficient ope estimates than vanilla fqe. Under these mild distribution shift settings, -critic also performs well since the output of the penultimate layer of 's action-value function should have sufficient information to accurately estimate the action-value function of . D4RL Dataset Results On the d4rl datasets, we analyze the final performance achieved by representation learning + ope algorithms on datasets with varying distribution shift. In addition to the earlier baselines, we evaluate Bellman Complete Learning Representations (bcrl) <cit.>, which learns linearly Bellman complete representations and produces an ope estimate with Least-Squares Policy Evaluation (lspe) instead of fqe. We could not evaluate -critic since the d4rlcritics were unavailable[<https://github.com/google-research/deep_ope>]. For bcrl, we use the publicly available code [<https://github.com/CausalML/bcrl>]. For a fair comparison, we hyperparameter tune the representation output dimension and encoder architecture size of bcrl. 
We hyperparameter tune rope the same way as done for the custom datasets. We set the learning rate for all neural network training (encoder and fqe) to be the same. In Table <ref>, we report the lowest rmae achieved at the end of the ope algorithm's training. For the corresponding training graphs, see Appendix <ref>. We find that rope improves the data-efficiency of fqe substantially across varying distribution shifts. bcrl performs competitively, but its poorer ope estimates compared to rope is unsurprising since it is not designed for data-efficiency. It is also known that bcrl may produce less accurate ope estimates compared to fqe <cit.>. fqe performs substantially worse on some datasets; however, it is known that fqe can diverge under extreme distribution shift <cit.>. It is interesting, however, that rope is robust in these settings. We observe this robustness across a wide range of hyperparameters as well (see Section <ref>). We also find that when there is low diversity of rewards in the batch (for example, in the random datasets), it is more likely that the short-term distance component of rope is close to 0, which can result in a representation collapse.§.§.§ Ablations Towards a deeper understanding of rope, we now present an ablation study of rope. Hyperparameter Sensitivity In ope, hyperparameter tuning with respect to rmae is difficult since ρ() is unknown in practice <cit.>. Therefore, we need ope algorithms to not only produce accurate ope estimates, but also to be robust to hyperparameter tuning. Specifically, we investigate whether rope's representations produce more data-efficient ope estimates over fqe across rope's hyperparameters. In this experiment, we set the action-value function's learning rate to be the same for both algorithms. The hyperparameters for rope are: 1) the output dimension of the encoder and 2) β, the weight on the angular distance between encodings. We plot the results in Figure <ref> and observe that rope is able to produce substantially more data-efficient estimates compared to fqe for a wide range of its hyperparameters on the Walker2D-medium dataset, where fqe diverged (see Table <ref>). While it is unclear what the optimal hyperparameters should be, we find similar levels of robustness on other datasets as well (see Appendix <ref>). ROPE Representations Mitigate FQE Divergence It has been shown theoretically <cit.> and empirically <cit.> that under extreme distribution shift, fqe diverges i.e. it produces ope estimates that have arbitrarily large error. In Table <ref>, we also see similar results where fqe produces very high error on some datasets. fqe tends to diverge due to the deadly triad <cit.>: 1) off-policy data, 2) bootstrapping, and 3) function approximation.A rather surprising but encouraging result that we find is that even though rope faces the deadly triad, it produces representations that significantly mitigate fqe's divergence across a large number of trials and hyperparameter variations. To investigate how much rope aids convergence, we provide the performance profile[<https://github.com/google-research/rliable/tree/master>] <cit.> based on the rmae distribution plot in Figure <ref>. Across all trials and hyperparameters, we plot the fraction of times an algorithm achieved an error less than some threshold. 
In addition to the earlier baselines, we also plot the performance of 1) fqe-clip which is fqe but whose bootstrapping targets are clipped between [r_min/1 - γ, r_max/1 - γ], where r_min and r_max are the minimum and maximum rewards in the fixed dataset; and 2) fqe-deep, which is regular fqe but whose action-value function network is double the capacity of fqe (see Appendix <ref> for specifics). From Figure <ref>, we see that nearly ≈ 100% of the runs of rope achieve an rmae of ≤ 2, while none of the fqe and fqe-deep runs produce even ≤ 10 rmae. The failure of fqe-deep suggests that the extra capacity rope has over fqe (since rope has its own neural network encoder) is insufficient to explain why rope produces accurate ope estimates. We also find that in order to use fqe with the native state-action representations, it is necessary to use domain knowledge and clip the bootstrapped target. While fqe-clip avoids divergence, it is very unstable during training (see Appendix <ref>). rope's ability to produce stable learning in fqe without any clipping is promising since it suggests that it is possible to improve the robustness of fqe if an appropriate representation is learned. § LIMITATIONS AND FUTURE WORKIn this work, we showed that rope was able to improve the data-efficiency of fqe and produce lower-error ope estimates than other ope-based representations. Here, we highlight limitations and opportunities for future work. A limitation of rope and other bisimulation-based metrics is that if the diversity of rewards in the dataset is low, they are susceptible to representation collapse since the short-term distance is close to 0. Further investigation is needed to determine how to overcome this limitation. Another very interesting future direction is to understand why rope's representations significantly mitigated fqe's divergence. A starting point would be to explore potential connections between rope and Bellman complete representations <cit.> and other forms of representation regularizers for fqe[<https://offline-rl-neurips.github.io/2021/pdf/17.pdf>].§ CONCLUSIONIn this paper we studied the challenge of pre-training representations to increase the data efficiency of the fqe ope estimator. Inspired by work that learns state similarity metrics for control, we introduced rope, a new diffuse metric for measuring behavioral similarity between state-action pairs for ope and used rope to learn state-action representations using available offline data. We theoretically showed that rope: 1) bounds the difference between the action-values between different state-action pairs and 2) results in bounded error between the value ofaccording to the ground action-value and the action-value function that is fed with rope representations as input. We empirically showed that rope boosts the data-efficiency of fqe and achieves lower ope error than other ope-based representation learning algorithms. Finally, we conducted a thorough ablation study and showed that rope is robust to hyperparameter tuning and significantly mitigates fqe's divergence, which is a well-known challenge in ope. To the best of our knowledge, our work is the firstthat successfully uses representation learning to improve the data-efficiency of ope.§ REMARKS ON NEGATIVE SOCIETAL IMPACTOur work is largely focused on studying fundamental rl research questions, and thus we do not see any immediate negative societal impacts.The aim of our work is to enable effective ope in many real world domains. 
Effective ope means that a user can estimate policy performance prior to deployment which can help avoid deployment of poor policies and thus positively impact society.§ ACKNOWLEDGMENTS Thanks to Adam Labiosa and the anonymous reviewers for feedback that greatly improved our work. Support for this research was provided by American Family Insurance through a research partnership with the University of Wisconsin—Madison’s Data Science Institute. plainnat§ THEORETICAL BACKGROUNDIn this section, we include relevant background material. A metric, d:X× X→ℝ_≥ 0 has the following properties for some x,y,z∈ X: * d(x,x) = 0* d(x,y) = 0 ⟺ x = y* Symmetry: d(x,y) = d(y,x)* Triangle inequality: d(x,z)≤ d(x,y) + d(y,z) A pseudo metric, d:X× X→ℝ_≥ 0 has the following properties for some x,y,z∈ X:* d(x,x) = 0* Symmetry: d(x,y) = d(y,x)* Triangle inequality: d(x,z)≤ d(x,y) + d(y,z)Crucially, a pseudo metric differs from a metric in that if d(x,y) = 0 it may be the case that x ≠ y.A diffuse metric, d:X× X→ℝ_≥ 0 has the following properties for some x,y,z∈ X:* d(x,x) ≥ 0* Symmetry: d(x,y) = d(y,x)* Triangle inequality: d(x,z)≤ d(x,y) + d(y,z)Crucially, a diffuse metric differs from a pseudo metric in that self-distances may be non-zero. For readers interested in distances that admit non-zero self-distances, we refer them to material on partial metrics <cit.>. We make the following note as <cit.>: the original definition of partial metrics (see <cit.>) uses a different triangle inequality criterion than the one in Definition <ref> and is too strict (i.e. diffuse metrics violate this triangle inequality criterion), so we consider the diffuse metric definition presented in this paper.We now present background material on the Wasserstein and related distances.Let d:X× X→ℝ_≥ 0 be a distance function and Ω the set of all joint distributions with marginals μ and λ over the space X, then we have:W(d)(μ, λ) = (inf_ω∈Ω𝔼_x_1,x_2∼ω[d(x_1,x_2)]) Let d:X× X→ℝ_≥ 0 be a distance function and marginals μ and λ over the space X, then we have: W(d)(μ, λ) = sup_f∈Lip_1,d(X)𝔼_x_1∼μ[f(x_1)] - 𝔼_x_2∼λ[f(x_2)]where Lip_1,d(X) denotes the 1-Lipschitz functions f:X→ℝ such that |f(x_1)-f(x_2)|≤ d(x_1,x_2).Let d:X× X→ℝ_≥ 0 be a distance function and marginals μ and λ over the space X, then we have: D_LK(d)(μ, λ) = (𝔼_x_1∼μ,x_2∼λ[d(x_1,x_2)])We then have the following fact: W(d)(μ, λ)≤ D_LK(d)(μ, λ) i.e. the Wasserstein distance is upper-bounded by the Łukaszyk–Karmowski distance <cit.>.§ THEORETICAL RESULTS*Consider d, d'∈ℝ^𝒳×𝒳, then we have:||(ℱ^π_ed)(s_1, a_1; s_2, a_2)- (ℱ^π_ed')(s_1, a_1; s_2, a_2)||_∞= ||γ_s_1',s_2'∼ P, a_1', a_2'∼π_e[d(s_1', a_1'; s_2', a_2') - d'(s_1', a_1'; s_2', a_2')] ||_∞= |γ| · ||_s_1',s_2'∼ P, a_1', a_2'∼π_e[d(s_1', a_1'; s_2', a_2') - d'(s_1', a_1'; s_2', a_2')] ||_∞≤γmax_s_1',a_1', s_2',a_2'|d(s_1', a_1'; s_2', a_2') - d'(s_1', a_1'; s_2', a_2')]| = γ ||d - d'||_∞*Since ℱ^π_e is a contraction mapping and that ℝ^𝒳×𝒳 is complete under the L^∞ norm, by Banach's fixed-point theorem, lim_t→∞ℱ_t^π_e(d) = d_π_e. *To prove that d_π_e is a diffuse metric, we need to show it has the following properties for (s_1,a_1), (s_2, a_2), (s_3, a_3) ∈𝒳. We follow <cit.>'s strategy (see Proposition 4.10) to prove that a distance function is a diffuse metric. Recall that d_(s_1, a_1;s_2, a_2) := |r(s_1,a_1)-r(s_2,a_2)| + γ_s_1',s_2'∼ P, a_1', a_2'∼π_e[d_(s_1', a_1'; s_2', a_2')]. * Non-negativity i.e. d_π_e(s_1,a_1;s_2,a_2)≥ 0. 
Since |r(s_1,a_1)-r(s_2,a_2)|≥ 0, recursively rolling out the definition of d_ means that d_π_e(s_1,a_1;s_2,a_2) is a sum of discounted non-negative terms.* Symmetry i.e. d_π_e(s_1,a_1;s_2,a_2) = d_π_e(s_2,a_2;s_1,a_1). Since |r(s_1,a_1)-r(s_2,a_2)| = |r(s_2,a_2)-r(s_1,a_1)|, unrolling d_π_e(s_1,a_1;s_2,a_2) and d_π_e(s_2,a_2;s_1,a_1) recursively results in the discounted sum of the same terms.* Triangle inequality i.e. d_π_e(s_1,a_1;s_2,a_2)≤ d_π_e(s_1,a_1;s_3,a_3) + d_π_e(s_2,a_2;s_3,a_3). To show this fact, we will first consider an initialization to the distance function d_0(s_1,a_1; s_2,a_2) = 0, ∀ (s_1,a_1), (s_2,a_2)∈𝒳 and consider repeated applications of the operator ℱ^π_e to d_0, which we know will make d_0 converge to d_π_e (Proposition <ref>). We will show by induction that each successive update d_t+1 = ℱ^π_e(d_t) satisfies the triangle inequality, which implies that d_π_e satisfies the triangle inequality.We have the base the case at t=0 trivially holding true due to the initialization of d_0. Now let the inductive hypothesis be true for all t > 1 i.e. d_t(s_1, a_1; s_2, a_2) ≤ d_t(s_1, a_1; s_3, a_3) + d_t(s_3, a_3; s_2, a_2) for any (s_1,a_1), (s_2, a_2), (s_3, a_3) ∈𝒳. However, we know that:d_t+1(s_1, a_1; s_2, a_2)= |r(s_1, a_1) - r(s_2, a_2)| + γ_s_1',s_2'∼ P, a_1', a_2'∼π_e[d_t(s_1', a_1'; s_2', a_2')](a)= |r(s_1, a_1) - r(s_2, a_2)| + r(s_3, a_3) - r(s_3, a_3) + γ_s_1',s_2'∼ P, a_1', a_2'∼π_e[d_t(s_1', a_1'; s_2', a_2')](b)≤ |r(s_1, a_1) - r(s_3, a_3)| + |r(s_2, a_2) - r(s_3, a_3)| + γ_s_1',s_2'∼ P, a_1', a_2'∼π_e[d_t(s_1', a_1'; s_2', a_2')](c)≤ |r(s_1, a_1) - r(s_3, a_3)| + |r(s_2, a_2) - r(s_3, a_3)| + γ_s_1',s_2',s_3'∼ P, a_1', a_2',a_3'∼π_e[d_t(s_1', a_1'; s_3', a_3') + d_t(s_3', a_3'; s_2', a_2')]= |r(s_1, a_1) - r(s_3, a_3)| + γ_s_1',s_3'∼ P, a_1',a_3'∼π_e[d_t(s_1', a_1'; s_3', a_3')] + |r(s_2, a_2) - r(s_3, a_3)| + γ_s_2',s_3'∼ P, a_2',a_3'∼π_e[d_t(s_3', a_3'; s_2', a_2')]= d_t+1(s_1, a_1; s_3, a_3) + d_t+1(s_2, a_2; s_3, a_3)d_t+1(s_1, a_1; s_2, a_2)≤d_t+1(s_1, a_1; s_3, a_3) + d_t+1(s_2, a_2; s_3, a_3)where (a) is due to adding and subtracting r(s_3, a_3), (b) is due to Jensen's inequality, (c) is due to application of the inductive hypothesis. Thus, the triangle inequality is satisfied for all t ≥ 0, and given that d_t+1→ d_π_e, we have that d_π_e also satisfies the triangle inequality.*To prove this fact, we follow <cit.> (see Proposition 4.8) and use a co-inductive argument <cit.>. We will show that if |q^π_e(s_1,a_1) - q^π_e(s_2,a_2)|≤ d(s_1,a_1,;s_2,a_2) holds true for some specific symmetric d∈ℝ^𝒳×𝒳, then the statement also holds true for ℱ^π_e(d), which means it will hold for d_π_e.We have that for any (s,a)∈𝒳, max_s,a-|r(s,a)|/1 - γ≤ q^π_e(s,a) ≤max_s,a|r(s,a)|/1 - γ. Thus, for any (s_1,a_1), (s_2,a_2)∈𝒳, we have that |q^π_e(s_1,a_1) - q^π_e(s_2,a_2)|≤ 2max_s,a|r(s,a)|/1 - γ. 
We can then assume that our specific symmetric d is the constant function d(s_1, a_1; s_2, a_2) = 2max_s,a|r(s,a)|/1 - γ, which satisfies our requirement that |q^π_e(s_1,a_1) - q^π_e(s_2,a_2)|≤ d(s_1,a_1,;s_2,a_2).Therefore, we have q^π_e(s_1, a_1) - q^π_e(s_2, a_2)= r(s_1,a_1)-r(s_2,a_2) + γ∑_s_1'∈∑_a_1'∈P(s_1'|s_1,a_1)π_e(a_1'|s_1')q^π_e(s_1',a_1') - γ∑_s_2'∈∑_a_2'∈P(s_2'|s_2,a_2)π_e(a_2'|s_2')q^π_e(s_2',a_2')≤ |r(s_1,a_1)-r(s_2,a_2)| + γ∑_s_1',s_2'∈∑_a_1',a_2'∈P(s_1'|s_1,a_1)π_e(a_1'|s_1')P(s_2'|s_2,a_2)π_e(a_2'|s_2')(q^π_e(s_1',a_1') - q^π_e(s_2',a_2'))(a)≤ |r(s_1,a_1)-r(s_2,a_2)| + γ∑_s_1',s_2'∈∑_a_1',a_2'∈P(s_1'|s_1,a_1)π_e(a_1'|s_1')P(s_2'|s_2,a_2)π_e(a_2'|s_2')d(s_1',a_1';s_2',a_2')= ℱ^π_e(d)(s_1, a_1;s_2,a_2)where (a) follows from the induction hypothesis. Similarly, by symmetry, we can show that q^π_e(s_2, a_2) - q^π_e(s_1, a_1) ≤ℱ^π_e(d)(s_1, a_1;s_2,a_2). Thus, we have it that |q^π_e(s_1,a_1) - q^π_e(s_2,a_2)|≤ d_π_e(s_1,a_1,;s_2,a_2).*The proof closely follows that of Lemma 8 of <cit.>, which is in turn based on Theorem 5.1 of <cit.>. The main difference between their theorems and ours is that the former is based on state representations and the latter is based on optimal state-value functions, while ours is focused on state-action representations for . We first remark that this new aggregated MDP, ℳ, can be viewed as a Markov reward process (MRP) where the "states" are aggregated state-action pairs of the original MDP, ℳ. We now define the reward function and transition dynamics of the clustered MRP ℳ, where |ϕ(x)| is the size of the cluster ϕ(x). Note thatdenotes the probability of the event.r̃(ϕ(x)) = 1/|ϕ(x)|∑_y∈ϕ(x)r(y) P(ϕ(x') | ϕ(x)) = 1/|ϕ(x)|∑_y∈ϕ(x)(ϕ(x')| y)Then we have: | q^(x) - q̃^(ϕ(x))|= | r(x) - r̃(ϕ(x)) + γ∑_x'∈P(x'|x)q^(x') - γ∑_ϕ(x')∈P(ϕ(x')|ϕ(x))q̃^(ϕ(x'))|(a)=| r(x) - 1/|ϕ(x)|∑_y∈ϕ(x)r(y)+ γ∑_x'∈P(x'|x)q^(x') - γ1/|ϕ(x)|∑_ϕ(x')∈∑_y∈ϕ(x)(ϕ(x')| y)q̃^(ϕ(x'))|(b)=1/|ϕ(x)|| |ϕ(x)|r(x) - ∑_y∈ϕ(x)r(y)+ γ |ϕ(x)|∑_x'∈P(x'|x)q^(x') - γ∑_ϕ(x')∈∑_y∈ϕ(x)(ϕ(x')| y)q̃^(ϕ(x'))|(c)=1/|ϕ(x)||∑_y∈ϕ(x)(r(x) - r(y))+ ∑_y∈ϕ(x)(γ∑_x'∈P(x'|x)q^(x') - γ∑_ϕ(x')∈(ϕ(x')| y)q̃^(ϕ(x')))|(d.1)≤1/|ϕ(x)|∑_y∈ϕ(x)(| r(x) - r(y)| + γ|∑_x'∈P(x'|x)q^(x') - ∑_ϕ(x')∈(ϕ(x')|y)q̃^(ϕ(x'))|)(d.2)=1/|ϕ(x)|∑_y∈ϕ(x)(| r(x) - r(y)| + γ|∑_x'∈P(x'|x)q^(x') - ∑_ϕ(x')∈∑_z∈ϕ(x')P(z|y)q̃^(ϕ(x'))|)(d.3)=1/|ϕ(x)|∑_y∈ϕ(x)(| r(x) - r(y)| + γ|∑_x'∈P(x'|x)q^(x') - ∑_x'∈P(x'|y)q̃^(ϕ(x'))|)(e)≤1/|ϕ(x)|∑_y∈ϕ(x)(| r(x) - r(y)| + γ|∑_x'∈(P(x'|x)q^(x') - P(x'|y)q̃^(ϕ(x')))|)(f)≤1/|ϕ(x)|∑_y∈ϕ(x)(| r(x) - r(y)| + γ|∑_x'∈(P(x'|x)q^(x') - P(x'|y)q^(x'))|) + γ/|ϕ(x)|∑_y∈ϕ(x)(|∑_x'∈P(x'|y)(q^(x') - q̃^(ϕ(x')))|)(g)≤1/|ϕ(x)|∑_y∈ϕ(x)(| r(x) - r(y)| + γ|∑_x'∈(P(x'|x)- P(x'|y))q^(x')| + γ‖ q - q‖_∞)(h)=1/|ϕ(x)|∑_y∈ϕ(x)(| r(x) - r(y)| + γ|𝔼_x'∼ P(·|x)[q^(x')] - 𝔼_x'∼ P(·|y)[q^(x')]| + γ‖ q - q‖_∞) where (a) is due to the definition ofand P, (b) is due to multiplying and dividing by |ϕ(x)|, (c) is due to re-arranging terms, (d.1) is due to Jensen's inequality, (d.2 and d.3) are disaggregating the sums over clustered state-actions into sums over original state-actions by expanding (ϕ(x')|y) = ∑_x∈ϕ(x')P(x|y) for each clustered state-action, ϕ(x'), (e) is grouping the terms, (f) is by adding and subtracting 1/|ϕ(x)|∑_y∈ϕ(x)P(x'|y)q^(x'), (g) is since the infinity norm of the difference of the action-values is greater than the expected difference, (h) is re-writing the expression in terms of expectations. From Theorem <ref> we know q^π_e is 1-Lipschitz with respect to the distance function d_π_e. 
Notice that (h) contains the dual formulation of the Wasserstein distance where f = q^ (see Definition <ref>). We can then re-write (h) in terms of original definition of the Wasserstein distance:| q^(x) - q̃^(ϕ(x))|≤1/|ϕ(x)|∑_y∈ϕ(x)(| r(x) - r(y)| + γ W(d_π_e)(P(·|x), P(·|y)) + γ‖ q - q‖_∞)(i)≤1/|ϕ(x)|∑_y∈ϕ(x)(| r(x) - r(y)| + γ D_LK(d_π_e)(x', y') + γ‖ q - q‖_∞)(j)=1/|ϕ(x)|∑_y∈ϕ(x)(| r(x) - r(y)| + γ_x'∼^π_e, y'∼^π_e[d_π_e(x', y')] + γ‖ q - q‖_∞)(k)=1/|ϕ(x)|∑_y∈ϕ(x)(d_(x, y) + γ‖ q - q‖_∞)(l)≤ 2ϵ + γ‖ q - q‖_∞| q^(x) - q̃^(ϕ(x))|(m)≤2ϵ/1 - γ, ∀ x∈where (i) is due the fact that the Łukaszyk–Karmowski, D_LK, upper bounds the Wasserstein distance, (j) is using Definition <ref>, (k) is due to the definition of d_, and (l) is due the fact that the maximum distance between any two x, y ∈ϕ(x) is at most 2ϵ, which is greater than the average distance between any one point to every other point in the cluster, and (m) is due to ‖ q - q‖_∞≤2ϵ/1 - γ. *From Lemma <ref> we have that | q^(s_0,a_0) - q^(ϕ(s_0,a_0))| ≤2ϵ/(1 - γ).| _s_0,a_0∼[q^(s_0,a_0)] - _s_0,a_0∼[q^(ϕ(s_0,a_0))]|= |_s_0,a_0∼[q^(s_0,a_0) - q^(ϕ(s_0,a_0))]| (a)≤_s_0,a_0∼[|q^(s_0,a_0) - q^(ϕ(s_0,a_0))|] (b)≤_s_0,a_0∼(2ϵ/1 - γ) = 2ϵ/(1 - γ),where (a) follows from Jensen's inequality and (b) follows from Lemma <ref>. § ROPE PSEUDO-CODE§ EMPIRICAL RESULTSWe now include additional experiments that were deferred from the main text. §.§ Gridworld VisualizationsIn Section <ref>, we visualize how rope and on-policy mico group state-actions pairs. We now consider two additional metrics that group state-action pairs: * Policy similarity metric <cit.>: d_PSM(s_1, a_1; s_2, a_2) := |(a_1|s_1) - (a_2|s_2)| + γ_a_1',a_2'∼[d_PSM((s_1', a_1'), (s_2', a_2'))]. This metric measures short- and long-term similarity based on howacts in different states, not in terms of the rewards and returns it receives.* Random policy similarity metric <cit.>: d_RAND(s_1, a_1; s_2, a_2) := |r(s_1,a_1)-r(s_2,a_2)| + γ_a' ∼𝒰()[d_RAND((s_1', a'), (s_2', a'))]. Similar to d_, but considers behavior of a random policy that samples actions uniformly.From Figure <ref>, we reach the same conclusion as we did in Section <ref>: that existing state-action similarity metrics are unsuitable for learning q^ due to how they group state-action pairs. §.§ Deep OPE Experiments We now present additional details on our empirical setup and additional experiments.§.§.§ Additional Empirical Setup Details Before applying any of the algorithms, we normalize the states of the dataset to make the each feature dimension have 0 mean and 1 standard deviation. FQE Training Details In all experiments and all datasets, we use a neural network as fqe's action-value function with 2 layers and 256 neurons using relu activation function. We use mini-batch gradient descent to train the fqe network with mini-batch sizes of 512 and for 300K gradient steps. We use the Adam optimizer with learning rate 1e^-5 and weight decay 1e^-2. fqe minimizes the Huber loss. The only changes for fqe-deep are that it uses a neural network size of 4 layers with 256 neurons and trains for 500K gradient steps. Preliminary results with lower learning rates such as 5e^-6 and 1e^-6 did not make a difference. fqe uses an exponentially-moving average target network with τ = 0.005 updated every epoch. ROPE and BCRL Details In all experiments and datasets, we use a neural network as the state-action encoder for rope with 2 layers and 256 neurons with the relu activation. 
We use mini-batch gradient descent to train the the encoder network with mini-batch sizes of 512 and for 300K gradient steps. For rope and bcrl, we hyperparameter sweep the output dimension of the encoder. Additionally, for rope, we sweep over the angular distance scalar, β.For the output dimension, we sweep over dimensions: {|X|/3,|X|/2, |X|}, where |X| is the dimension of the original state-action space of the environment. For β, we sweep over {0.1, 1, 10}.The best performing hyperparameter set is the one that results in lowest rmae (from ρ()) at the end of fqe training. rope uses an exponentially-moving average target network with τ = 0.005 updated every epoch. Finally, the output of rope's encoder is fed through a LayerNorm <cit.> layer, followed by a tanh layer. rope minimizes the Huber loss.When computing d^≈d̃_ω rope uses the same procedure as mico (appendix C.2. of <cit.>):d̃_ω(s_1, a_1; s_2, a_2) ||ϕ_ω(s_1,a_1)||_2^2 + ||ϕ_ω̅(s_2,a_2)||_2^2/2 + βθ(ϕ_ω(s_1, a_1), ϕ_ω̅(s_2, a_2))where it applies the target network parameters, ω̅, on the (s_2, a_2) pair for stability. For the angular distance θ(ϕ_ω(s_1, a_1), ϕ_ω(s_2, a_2)), we have the cosine-similarity and the angle as below. Note in practice, for numerical stability, a small constant (e.g. 1e^-6 or 5e^-5) may have to be added when computing the square-root.CS(ϕ_ω(s_1, a_1), ϕ_ω(s_2, a_2))= ⟨ϕ_ω(s_1, a_1),ϕ_ω(s_2, a_2)⟩/||ϕ_ω(s_1,a_1)|| ||ϕ_ω(s_2,a_2)|| θ(ϕ_ω(s_1, a_1), ϕ_ω(s_2, a_2))= arctan2(√(1 - CS(ϕ_ω(s_1, a_1), ϕ_ω(s_2, a_2))^2), CS(ϕ_ω(s_1, a_1), ϕ_ω(s_2, a_2)))Custom DatasetsWe generate the datasets by training policies in the environment using sac <cit.> and take the final policy at the end of training asand we use an earlier policy with lower performance as the behavior policy. The expected discounted return of the policies and datasets for each domain is given in Table <ref> (γ = 0.99). The values for the evaluation and behavior policies were computed by running each for 300 rollout trajectories, which was more than a sufficient amount for the estimate to converge, and averaging the discounted return (note that <cit.> use 200 rollout trajectories).D4RL Datasets Due to known discrepancy issues between newer environments of gym[<https://github.com/Farama-Foundation/D4RL/tree/master>], we generat our datasets instead of using the publicly available ones. To generate the datasets, we use the publicly available policies [<https://github.com/google-research/deep_ope>]. For each domain, the expert and evaluation policy was the 10th (last policy) from training. The medium and behavior policy was the 5th policy. We added a noise of 0.1 to the policies.§.§.§ FQE Training Iteration Curves for D4RL Datasets In this section, we include the remaining fqe training iteration curves (ope error vs. gradient steps) for the d4rl dataset (Figure <ref>). We can see thatfqe diverges in multiple settings while rope is very stable. While fqe-clip does not diverge, it is still highly unstable.§.§.§ Ablation: ROPE Hyperparameter Sensitivity Similar to the results in Section <ref>, we show rope's hyperparameter sensitivity on all the custom and d4rl datasets. In general, we find that rope is robust to hyperparameter tuning, and it produces more data-efficient ope estimates than fqe for a wide variety of its hyperparameters. See Figures <ref> to <ref>.Note that in the bar graphs, we limit the vertical axis to 1. 
In the Hopper and Walker d4rl experiments, fqe diverged and had an error significantly larger than 1.

§.§.§ Ablation: RMAE Distributions

In this section, we show the remaining rmae distribution curves <cit.> of each algorithm on all datasets. We reach a similar conclusion: on very difficult datasets, rope significantly mitigates the divergence of fqe, and to avoid fqe divergence it is necessary to clip the bootstrapping target. See Figures <ref> to <ref>.

§.§.§ Training Loss Curves for ROPE and FQE

In this section, we include the training loss curves for rope's training, fqe's training using rope representations as input, and normal fqe and fqe-clip. The training curves are a function of the algorithms' hyperparameters (learning rate for fqe; β and representation output dimension for rope). We can see that on difficult datasets, the loss of fqe diverges. On the other hand, with rope, fqe's divergence is significantly mitigated. Note that rope does not eliminate the divergence. See Figures <ref> to <ref>.

§.§.§ Understanding the ROPE Representations

In this section, we try to understand the nature of the rope representations. We do so by plotting the mean of: 1) the per-dimension mean of the features and 2) the per-dimension standard deviation of the features. For example, if there are N state-action pairs, each with dimension D, we compute the mean and standard deviation of each of the D dimensions across the N examples, and then compute the mean along the D dimensions. If the standard deviation value is close to 0, it indicates that there may be a representation collapse. See Figure <ref>. (A minimal sketch of this diagnostic is given after the hardware details below.)

§.§ Hardware For Experiments

For all experiments, we used the following compute infrastructure:
* Distributed cluster on HTCondor framework
* Intel(R) Xeon(R) CPU E5-2470 0 @ 2.30GHz
* RAM: 7GB
* Disk space: 4GB
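Returning to the representation analysis above, the collapse diagnostic can be sketched as follows (an illustrative implementation; the function name is ours):

import numpy as np

def collapse_diagnostic(features):
    # features: array of shape (N, D) holding the embeddings of N
    # state-action pairs. Returns the mean over the D per-dimension means
    # and the mean over the D per-dimension standard deviations; a
    # near-zero value of the latter suggests representation collapse.
    per_dim_mean = features.mean(axis=0)
    per_dim_std = features.std(axis=0)
    return per_dim_mean.mean(), per_dim_std.mean()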
http://arxiv.org/abs/2310.18409v1
{ "authors": [ "Brahma S. Pavse", "Josiah P. Hanna" ], "categories": [ "cs.LG" ], "primary_category": "cs.LG", "published": "20231027180057", "title": "State-Action Similarity-Based Representations for Off-Policy Evaluation" }
[email protected] Ruhr-University Bochum, ICAMS, Universitaetsstrasse 150, 44801 Bochum, Germany

The quantum-phase-field concept of matter is revisited with special emphasis on the introverted view of space. Extroverted space surrounds physical objects, while introverted space lies in between physical objects. Space between objects leads to a network structure of matter: a network in which one-dimensional spaces connect individual particles.

Is multidimensional space an illusion?

Ingo Steinbach

January 14, 2024
======================================

§ INTRODUCTION

`Space' is one of the first perceptions a child makes after being born: space separates the child from its mother. It is reported that babies first see the world upside down because of the optics of our eyes. The image, generated by the brain from individual signals of the visual nerves, is understood as a field in a two-dimensional (2D) vector space. Since we have two eyes, our brain can add the third dimension. These empirical observations over thousands of years have been formulated in the mathematical language of physics as the concept of fields in a continuous multidimensional vector space. There is no question about the practical success of this approach, which culminates in Newton's mechanics and Einstein's general relativity.

But is `space,' in which we are living, a multidimensional continuum on a fundamental level? Is it necessary to describe space as a vector space? What problems arise if we do so? What alternative descriptions are possible?

There are several approaches to the understanding of general physics that are based on discrete spaces, string theory, loop gravity, or similar concepts. Typically, they embed discrete objects (objects with a reduced dimensionality) such as strings or branes into a high-dimensional continuous space-time. An infinite number of these discrete objects, so-called `quantum oscillators,' are attached to each region in multidimensional space.

My concept of a discrete space is different: it develops a network of masses and spaces that are formed by quantum phase fields. Herein, I shall:
* illustrate the introverted and extroverted views of space;
* define the network structure of masses and spaces;
* revisit the quantum-phase-field theory with emphasis on introverted spaces.

§ INTRO- VERSUS EXTROVERTED VIEW OF SPACE

Let us start with the extroverted view of space; this is the common view in traditional physics, including quantum physics and general relativity. Extroverted space is something that exists with or without massive particles—leptons and quarks—embedded into it. Within the extroverted view, particles are placed into space. In this view, space is described mathematically as a real vector space of n dimensions, where n ≥ 3. Empty space, i.e., space in which there are no particles, may not be seen as fully empty because there are quantum fluctuations. There are concepts that consider space to be related to some `substance,' called ether, and this may be related to these quantum fluctuations.

Space is generally seen to be filled by `fields': real or complex, scalar or vector valued, classical or quantum. These fields have a characteristic value at each point in space, and their best-known example is the electric field. Since extroverted space is formulated as a real and continuous vector space, classical fields have a measurable value `locally' at each point in this vector space.
In quantum mechanics, one has to also consider the gradients of the fields, which makes the description `non-local.' In general relativity, one has to consider the invariance of the speed of light to connect three-dimensional (3D) real space with time, forming a 3D manifold in four-dimensional space-time. The fields transfer attractive or repulsive action between particles, and gravitation is not seen as an attractive force in general relativity, but as a consequence of the coupling between mass and the curvature of space-time.I compare this to a billiard game (although in this analogy, action is only transferred when balls collide, but the indentation of the ball into the cloth may be of importance). The green of the billiard table represents space. The particles—billiard balls—are placed in this space (see Figure <ref>, left). They interact according to Newton's laws of momentum and energy conservation, or their relativistic extensions. In a real billiard game, the player uses the cushions to mirror space. We may, hypothetically, push the cushions to infinity; then, our playground will be infinitely large. We may use periodic boundary conditions, or we might generalize flat space to Riemann geometries of different topology. The basic principle of the extroverted view of space stays the same: space exists with or without particles. Particles are placed into space, and space `surrounds' these particles. I thus call this the extroverted view.The introverted view is different; here, space, if you will, separates and connects particles (Figure <ref>, right). I call this view `introverted' because it considers that space lies between particles. The space inside a building, which is mostly called `room' in this context, is surrounded by walls and floors. Walls and floors are 2D objects in a 3D world; these objects do not exist as elementary entities in physics. Particles—leptons or quarks—are zero-dimensional, meaningthey are so small that no extension in any direction can be attributed to them, they are point like. Therefore, introverted space lying in-between the particles has to be one-dimensional (1D): there is no other choice. Particles may then be represented as the vertices, or nodes, of a network structure. The connections—the edges of the network—can be seen as `spaces,' and they are each defined by the distance between the two particles that they connect.Here, we must take a moment to reflect: what does `distance' mean? It is first of all a scalar value that is assigned to the relation between exactly two objects. The distance is small if the objects are close to each other, and it is large if they are not close. Considering `not close' in contrast to `close,' let us say that they have less to do with each other, that they interact less. We may measure the distance by a length in [m] or by a time to transfer action in [s]. We may also measure it according to the strength of interaction: by the binding energy between two particles in [J]. We will define distance between two particles by an energy![Particles are usually associated by a positive energy of their rest mass. Distance, space between particles, usually is not considered as an energetic state, although Newtons physics clearly draws this connection. I show that it has a negative energy, the energy of space. See section <ref>] The quantum-phase-field concept, as reviewed in this essay, describes mass and space as a network of energetic states. 
It is published in <cit.>, and a common version can be found in Chapter 8 of <cit.>.

§ THE NETWORK OF MASSES AND SPACES

Let us start with a formal definition of the network of particles and their connecting spaces. The network is defined by nodes (particles) U_i, i = 1 … n, and edges (spaces) E_I, I = 1 … N. For a fully connected network, we have N = n(n-1)/2, but we will also consider partially connected networks with N < n(n-1)/2. We postulate the following topology as the simplest form consistent with the above principles:
* One E-element E_I ≡ E_km connects two U-elements U_k and U_m, k, m ∈ 1 … n.
* One U-element U_i connects a number of 2 ≤ M ≤ N E-elements E_K, K ∈ 1 … N.
Clearly, this leads to a network as depicted in Figure <ref>, right, where the U-elements are nodes and the E-elements are edges. Note that one needs a minimum of two elements of each type to form a primitive network (a ring of two edges connected by two nodes), which can be solved analytically. We will, however, mostly speak about many nodes and edges. Without loss of generality, we associate the U-elements with positive energy, the massive energy of particles, and the E-elements with negative energy, the energy of space.

In the next chapter I will connect `edges' and `nodes' to `quantum phase fields'.

§ FORMAL DEFINITION OF QUANTUM PHASE FIELDS

The basic object in the quantum-phase-field concept is a `phase.' The phase distinguishes a piece of matter from other pieces in a different phase state; i.e., there is at least one attribute of the piece of matter in one particular phase state that distinguishes it from other pieces in a different phase state. In quantum physics we call the phase a `quantum state' and each state can be occupied only once. In condensed matter physics, the phase characterizes atomic order, or its absence: solid, liquid, gas, plasma, etc.; this also applies to magnetic order and so on. Each phase is distinguished from another phase by a so-called `order parameter,' which is normalized to 0 ≤ ϕ ≤ 1.

We will allow many phases, ϕ_I, I = 1 … N, and each phase is connected to all other phases ϕ_K, K ≠ I:

∑_I ϕ_I = 1.

This is a system without outer boundaries; the phases ϕ_I constitute a closed system, forming a `universe'. We do this in analogy to the multi-phase-field theory in condensed matter physics <cit.>. Here, ϕ_I is simply termed `phase' because so far, no space has been defined. Each phase ϕ_I is an element of the system that is different from all other elements ϕ_K, but there are connections between the phases in situations where at least two phases have a value ϕ_I < 1 due to the constraint (<ref>). In fact, we will identify the phases ϕ_I with the edges of the network E_I.

The task now is to define the nodes U_i. If the phase forms an edge, it must connect exactly two nodes. However, one node may connect many edges (if not all edges, as in a fully connected network). Later, when `space' is introduced, we will simply denote all points in space where at least two phases are connected (with the condition ϕ_I < 1, ϕ_K = 1 - ϕ_I) as `nodes.'

To give the phases a physical meaning, I associate them with the conserved quantity `energy' H = ⟨ψ| Ĥ |ψ⟩. Here, |ψ⟩ is the quantum state of the system and Ĥ is the energy operator. Furthermore, I postulate that H = 0, i.e., there is no net energy: all energetic states, positive and negative, have to sum to zero.
The argument for this is simple: there is no evidence regarding where a finite energy of the universe should come from (see also the Wheeler–DeWitt theory <cit.>).I allow changes dĤ such the zero-energy state—the state of `nothing'—separates into positive and negative energetic elements—the state of `something.' This can be related to the Big Bang as the origin of our universe, if you will. We expand Ĥ in the changes dĤ with respect to the phases ϕ_I. Ĥ is thus itself a function of all phases {ϕ_I}, Ĥ = Ĥ({ϕ_I}), and, as a reminder, all fields are connected by the sum constraint (<ref>): Ĥ = ∑_I ∫_0^1 dϕ_I ∂Ĥ({ϕ_I})/∂ϕ_I.The integral runs over the definition range of the phase as an order parameter from 0 to 1, meaning yes or no, existing or not-existing. We allow the fields to vary between these bounds; i.e., they are diffuse, as is usual in phase-field theory.Introducing a length coordinate s_I (corresponding to an edge of the network), substituting dϕ_I = ∂ϕ_I∂ s_Ids_I, and introducing the forces ĥ_I = dĤds_I yields Ĥ = ∑_I=1^N ∫_-∞^∞ds_I ∂ϕ_I/∂ s_I∂Ĥ({ϕ_I})/∂ϕ_I= ∑_I=1^N ∫_-∞^∞ds_I ĥ_I({ϕ_I}). Space emerges; i.e., it is created by variations in the phases dϕ_I. We relate the line coordinate s_I to the distance Ω_I, which is defined by the negative inverse of the energy of an edge E_I (for the self-consistent proof see Section <ref>) by the integral ∫_-∞^+∞ds_I ϕ_I(s_I) = Ω_I = - α̃hc/48 E_I,where α̃ is a dimensionless parameter to be defined, h is Planck's quantum and c speed of light.It is important to note that the space s_I is not `fundamental,' but it is defined by the fundamental entity `energy' as a real number. It is an auxiliary coordinate to link the concept to the view of physics rooted in wave mechanics. From now on, we will consider the phase ϕ_I as a field ϕ_I(s_I) in the line coordinate s_I that is intrinsic to this field.The force operator ĥ is expanded in the phases and in their gradients:ĥ_I = u (η^2 [ ( ∂/∂ s_Iϕ_I)^2 - 1/c^2(∂/∂ tϕ_I)^2] +P_I({Φ_J} ),p_I =(γ∑_J=1, J I^Nϕ_I^2 ϕ_J^2 + γ̃∑_J,K,l=1, I JKL ^Nϕ_I ϕ_Jϕ_kϕ_l ).Here, u is a positive constant with dimensions of energy per length, or force [J/m], and η is a positive length to be determined at the end of this section [Eq. (<ref>)]. The gradient contribution in time is included with a velocity constant c, which ensures relativistic invariance due to Lorentz contraction of the length η (see <cit.> for details). Furthermore, p_I({Φ_J}) is the Landau potential for phase I expanded in all connected phases J. In contrast to previous publications <cit.>, we employ a ϕ^4 potential in the current version of the theory, with force parameters γ and γ̃.The second potential term connecting the four different phases in Eq. (<ref>) has an intriguing consequence: this is the only surviving potential term in the limit N →∞. It states that there is a minimum of four phases to be connected, with ϕ_Iϕ_Jϕ_Kϕ_L, which forms the minimum requirement for a space-filling body in three dimensions! All terms with lower numbers of connections vanish in the limit N →∞. To see this, we investigate the center of a node at which all connecting phases are equal ϕ_I=ϕ_J =1 N. In equilibrium, this center has the highest energy. The first term in the expansion of the Landau potential (<ref>) is the sum over pairs ϕ_I^2 ϕ_J^2 ≈ 1N^4. Since there are N2 contributions, and N2→ 12 N^2 for N→∞, this term vanishes for large networks. I retain this here for the analytical solvability of one edge between two nodes. 
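A minimal numerical sketch of this scaling argument (an illustration added here, assuming equal phases ϕ_I = 1/N at the node centre; not part of the original derivation) is:

import numpy as np
from math import comb

# At the centre of a node all N connected phases are equal, phi_I = 1/N, so the
# pair potential sum_{I<J} phi_I^2 phi_J^2 has comb(N, 2) terms of size (1/N)^4
# and vanishes as N grows, roughly like 1/(2 N^2).
for N in (4, 16, 64, 256):
    phi = np.full(N, 1.0 / N)
    pair_sum = sum(phi[i] ** 2 * phi[j] ** 2
                   for i in range(N) for j in range(i + 1, N))
    print(N, pair_sum, comb(N, 2) / N ** 4)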
The second term in the potential (<ref>), ϕ_I ϕ_Jϕ_kϕ_l, remains of order unity since N4→ 124 N^4 for N →∞. Higher-order terms, ϕ^6, ϕ^8, and so on, may be added with the same argument, as long as all phases are independent and we sum over all combinations.[Using the same argument, in previous versions of the theory <cit.>, we restricted ourselves to a pair of phases and a quadratic potential. Here, however, a non-analytical treatment of the non-linearity of the phase-field equation is needed, see <cit.>.] The gradient contributions of the energy operator (<ref>), ∂∂ s and 1c ∂∂ t, shall be understood as operators acting either on the phase or on the quantum mechanical wave function |ψ⟩. The kinetic equation for the evolution of phases is written down according to the Clausius–Duhem relation, with the time constant τ̃: τ̃∂/∂ tϕ_I = -δ/δϕ_I∫_0^+∞dt ⟨ψ| Ĥ|ψ⟩.This equation has two parts: (i) a non-linear wave equation for the phase fields ϕ_I, and (ii) a linear Schrödinger-type equation for quantum-mechanical excitations within the phases. This procedure is not new; it can be traced back to the so-called de Broglie–Bohm double-solution program <cit.>. For further explanation, see <cit.>.We now separate the expectation value of the energy operator (<ref>) into three different contributions. These are distinguished by whether the differential operators ∂/∂ s and ∂/∂ t are applied to the wave function |ψ⟩ or the field ϕ_I.Applying the differential operators to the phase components and using the normalization of the wave function ⟨ψ|ψ⟩=1 yields the force u_I[J/m] related to phase I: u_I = u {η^2[(∂/∂ s_Iϕ_I)^2 - 1/c^2(∂/∂ tϕ_I)^2] + p_I } . The mixed contribution, in which one of the operators ∂/∂ s and ∂/∂ t is applied to the field ϕ and one is applied to the wave function |ψ⟩, describes the correlation between the field and the wave function. This shall be set to 0 in the quasi-static limit. In this limit, we keep the field static for the evaluation of the quantum-mechanical force. Then, we take this force for the determination of the time evolution of the field. A coupled solution has not been worked out to date:0= ϕ_I uη^2 [ ∂ϕ_I/∂ s⟨ψ|∂/∂ s|ψ⟩ - 1/c^2∂ϕ_I /∂ t⟨ψ|∂/∂ t|ψ⟩].It is shown in <cit.> that Eq. (<ref>) is consistent with Newton's second law of acceleration. Finally, we apply the momentum operators ∂/∂ s and 1/c∂/∂ t to the wave function |ψ⟩, which yields the force e_I[J/m]:e_I =uη^2 ϕ_I^2 ⟨ψ|∂^2/∂ s^2-1/c^2∂^2/∂ t^2|ψ⟩. This contribution applies to the bulk energy of the phase field ϕ_I= 1. We will explicitly evaluate this after the structure of the solutions of the fields is discussed. One transforms the phase-field equation (<ref>) into the moving frame traveling with the velocity v. Inserting Eqs. (<ref>)–(<ref>) into (<ref>), we find: τ̃∂/∂ tϕ_I=-δ/δϕ_I∫_0^+∞dt ⟨ψ| Ĥ|ψ⟩=u [η^2 ∂^2ϕ_I/∂ s^2(1-v^2/c^2) - p'] +m_ϕ_IΔ e, where: p' is the variation of the Landau potential with respect to ϕ_I; Δ e = e_I - e_J is the difference in the volume force between two phases I and J according to Eq. (<ref>); and m_ϕ_I is the appropriate coupling function. The so-called doublon solution for two phases in a periodic setting is well known as the minimum solution of the classical part of the energy operator Ĥ [Eq. (<ref>)], as depicted in Figure <ref>: ϕ_I(s)= 1/2{tanh( 3 (s - s_1 -v t)/η_v) .- .tanh(3 (s - s_2 + v t)/η_v)},where the particles are located at s_1 and s_2 with distance Ω_12 = s_1-s_2. The phases transform into each other with velocity v ∝ Δ e. 
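As a minimal numerical sketch of the doublon profile given above (not from the original text; the particle positions s_1 = -4, s_2 = 4, the width η = 1 and the static case v = 0 are arbitrary illustrative choices):

import numpy as np

def doublon(s, s1, s2, eta):
    # Static doublon profile: one phase bounded by two solitonian fronts at s1 and s2.
    return 0.5 * (np.tanh(3 * (s - s1) / eta) - np.tanh(3 * (s - s2) / eta))

s = np.linspace(-10.0, 10.0, 2001)
phi1 = doublon(s, s1=-4.0, s2=4.0, eta=1.0)
phi2 = 1.0 - phi1          # the connected second phase
assert np.allclose(phi1 + phi2, 1.0)   # the sum constraint holds everywhere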
η_v = η√(1-v^2/c^2) shows Lorentz contraction of the quantum length for accelerated particles (for details, see <cit.>). In this picture, u is the force of inertia. One phase is bounded by two solitonian waves, one right-moving and one left-moving. We call this object a doublon. Finally, we relate the size of the transition region η to the parameter u and γ by the minimum solution of Eq. (<ref>) (the four-phase term proportional to γ̃ gives no contribution to the two-phase case): η = √(36 uγ). § VOLUME ENERGY OF ONE DOUBLONFrom the doublon solution in Section <ref>, we can see that the field forms a 1D box with fixed walls and size Ω_I for phase I, which proves (<ref>). We consider massless quantum fluctuations inside the box and find the explicit representation of the standing waves ψ_p in the length coordinate s, ψ_p = √(2/Ω)sin(π p/Ωs). The dispersion relation for massless particles is linear in momentum p̃= ph Ω, p ∈ℕ, instead of quadratic in p̃, as for the case of massive particles in a box potential. According to Casimir <cit.>, we have to compare quantum fluctuations in the box with discrete spectrum p and frequency ω_p= π c p 2Ω_I to a continuous spectrum. This yields the negative energy E_I of the space I:E_I = αh c/4Ω_I[ ∑_p=1^∞ p - ∫_1^∞ p dp ] = - αh c/48Ω_I,where α is a positive, dimensionless coupling coefficient. I have used the Euler–Maclaurin formula in the limit ϵ→ 0 after renormalization p → p e^-ϵ p. Since all parameters are positive, we see that `space' is accounted for by negative energy, scaling in inverse proportion to the size of the doublon, which proves (<ref>) for self-consistency. This energy scales like the energy of the gravitational field in Newtonian mechanics and general relativity. Therefore, the force—as the derivative of the energy (<ref>) with respect to space—can be associated with a gravitational attraction between the nodes, which is transmitted by the spaces. The nodes can thus be interpreted as massive objects: elementary particles. They are associated with positive energy U, while the bulk energy of one doublon/space is negative. In contrast to Newtonian mechanics and in agreement with general relativity (see `gravitational waves' <cit.>), the attraction is a wave phenomenon with time-dependent action. The coefficient α can be determined from the measured gravitational constant on Earth, as we will derive in the next section.§ CLOSED DOUBLON NETWORKFrom the doublon solution (<ref>), we see that the elements of the network—particles with energy U_i and spaces with energy E_I≡ E_ij—are both related to the phases, or if we relate the bulk energy of a phase to a space coordinate, we say that they are related to phase fields. Particles relate to the gradient contributions, while spaces relate to the bulk in between particles. Values of ϕ≡ 0 have no physical meaning. Since each doublon decays to zero on either side, it has to connect to at least one other doublon on each side due to the sum constraint (<ref>) within a finite transition region of size η_v. To get a handle on this size, as well as on the size of the bulk region Ω_ij, we need to evaluate the coupling constant α. We assume isotropy in the system and energy neutrality, with N being the number of particles connected to each single particle, dropping the subscript i. 
Since only 1/2 of each doublon counts for one particle, we find:0 = U + ∑_I=1^NE_I = U_i - 1/2αhc/48∑_I=1^N1/Ω_I.With the mass of one particle m = U/c^2 and the average size of the network as the homogeneous mean Ω̅= N[∑_I=1^N1/Ω_I]^-1, we get: α = 96 m c Ω̅/N h.The force between two particles along one connecting doublon is evaluated:f_I^micro =-(∂ E_I/∂Ω_I+ ∂ E_I/∂Ω̅∂Ω̅/∂Ω_I) =m c^2 Ω̅/ N1/Ω_I^2[ 1-Ω̅/ N Ω_I] =m^2 c^2 Ω̅/ M^u1/Ω_I^2[ 1-Ω̅/ N Ω_I]. The number of particles N can be estimated from the mass in the visible universe M^u compared to the mass of one particle m. For distances Ω_I < Ω̅N, we note repulsive action with a force scaling with 1 Ω_I^3. Thus, the singularity of collapse of an agglomeration of point-like particles into one point is forbidden. For Ω_I≫Ω̅N, Newton's law of gravity is recovered. Identifying the prefactor in Eq. (<ref>)c^2 Ω̅ M^u with the coefficient of gravitation G, one obtains: Ω̅ = G M^uc^2≈ 7.4 ×10^24m≈ 240Mpc,Ω̅ N= G mc^2≈ 3 ×10^-53m. I have used the numerical values G ≈ 6.67× 10^-11 [m^3/kg·s^2 ], c ≈ 3 × 10^8 [m/s ], M^u = 10^52[kg]<cit.>, and m = 4× 10^-26 [kg ] set to 1/4, the mass of a hydrogen atom, or a neutron and its neutrino, each consisting of four fermions. Repulsive gravitational action at the microscale is thus limited to distances Ω̅N below the Planck length. We have, however, to consider that this formal derivation considers the variation of a cosmological length Ω̅ with a very short length scale, e.g., between quarks inside a nucleon. From the formal definition of Ω̅ by the harmonic mean of all spaces Ω(I), which is dominated by short spaces, we see that Ω̅→ 0 for one single Ω(I) → 0, regardless of the large number of long and ultra-long spaces. The cutoff Ω̅N prevents this behavior, but on an unrealistically small scale, according to physical intuition. Therefore, the formal derivation of the micro force equation (<ref>) has not been presented in previous publications <cit.>.At `cosmological' distances, we shall treat Ω̅ and Ω_I as independent. Variation of the energy of space (<ref>) with respect to Ω̅ and Ω_I independently gives:f_I^macro = -(∂ E_I/∂Ω_I+ ∂ E_I/∂Ω̅) = m c^2 Ω̅/ NΩ_I^2[1-Ω_I/Ω̅]= G m^2/Ω_I^2[1-Ω_I/Ω̅].Structures beyond the marginal distance Ω̅ repel each other, leading to an accelerating expansion. We further see from the generalized gravitational law (<ref>) that in the limit Ω_I≫Ω̅, the force scales as f_I∝ -1 Ω_I instead of f_I∝ 1 (Ω_I)^2 for medium distances; i.e., repulsive gravitational action at ultra-long distances decays more slowly with distance than attractive gravitational action at medium distances. The consequences of this statement deserve further consideration in the future.§ CONCLUSION AND DISCUSSIONPhases ϕ_I and their variations dϕ_I are the principal elements of the quantum-phase-field concept, forming a monistic theory. The model defines a network of particles and spaces that are both determined by the phases and their variation. Particles are not placed in space, as in traditional theories, but rather they bound spaces! Spaces connect particles, as particles connect spaces. Particles and spaces are two sides of the same coin: the doublon. One space element connects exactly two particles; one particle is the connection between many spaces. This defines the network structure of the physical world. Both elements of the network—nodes and edges—are defined by energy, which is the only fundamental substance in the concept. Space is introduced as a negative inverse of the energy of one edge. 
It is shown that this ansatz leads to a quantum problem on the edges, the solution of which reproduces the negative energy associated with this space self-consistently. The quantum problem is defined as a linear Schrödinger-type equation. The equation of motion for the nodes is formulated as a classical non-linear wave equation derived from a Ginzburg–Landau-type Hamiltonian on the line coordinate of space. The non-linearity suppresses a spectrum of solutions, as is characteristic for linear Schrödinger-type wave solutions. This can also be compared with the theory of solitonian waves <cit.> and Goldstone modes in elementary particle physics <cit.>.The doublons are the minimum solution of the classical part of the quantum-phase-field equation (<ref>). The quantum part (<ref>) defines gravitational attraction or repulsion; the mixed part defines the equation of motion of particles under gravitational action.Energy is the only fundamental substance, and its net amount is zero, balancing the positive energy of particles with the negative energy of space. The problem then arises that a closed quantum system without energy has to be stationary according to the time-dependent Schrödinger equation (cf. the Wheeler DeWitt equation <cit.>):i h ∂ψ∂ t = Ĥψ = 0.In the present concept, the problem is separated into a linear quantum problem on the individual doublons and a non-linear classical wave problem for the solitonian fronts.Equation (<ref>) is linear in ψ, and time is seen as the observable conjugate to energy in the quantum-mechanical sense, and this part of the equation is reversible in time t.Ĥ itself is a non-linear function of the classical field variables ϕ=ϕ(t̃), and t̃ shall be called `thermodynamic time' as the observable conjugate to entropy. This distinction is introduced here to emphasize the different meaning of time in quantum mechanics (reversible in time) and thermodynamics, where time t̃ is not reversible according to the second law of thermodynamics. The thermodynamic time governs the dynamic evolution of the system according to the non-linear classical wave equation (<ref>) or (<ref>). The evolution of the phases ϕ then changes the spectrum of quantum fluctuations within the doublons; i.e., it determines the time dependence of the wave function ψ. Although up until now only the quasi-static solution has been worked out, a coupled solution should exist. We may formally introduce a complex time variable, where t̃ forms the real axis and i t the imaginary axis (see also the considerations in <cit.>). I leave further discussion to future work.Comparing the positive energy of mass and the negative energy of space leads to the prediction of a marginal distance Ω̅ beyond which gravitational action becomes repulsive. This distance compares well to the measured size of large voids in the universe <cit.>: massive objects at the rim of the voids repel each other such that they cannot enter the voids by gravitational forces.Here, another fundamental problem arises: if energy is conserved, according to Noether's theorem, the universe must be stationary. We may recall the concept of self-similar distributions in materials science, e.g., coarsening in a multi-grain structure <cit.>. Defining the relative length scale ω_r=ΩΩ̅, one may reformulate the theory in this relative coordinate system. 
In the classical concepts cited above, the system reaches a self-similar distribution in the relative coordinates that is stationary in time: the system has to become self-similar if the system parameters are time-independent. We may assume that such a condition also holds for the universe, but a rigorous proof has not been given to date.In the present treatment, space separates and connects particles. This defines the structure of the network of physical reality. Particles and spaces are two aspects of the doublons, the primitive object of the network; the doublon network of positive and negative states of energy defines the physical reality. It can be embedded into a 3D vector space or higher, but these spaces are not physical. In a multi-dimensional vector space, whether locally or globally defined, points are compact in any direction. In a network, points are compact only in one direction: the line coordinate s between nodes. Points in a multi-dimensional vector space that do not lie on edges or nodes of the network—i.e., which do not coincide with at least one doublon—are inaccessible: they have no physical reality. A last comment shall be given regarding `general relativity,' the best accepted theory of gravitation. General relativity is based on the premise that space is a multidimensional continuum which, in prevailing mathematical language, is formulated as a vector space. In this interpretation—the extroverted view of space—mass and space are treated as separate entities. Einstein's equations present a closure of the non-convex problem of space and mass: they connect masses via the deformation of space. The closure is based on Riemann curvature, which is basically derived from a network structure of discrete spaces. In the introverted view of space, where mass and space are part of the same entity (energy), the connection of mass and space emerges naturally. Energy is structured in the form of phases. Introducing space as a 1D line coordinate, one finds the doublon solution of phase expanded in this coordinate. The doublons are connected by the sum constraint (<ref>) to form a `universe.' Applying the gradient contribution of the Hamiltonian to the phases defines the positive energy of mass. Applying the gradient operators to the wave function and solving the quantum problem of acoustic excitation within the doublons defines the negative energy of the spaces. The theory defines gravitational attraction as a wave phenomenon. It is relativistic invariant and predicts repulsive gravitational action at ultra-long distances.If we examine the arguments favoring the introverted view of space against the extroverted view, we are driven to conclude that multidimensional continuum space does not exist: it is a construct of our brains—an illusion.10Note1 Particles are usually associated by a positive energy of their rest mass. Distance, space between particles, usually is not considered as an energetic state, although Newtons physics clearly draws this connection. I show that it has a negative energy, the energy of space. See section <ref>. Steinbach-QPF I. Steinbach, Quantum-phase-field concept of matter: Emergent gravity in the dynamic universe, Zeitschrift fur Naturforschung A72,(2017). Steinbach-Soliton J. Kundin and I. Steinbach, Quantum-phase-field: from the Broglie–Bohm double-solution program to doublon networks, Zeitschrift für Naturforschung75,(2020). Lectures I. Steinbach and H. Salama,Lectures on Phase Field (Springer, ADDRESS, 2023). Steinbach99b I. Steinbach and F. 
Pezzola, A generalized field method for multiphase transformations using interface fields, Physica D134,(1999). DeWitt1967 D. S. DeWitt, Quantum theory of gravity. I. The canonical theory, Physical Review160,1113(1967). Note2 Using the same argument, in previous versions of the theory <cit.>, we restricted ourselves to a pair of phases and a quadratic potential. Here, however, a non-analytical treatment of the non-linearity of the phase-field equation is needed, see <cit.>. Bohm1952a D. Bohm, A Suggested interpretation of the quantum theory in terms of “hidden” variables. I, Physical Review85,166(1952). Bohm1952b D. Bohm, A suggested interpretation of the quantum theory in terms of “hidden” variables. II, Physical Review85,180(1952). deBroglie1960 L. de Broglie,inNonlinear wave mechanics (PUBLISHER, ADDRESS, 1960), trans. A. J. Knodel, Elsevier (1960). deBroglie1971 L. de Broglie, L’interpretation de la mechanique ondulatoire par la theorie de la double solution, Proceedings of the International School of Physics Enrico Fermi49,346(1971). Casimir1948 H. Casimir, On the attraction between two perfectly conducting plates, Proceedings of the Koninklijke Nederlandse Akademie van Wetenschappen B51,793(1948). Einstein1918 A. Einstein, Über Gravitationswellen, Sitzungsberichte der Königlich Preussischen Akademie der Wissenschaften Berlin154(1918). Persinger2009 M. Persinger, A Simple Estimate for the Mass of the Universe: Dimensionless Parameter A and the Construct of "Pressure", J. of Physics Astrophysics and Physical Cosmology3,1(2009). Willox S. Colin, T. Durt, and R. Willox, de Broglie's double solution program: 90 years later, Annales de la Fondation Louis de Broglie. Goldstone1961 J. Goldstone, Field theories with Superconductor solutions, Il Nuovo Cimento19,154(1961). Goldstone1962 J. Goldstone, A. Salam, and S. Weinberg, Broken Symmetries, Phys. Rev. 127,965(1962). Chiatti C. Leonardo, Quantum Entities and the Nature of Time, Qeios, https://doi.org/10.32388/5UTZO4. Mueller2000 V. Müller, S. Arbabi-Bidgoli, J. Einasto, and D. Tucker, Voids in the Las Campanas Redshift Survey versus cold dark matter models, Mon. Not. R. Astron. Soc.318,280(2000). HILLERT1965 M. Hillert, On the theory of normal and abnormal grain growth, Acta Metallurgica13,227(1965). Darvishi2015 R. D. Kamachali, A. Abbondandolo, K. F. Sieburg, and I. Steinbach, Geometrical grounds of mean field solutions for normal grain growth, Acta Materialia 90,(2015). Steinbach2020 I. Steinbach, Erratum to: Quantum-phase-field concept of matter: Emergent gravity in the dynamic universe, Zeitschrift fur Naturforschung A89 (2020).
http://arxiv.org/abs/2310.18386v1
{ "authors": [ "Ingo Steinbach" ], "categories": [ "physics.gen-ph" ], "primary_category": "physics.gen-ph", "published": "20231027095204", "title": "Is multidimensional space an illusion?" }
Heuristics for Inequality minimization in PageRank values Subhajit Sahu January 14, 2024 ========================================================= Intelligent agents use internal world models to reason and make predictions about different courses of their actions at many scales <cit.>. Devising learning paradigms and architectures that allow machines to learn world models that operate at multiple levels of temporal abstractions while dealing with complex uncertainty predictions is a major technical hurdle <cit.>. In this work, we propose a probabilistic formalism to learn multi-time scale world models which we call the Multi Time Scale State Space (MTS3) model. Our model uses acomputationally efficient inference scheme on multiple time scales for highly accurate long-horizon predictions and uncertainty estimates over several seconds into the future. Our experiments, which focus on action conditional long horizon future predictions, show that MTS3 outperforms recent methods on several system identification benchmarks including complex simulated and real-world dynamical systems. Code is available at this repository: <https://github.com/ALRhub/MTS3>.§ INTRODUCTION World models attempt to learn a compact and expressive representation of the environment dynamics from observed data. These models can predict possible future world states as a function of an imagined action sequence and are a key ingredient of model-predictive control <cit.> and model-based reinforcement learning (RL). One important dimension of world models is the level of temporal granularity or the time scale at which the model operates.Existing literature on world models operates at a single level of temporal abstraction, typically at a fine-grained level such as milliseconds. One drawback of single-time scale world models is that they may not capture longer-term trends and patterns in the data <cit.>.For efficient long-horizon prediction and planning, the model needs to predict at multiple levels of temporal abstractions <cit.>.Intuitively, low-level temporal abstractions should contain precise details about the input so as to predict accurately in the short term, while high-level, abstract representations should simplify accurate long-term predictions. Both abstractions must also interrelate with each other at least in the sense that the higher-level predictions/plans can be turned into low-level moment-by-moment predictions.For example, in robotic manipulation, the robot must be able to perform precise and coordinated movements to grasp and manipulate the object at a fast time scale while at a slower time scale, the robot must also be able to recognize and utilize higher-level patterns and structures in the task, such as the shape, size and location of objects, and the overall goal of the manipulation task. Furthermore, temporal abstractions can capture relevant task structures across dynamical systems under non-stationary which can be used to identify the similarities and differences between tasks, allowing the robot to transfer knowledge learned from one task to another <cit.>.In this work, we attempt to come up with a principled probabilistic formalism for learning such multi-time scale world models as a hierarchical sequential latent variable model. We show that such models can better capture the complex, non-linear dynamics of a system more efficiently and robustly than models that learn on a single timescale. 
This is exemplified in several challenging simulated and real-world prediction tasks such as the D4RL dataset, a simulated mobile robot and real manipulators including data from heavy machinery excavators.§ PRELIMINARIESState space models (SSMs) are Bayesian probabilistic graphical models <cit.> that are popular for learning patterns and predicting behaviour in sequential data and dynamical systems. Formally, we define a state space model as a tuple ( 𝒵, 𝒜, 𝒪, f, h, Δ t), where 𝒵 is the state space, 𝒜 the action space and 𝒪 the observation space of the SSM. The parameter Δ t denotes the discretization time-step and f and h the dynamics and observation models respectively. We will consider the Gaussian state space model that is represented using the following equations[c] z_t= f(z_t-1,a_t-1) + ϵ_t, ϵ_t ∼𝒩(0, Σ_z), and[c]o_t= h(z_t) + v_t, v_t ∼𝒩(0, Σ_o).Here z_t ∈𝒵, a_t ∈𝒜 and o_t ∈𝒪 are the latent states, actions and observations at time t. The vectors ϵ_t and v_t denote zero-mean Gaussian noise.When f and h are linear/locally linear, inference can be performed efficiently via exact inference. There have been several works recently <cit.> where these closed-form solutions are coded as layers of a neural network in deep-state space model literature, i.e, the architecture of the network is informed by the structure of the probabilistic state estimator. Following this line of work, we propose a multi-time scale linear Gaussian state space model, whose inference can be performed via closed-form solutions. (Locally-)Linear Gaussian SSMs. To perform inference in SSMs, we follow <cit.>. They use a (locally-)linear dynamics model. Moreover, they replace the observations o with a latent observation w. This latent observation is obtained by an encoder w_ o_t = enc_w( o_t) along with the uncertainty of this observation, i.e., σ_o_t = enc_σ( o_t). Due to the non-linear observation encoder, a simplified linear observation model can now be used. Hence, the dynamics and observation models can be described asp( z_t+1| z_t,a_t) =𝒩( A_tz_t +c_t +B_ta_t,diag(σ_z)),andp( w_o_t| z_t) =𝒩( Hz_t,diag(σ_o_t)),where a simple observation matrix of H = [ I, 0] is used. The underlying assumption behind this observation model is that the latent state z_t = [ p_t^T,d_t^T]^T has twice the dimensionality of the latent observation w_t and only the first half of the latent state, i.e., p_t, can be observed. The second half of the latent state, i.e., d_t, serves as derivative or velocity units that can be used by the model to estimate the change of the observable part of the latent state.Factorized Inference in Linear Gaussian SSMs. Inference in the introduced linear Gaussian SSM is straightforward and can be performed using Kalman prediction and observation updates. However, these updates involve high dimensional matrix inversions that are expensive to evaluate and hard to backpropagate for end-to-end learning. Hence, <cit.> introduce a factorization of the belief p( z_t| o_1:t,a_1:t-1) = 𝒩(μ_t, Σ_t) such that only the diagonal and one off-diagonal vector of the covariance need to be computed, i.e.Σ_t = [[ Σ_t^u Σ^s_t; Σ^s_t Σ^l_t ]],with Σ_u = diag(σ_t^s),Σ_l = diag(σ_t^l)and Σ_s = diag(σ_t^s).Using this factorization assumption, closed-form Gaussian inference can be performed using only scalar divisions which are fast and easy to back-propagate. These factorization assumptions form the basis for the inference update in our MTS3 model.Bayesian Aggregation. 
To aggregate information from several observations into a consistent representation, <cit.> introduce Bayesian aggregation in the context of Meta-Learning. They again use an encoder to obtain a latent observation vector r_o_t and its uncertainty vector σ_o_t. Given the observation model p(r_o_t|z) = 𝒩(Hz, diag(σ_o_t)) with H = I and a prior p(z) = 𝒩(μ_0, diag(σ_0)), the posterior p(z|r_o_1:o_t) can again be effectively computed by Gaussian inference that involves only scalar inversions. Note that computing this posterior is a simplified case of the Kalman update rule used in Gaussian SSMs <cit.>, with no memory units, H = I and no dynamics. To increase efficiency, the update rule can be formulated in a batch manner for parallel processing instead of an incremental update <cit.>.

§ MULTI TIME SCALE STATE SPACE MODELS

[Figure: MTS3 captures slow-moving long-term trends as the latent task dynamics and the fast-moving short-term trends as the latent state dynamics.]

Our goal is to learn a principled sequential latent variable model that can model the dynamics of partially observable robotic systems under multiple levels of temporal abstractions. To do so, we introduce a new formalism, called Multi Time Scale State Space (MTS3) Model, with the following desiderata: i) It is capable of modelling dynamics at multiple time scales. ii) It allows for a single global model to be learned that can be shared across changing configurations of the environments. iii) It can give accurate long-term predictions and uncertainty estimates. iv) It is probabilistically principled yet scalable during learning and inference.

§.§ General Definition

An MTS3 model with 2 timescales is defined by two SSMs on a fast and a slow time scale respectively. Both SSMs are coupled via the latent state of the slow time scale SSM, which parametrizes/“reconfigures” the system dynamics of the fast time scale SSM. While the fast time scale SSM runs at the original time step Δt of the dynamical system, the slow time scale SSM is only updated every H steps, i.e., the slow time scale time step is given by HΔt. We will derive closed-form Gaussian inference for obtaining the beliefs for both time scales, resulting in variations of the Kalman update rule which are also fully differentiable and used to back-propagate the error signal <cit.>. The definition with a 2-level MTS3 along with the inference and learning schemes that we propose is directly extendable to an arbitrary number of temporal abstractions by introducing additional feudal <cit.> hierarchies with longer discretization steps and is further detailed in Section <ref>.

§.§.§ Fast time-scale SSM

The fast time-scale (fts) SSM is given by 𝒮_fast = (𝒵, 𝒜, 𝒪, f_l^fts, h^fts, Δt, ℒ). Here, l ∈ ℒ is a task descriptor that parametrizes the dynamics model of the SSM and is held constant for H steps. We will denote the task descriptor for the kth time window of H steps as l_k. The probabilistic dynamics and observation model of the fast time scale for the tth time step in the kth window can then be described as

p(z_k,t|z_k,t-1, a_k,t-1, l_k) = 𝒩(f^fts_l(z_k,t-1, a_k,t-1, l_k), Q), and
p(o_k,t|z_k,t) = 𝒩(h^fts(z_k,t), R).

Task-conditioned marginal transition model. Moreover, we have to consider the uncertainty in the task descriptor (which will, in the end, be estimated by the slow time scale model), i.e., instead of considering a single task descriptor l_k, we have to consider a distribution over task-descriptors p(l_k) for inference in the fts-SSM.
This distribution will be provided by the slow-time scale SSM for every time window k. We can further define the marginal task-conditioned transition model for time window k which is given by p_ l_k(z_k,t|z_k,t-1,a_k,t-1) = ∫ p(z_k,t|z_k,t-1,a_k,t-1,l_k) p( l_k) dl_k Latent observations. Following <cit.>, we replace the observations by latent observations and their uncertainty, i.e., we use latent observation encoders to obtain w_k,t = enc_w( o_k,t) and an uncertainty encoder σ_k,t = enc_σ( o_k,t). The observation model is hence given by p( w_k,t| z_k,t) = 𝒩(h^fts( z_k,t), diag(σ_k,t)).§.§.§ Slow time-scale SSM The slow time-scale (sts) SSM only updates every H time step and uses the task parameter l as latent state representation. Formally, the SSM is defined as 𝒮_slow = ( ℒ, ℰ, 𝒯, f^sts, h^sts, H Δ t). It uses an abstract observationβ∈ℬ and abstract action α∈𝒜 that summarize the observations and actions respectively throughout the current time window. The general dynamics model is hence given byp( l_k| l_k-1, α_k) = 𝒩(f^sts( l_k-1, α_k),S). While there exist many ways to implement the abstraction of observations and actions of the time windows, we choose to use a consistent formulation by fusing the information from all H time steps of time window k using Gaussian conditioning. Observation abstraction. In terms of the abstract observation model, we choose to model H observations β_k,t, t ∈ [1,H] for a single slow-scale time step k. All these observations can then be straightforwardly integrated into the belief state representation using incremental observation updates. The abstract observation and its uncertainty for time step t is again obtained by an encoder architecture, i.e, β_k,t = enc_β( o_k,t, t), ν_k,t = enc_ν( o_k,t, t),and p(β_k,t| l_k) = 𝒩(h^sts( l_k), diag(ν_k,t)). Hence, the abstract observation β_k,t contains the actual observation o_k,t at time step t as well as a temporal encoding for the time-step. While multiple Bayesian observation updates are permutation invariant, the temporal encoding preserves the relative time information between the observations, similar to current transformer architectures. Action abstraction. The abstract action α_k causes the transitions to the latent task l_k from l_k-1. It should contain the relevant information of all primitive actions a_k,t, t ∈ [1,H] executed in the time window k. To do so, we again use Bayesian conditioning and latent action encoding. Each control action a_k,t and the encoding of time-step t is encoded into its latent representation and its uncertainty estimate, i.e.,α_k,t = enc_α( a_k,t, t), ρ_k,t = enc_ρ( a_k,t, t).The single latent actions α_k,t can be aggregated into a consistent representation α_k using Bayesian aggregation <cit.>. To do so, we use the likelihood p(α_k,t|α_k) = 𝒩(α_k, diag( ρ_k,t)) and obtain the posterior p(α_k|α_k,1:H) = 𝒩(μ_α_k, Σ_α_k), which is obtained by following the standard Bayesian aggregation equations, see Appendix A. Note that our abstract action representation also contains an uncertainty estimate which can be used to express different effects of the actions on the uncertainty of the prediction. Due to the Gaussian representations, we can compute the marginal transition model p_α_k( l_k| l_k-1, α_k,1:H) = ∫ p_α_k( l_k| l_k-1, α_k) p(α_k |α_k,1:H) dα_k.This transition model is used for inference and its parameters are learned. 
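For illustration, the aggregation of the latent actions into a consistent representation can be sketched with the standard factorized Gaussian (Bayesian) aggregation update; the paper's Appendix A gives the exact form used there. This is a sketch under the assumption of a diagonal Gaussian prior 𝒩(μ_0, diag(σ_0)), and all function and variable names are ours.

import numpy as np

def aggregate_latent_actions(alpha, rho, mu0, sigma0):
    # alpha, rho: arrays of shape (H, D) with per-step latent action means
    # and variances; mu0, sigma0: prior mean and variance vectors of shape (D,).
    # Factorized Gaussian aggregation: precisions add up, and the mean is
    # shifted by precision-weighted residuals of the latent actions.
    post_var = 1.0 / (1.0 / sigma0 + (1.0 / rho).sum(axis=0))
    post_mean = mu0 + post_var * ((alpha - mu0) / rho).sum(axis=0)
    return post_mean, post_var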
§.§.§ Connecting both SSMs via inferenceIn the upcoming sections, we will devise Bayesian update rules to obtain the prior p( l_k| β_1:k-1,α_1:k) and posterior p( l_k| β_1:k,α_1:k) belief state for the sts-SSM as well as the belief states for the fts-SSM. The prior belief p( l_k| β_1:k-1,α_1:k) contains all information up to time window k-1 and serves as a distribution over the task-descriptor of the fts-SSM, which connects both SSMs. This connection allows us to learn both SSMs jointly in an end-to-end manner.The probabilistic graphical model of our MTS3 model is depicted in Figure <ref>. In the next section, we will present the detailed realization of each SSM to perform closed-form Gaussian inference and end-to-end learning on both time scales. §.§ Inference in the Fast Time-Scale SSM The fts-SSM performs inference for a given time window k of horizon length H. To keep the notation uncluttered, we will also omit the time-window index k whenever the context is clear.We use a linear Gaussian task conditional transition model, i.e,p(z_t|z_t-1,a_t-1,l_k) = 𝒩(Az_t-1 + Ba_t-1 + Cl_k, Q),where A, B, C and Q are state-independent but learnable parameters. In our formulation, the task descriptor can only linearly modify the dynamics which was sufficient to obtain state-of-the-art performance in our experiments, but more complex parametrizations, such as locally linear models, would also be feasible. Following <cit.>, we split the latent state z_t = [ p_t,m_t]^T into its observable part p_t and a part m_t that needs to be observed over time.We also use a linear observation model p( w_t| z_t) = 𝒩( Hz_t, diag(σ_t)) with H = [ I,0]. We will assume that the distribution over the task descriptor is also given by a Gaussian distribution, i.e., p( l_k) = 𝒩(μ_ l_k, Σ_ l_k), which will be provided by the slow-time scale (sts) SSM, see Section <ref>. Given these modelling assumptions, thetask variable can now be integrated out in closed form, resulting in the following task-conditioned marginal transition modelp_ l_k(z_t|z_t-1,a_t-1) = 𝒩(Az_t-1 + Ba_t-1 + Cμ_ l_k,Q +C Σ_ l_k C^T ),which will be used instead of the standard dynamics equations. We follow the same factorization assumptions as in <cit.> and only estimate the diagonal elements of the block matrices of the covariance matrix of the belief, see Appendix B. The update equations for the Kalman prediction and observation updates are therefore equivalent to the RKN <cit.>.§.§ Inference in the Slow-Time Scale SSMPrediction Update. We follow the same Gaussian inference scheme as for the fts-SSM, i.e., we again employ a linear dynamics modelp( l_k| l_k-1, α_k) = 𝒩( Xl_k-1 +Y α_k,S), where X, Y and S are learnable parameters. The marginalized transition model for the abstract actions is then given byp_α_k(l_k|l_k-1) = ∫ p(l_k|l_k-1,α_k) p(α_k) d α_k = 𝒩(Xl_k-1 + Yμ_α_k, S +Y Σ_α_k Y^T ). We can directly use this transition model to obtain the Kalman prediction update which computes the prior belief p_α_1:k( l_k | β_1:k-1) = 𝒩(μ_l_k^-, Σ_l_k^-) from the posterior beliefp_α_1:k-1( l_k-1 | β_1:k-1) = 𝒩(μ_l_k-1^+, Σ_l_k-1^+) of the previous time window, see Appendix A.Observation Update. Similarly, we will use a linear observation model for the abstract observationsp(β_k,t|l_k) = 𝒩( Hl_k, diag(ν_k,t)) with H = [ I,0]. As can be seen from the definition of the observation matrix H, the latent space is also decomposed into its observable and unobservable part, i.e., l_k = [ u_k,v_k]. 
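For concreteness, the two prediction updates described above can be sketched as follows. This is a simplified full-covariance version (the actual model uses the factorized belief representation), and the function names are ours.

import numpy as np

def sts_predict(mu_post, cov_post, mu_alpha, cov_alpha, X, Y, S):
    # Kalman prediction step for the slow time scale with the abstract-action
    # uncertainty marginalized out: the action covariance enters via Y Sigma_alpha Y^T.
    mu_prior = X @ mu_post + Y @ mu_alpha
    cov_prior = X @ cov_post @ X.T + S + Y @ cov_alpha @ Y.T
    return mu_prior, cov_prior

def fts_predict(mu_post, cov_post, a, mu_task, cov_task, A, B, C, Q):
    # Task-conditioned prediction for the fast time scale: the latent task prior
    # provided by the slow SSM enters the mean via C and adds C Sigma_l C^T
    # to the process noise.
    mu_prior = A @ mu_post + B @ a + C @ mu_task
    cov_prior = A @ cov_post @ A.T + Q + C @ cov_task @ C.T
    return mu_prior, cov_prior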
In contrast to the standard factorized Kalman observation update given in Appendix A, we have to infer with a set of observations β_k,t with t = 1 … H for a single time window k. While in principle the Kalman observation update can be applied incrementally H times to obtain the posterior p_α_1:k(l_k | β_1:k) = 𝒩(μ_l_k^+, Σ_l_k^+), such an update would be very slow and would also cause numerical inaccuracies. Hence, we devise a new permutation-invariant version of the update rule that allows parallel processing with set encoders <cit.>. We found that this update rule is easier to formalize using precision matrices. Hence, we first transform the prior covariance vectors σ_l_k^u-, σ_l_k^l- and σ_l_k^s- to their corresponding precision representations λ_l_k^u-, λ_l_k^l- and λ_l_k^s-, which can be performed using block-wise matrix inversions of Σ_l_k^-. Due to the factorization of the covariance matrix, this operation can be performed solely by scalar inversions. As the update equations are rather lengthy, they are given in Appendix A, B. Subsequently, we compute the posterior precision, where only λ_l_k^u is changed by λ_l_k^u+ = λ_l_k^u- + ∑_t=1^H 1 ⊘ ν_k,t, while λ_l_k^l+ = λ_l_k^l- and λ_l_k^s+ = λ_l_k^s- remain constant. The operator ⊘ denotes element-wise division. From the posterior precision, we can again obtain the posterior covariance vectors σ_l_k^u+, σ_l_k^l+ and σ_l_k^s+ using only scalar inversions, see Appendix A, B. The posterior mean μ_l_k^+ can now be obtained from the prior mean μ_l_k^- as

μ_l_k^+ = μ_l_k^- + [ σ_l_k^u+ ; σ_l_k^s+ ] ⊙ [ ∑_t=1^H (β_k,t - μ^u,-_l_k) ⊘ ν_k,t ; ∑_t=1^H (β_k,t - μ^u,-_l_k) ⊘ ν_k,t ],

where the first block (scaled by σ_l_k^u+) updates the observable part u_k and the second block (scaled by σ_l_k^s+) updates the unobservable part v_k of the mean. Note that for H = 1, i.e., a single observation, the given equation is equivalent to the RKN updates. Moreover, the given rule constitutes a unification of the batch update rule for Bayesian aggregation <cit.> and the incremental Kalman update for our factorization of the belief state representation <cit.>, detailed in Appendix A.

§.§ A General Definition For an N-level MTS3

An N-level MTS3 can be defined as a family of N state space models, {S_0, S_1, ..., S_N-1}. Each state space model S_i is given by S_i = (Z_i, A_i, O_i, f_i, h_i, H_i Δt, L_i), where Z_i is the state space, A_i the action space, and O_i the observation space of the SSM. The parameter H_i Δt denotes the discretization time-step and f_i and h_i the dynamics and observation models, respectively. Here, l_i ∈ L_i is a task descriptor that parametrizes the dynamics model of the SSM and is held constant for a local window of H_i+1 steps. l_i is a function of the latent state of the SSM one level above it, i.e., S_i+1. The boundary cases can be defined as follows: for i = 0, H_0 = 1. Similarly, for i = N-1, the latent task descriptor L_i is an empty set. For all i, H_i < H_i+1. Even though our experiments focus on MTS3 models with 2 hierarchies, extensive experimentation with more hierarchies is left as future work.

§ MTS3 AS A HIERARCHICAL WORLD MODEL

MTS3 allows for a natural way to build world models that can deal with partial observability, non-stationarity and uncertainty in long-term predictions, properties which are critical for model-based control and planning.
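Before moving on, the batch observation update above can be made concrete with a small sketch. It assumes the per-dimension 2×2 block inversion implied by the factorized belief, and all names are ours.

import numpy as np

def sts_observation_update(mu_prior, su, sl, ss, betas, nus):
    # mu_prior: prior mean [u; v] of length 2D; su, sl, ss: the three covariance
    # vectors (upper, lower, side) of length D; betas, nus: (H, D) abstract
    # observations and their variances for one time window.
    # 1) covariance -> precision via per-dimension 2x2 block inversion
    det = su * sl - ss ** 2
    lu, ll, ls = sl / det, su / det, -ss / det
    # 2) only the precision of the observable part changes
    lu = lu + (1.0 / nus).sum(axis=0)
    # 3) precision -> covariance, again with scalar inversions only
    det_post = lu * ll - ls ** 2
    su_post, sl_post, ss_post = ll / det_post, lu / det_post, -ls / det_post
    # 4) mean update with precision-weighted residuals of all H observations
    D = su.shape[0]
    residual = ((betas - mu_prior[:D]) / nus).sum(axis=0)
    mu_post = mu_prior + np.concatenate([su_post * residual, ss_post * residual])
    return mu_post, (su_post, sl_post, ss_post)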
Furthermore, introducing several levels of latent variables, each working at adifferent time scale allows us to learn world models that can make action conditional predictions/“dreams” at multiple time scales and multiple levels of state and action abstractions.§.§ Conditional Multi Time Predictions With World ModelConditional multi-step ahead predictions involve estimating plausible future states of the world resulting from a sequence of actions. Our principled formalism allows for action-conditional future predictions at multiple levels of temporal abstractions. The prediction update for the sts-SSM makes prior estimates about future latent task variables conditioned on the abstract action representations. Whereas, the task conditional prediction update in the fts-SSM estimates the future prior latent states, conditioned on primitive actions and the inferred latent task priors, which are decoded to reconstruct future observations.For initializing the prior belief p( z_k,1) for the first time step of the time window k, we use the prior belief p( z_k-1, H+1) of the last time step of the time window k-1.§.§ Optimizing the Predictive Log-Likelihood The training objective for the MTS3 involves maximizing the posterior predictive log-likelihood which is given below for a single trajectory, i.e., L = ∑_k=1^N ∑_t=1^H log p(o_k,t+1|β_1:k-1,α_1:k,w_k,1:t, a_k,1:t)= ∑_k=1^N ∑_t=1^H log∬ p(o_k,t+1|z_k,t+1)p(z_k,t+1|w_k,1:t,a_k,1:t,l_k) p( l_k|β_1:k-1,α_1:k)dz_k,t+1 dl_k=∑_k=1^N ∑_t=1^H log∫ p(o_k,t+1|z_k,t+1)p_ l_k(z_k,t+1|w_k,1:t,a_k,1:t) dz_k,t+1.The extension to multiple trajectories is straightforward and omitted to keep the notation uncluttered. Here, o_k,t+1 is the ground truth observations at the time step t+1 and time window k which needs to be predicted from all (latent and abstract) observations up to time step t. The corresponding latent state prior beliefp_ l_k(z_k,t+1|w_k,1:t,a_k,1:t) has a closed form solution as discussed in Section <ref>.We employ a Gaussian approximation of the posterior predictive log-likelihood of the form p(o_k,t+1|β_1:k-1,α_1:k,w_k,1:t, a_k,1:t) ≈𝒩(μ_o_k,t+1,diag(σ_o_k,t+1)) where we use the mean of the prior belief μ_z_k,t+1^- to decode the predictive mean, i.e, μ_o_k,t+1 =dec_μ(μ_z_k,t+1^-) and the variance estimate of the prior belief to decode the observation variance, i.e., σ_o_k,t+1 = dec_σ(Σ_z_k,t+1^-). This approximation can be motivated by a moment-matching perspective and allows for end-to-end optimization of the log-likelihood without using auxiliary objectives such as the ELBO <cit.>.Gradients are computed using (truncated) backpropagation through time (BPTT) <cit.> and clipped. We optimize the objective using the Adam <cit.> stochastic gradient descent optimizer with default parameters. We refer to Appendix A for more details. For training, we also initialize the prior belief p( z_k,1) with theprior belief p_ l_k-1( z_k-1,H+1| w_k-1,1:H, a_k-1,1:H) from of the previous time window k-1. However, we cut the gradients for the fast time scale between time windows as this avoids vanishing gradients and we observed a more stable learning behaviour. Yet, the gradients can still flow between time windows for the fts-SSM via the sts-SSM. §.§ Imputation Based Training For Long Term Prediction Using the given training loss results in models that are good in one-time step prediction but typically perform poorly in long-term predictions as the loss assumes that observations are always available up to time step t. 
To increase the long-term prediction performance, we can treat the long-term prediction problem as a case of the “missing value” problem, where the missing observations are at the future time steps. Thus, to train our model for long-term prediction, we randomly mask a fraction of observations and explicitly task the network to impute the missing observations, resulting in a strong self-supervised learning signal for long-term prediction with varying prediction horizon length. This imputation scheme is applied at both time scales, masking out single-time steps or whole time windows of length H. The imputation mask is also randomly resampled for every mini-batch. § RELATED WORKMulti Time Scale World Models One of the early works that enabled environment models at different temporal scales to be intermixed, producing temporally abstract world models was proposed by <cit.>. The work was limited to tabular settings but showed the importance of learning environment dynamics at multiple abstractions. However, there have been limited works that actually solve this problem at scale as discussed in <cit.>. A probabilistically principled formalism for these has been lacking in literature and this work is an early attempt to address this issue. Deep State Space Models. Deep SSMs combine the benefits of deep neural nets and SSMs by offering tractable probabilistic inference and scalability to high-dimensional and large datasets. <cit.> use neural network architectures based on exact inference on SSMs and perform state estimation and dynamics prediction tasks. <cit.> extend these models to modelling non-stationary dynamics. <cit.> perform learning and inference on SSMs using variational approximations. However, most of these recurrent state-space models have been evaluated on very short-term prediction tasks in the range of a few milliseconds and model the dynamics at a single time scale. TransformersRecent advancements in Transformers <cit.>, which rely on attention mechanism, have demonstrated superior performance in capturing long-range dependency compared to RNN models in several domains including time series forecasting <cit.> and learning world models <cit.>. <cit.> use transformer architectures based on a direct multistep loss <cit.> and show promising results for long-term forecasting since they avoid error accumulation from autoregression. On the other hand <cit.> uses a GPT-like autoregressive version of transformers to learn world models. These deterministic models, however, do not deal with temporal abstractions and uncertainty estimation in a principled manner. Nevertheless, we think Transformers that operate at multiple timescales based on our formalism can be a promising alternative research direction. § EXPERIMENTSIn this section, we evaluate our approach to a diverse set of simulated and real-world dynamical systems for long-horizon prediction tasks. Our experiments are designed to answer the following questions. (a) Can MTS3 make accurate long-term deterministic predictions (mean estimates)? (b) Can MTS3 make accurate long-term probabilistic predictions (variance estimates)? 
(c) How important are the modelling assumptions and training scheme?§.§ Baseline Dynamics ModelsWhile a full description of our baselines can be found in Appendix E, a brief description of them is given here: (a) RNNs - We compare our method to two widely used recurrent neural network architectures, LSTMs <cit.> and GRUs <cit.>.(b) RSSMs - Among several RSSMs from the literature, we chose RKN <cit.> and HiP-RSSM <cit.> as these have shown excellent performance for dynamics learning for short-term predictions and rely on exact inference as in our case. (c) Transformers - We also compare with two state-of-the-art Transformer <cit.> variants. The first variant (AR-Transformer) relies on a GPT-like autoregressive prediction <cit.>. Whereas the second variant (Multi-Transformer) uses direct multi-step loss <cit.> from recent literature on long horizon time-series forecasting <cit.>. Here, multistep ahead predictions are performed using a single shot given the action sequences. §.§ Environments and DatasetsWe experiment with three broad datasets. While full descriptions of these datasets, dataset creation procedure, and overall statistics are given in Appendix D, a brief description of them is as follows. (a) D4RL Datasets - We use a set of 3 different environments/agents from D4RL dataset <cit.>, which includes the HalfCheetah, Medium Maze and Franka Kitchen environment. Each of these was chosen because of their distinct properties like sub-optimal trajectories (HalfCheetah), realistic domains / human demonstrations (Kitchen), multi-task trajectories, non-markovian collection policies (Kitchen and Maze) and availability of long horizon episodes (all three). (b) Manipulation Datasets - We use 2 datasets collected from a real excavator arm and a Panda robot. The highly non-linear non-markovian dynamics due to hydraulic actuators in the former and non-stationary dynamics owing to different payloads in the latter make them challenging benchmarks. Furthermore, accurate modelling of the dynamics of these complex systems is important since learning control policies for automation directly on large excavators is economically infeasible and potentially hazardous. (c) Mobile Robotics Dataset - We set up a simulated four-wheeled mobile robot traversing a highly uneven terrain of varying steepness generated by a mix of sinusoidal functions. This problem is challenging due to the highly non-linear dynamics involving wheel-terrain interactions and non-stationary dynamics introduced by varying steepness levels. In all datasets, we only use information about agent/object positions and we mask out velocities to create a partially observable setting. §.§ Can MTS3 make accurate long-term deterministic predictions (mean estimates)? Here we evaluate the quality of the mean estimates for long-term prediction using our approach. The results are reported in terms of RMSE in Figure <ref>. We see that MTS3 gives consistently good long-term action conditional future predictions on all 6 datasets. Deep Kalman models <cit.> which operate on a single time scale fail to give meaningful mean estimates beyond a few milliseconds. Similarly, widely used RNN baselines <cit.> which form the backbone of several world models <cit.> give poor action conditional predictions over long horizons. AR-Transformers also fail possibly due to error accumulation caused by the autoregression. However, Multi-Transformers are a strong baseline that outperforms MTS3 in the Medium Maze and Panda dataset by a small margin. 
However, on more complex tasks like the Kitchen task, which requires modelling multi-object, multi-task interactions <cit.>, MTS3 is the only model that gives meaningful long horizon predictions. A visualization of the predicted trajectories vs. ground truth is given in Appendix C.§.§ Can MTS3 make accurate long-term probabilistic predictions (variance estimates)?Next, we examine the question of whether the principled probabilistic inference translates to accurate uncertainty quantification during long-horizon predictions. We trained all the baselines with a negative log-likelihood loss and used the same as a metric to quantify the quality of uncertainty estimates. As seen in table <ref>, MTS3 gives the most accurate uncertainty estimates in all datasets except Medium Maze, where it is outperformed by Multi-Transformer. Also, notably, AR-Transformers and deep Kalman models fail to learn any meaningful uncertainty representation when it comes to long-term predictions. §.§ How important are the modelling assumptions and training scheme? Now, we look at three important modelling and training design choices: (i) splitting the latent states to include an unobservable “memory” part using observation model h^sts=h^fts= H = [ I,0] as discussed in Sections <ref> and <ref>, (ii) action abstractions discussed in Section <ref>, (iii) training by imputation. To analyze the importance of the memory component, we derived and implemented an MTS3 variant with an observation model of h^sts=h^fts= I and a pure diagonal matrix representation for the covariance matrices. As seen in Figure <ref>, this results in worse long-term predictions, suggesting that splitting the latent states in its observable and unobservable part in MTS3 is critical for learning models of non-markovian dynamical systems. Regarding (ii), we further devised another variant where MTS3 only had access to observations, primitive actions and observation abstractions, but no action abstractions. As seen in our ablation studies, using the action abstraction is crucial for long-horizon predictions.Our final ablation (iii) shows the importance of an imputation-based training scheme discussed in Section <ref>. As seen in Figure <ref> when trained for 1 step ahead predictions without imputation, MTS3 performs significantly worse for long-term prediction suggesting the importance of this training regime.§.§ What is the role of the discretization step H.Δ t? Finally, we perform ablation for different values of H.Δ t, which controls the time scale of the task dynamics. The results reported are for the hydraulics dataset. The higher the value of H, the slower the timescale of the task dynamics relative to the state dynamics. As seen in Figure <ref>, smaller values of H (2,3,5 and 10) give significantly worse performance. Very large values of H (like 75) also result in degradation of performance. To further get an intuitive understanding of the MTS3's behaviour under different timescales, we plot the predictions given by MTS3 for different values of H on a trajectory handpicked from the hydraulics excavator dataset. As seen in Figure <ref>, for large values of H like 30 and 75, we notice that the slow-changing task dynamics "reconfigures" the fast dynamics every 30 and 75-step window respectively, by conditioning the lower level dynamics with the newly updated task prior. This effect is noticeable as periodic jumps or discontinuities in the predictions, occurring at 30 and 75-step intervals. 
Also, for a very large H like 75, the fast time scale ssm has to make many more steps in a longer window resulting in error accumulation and poor predictions. § CONCLUSION AND FUTURE WORKIn this work, we introduce MTS3, a probabilistic formalism for learning the dynamics of complex environments at multiple time scales. By modelling the dynamics of the world at multiple levels of temporal abstraction we capture both the slow-changing long-term trends and fast-changing short-term trends in data, leading to highly accurate predictions spanning several seconds into the future. Our experiments demonstrate that simple linear models with principled modelling assumptions can compete with large transformer model variants that require several times more parameters. Furthermore, our inference scheme also allows for principled uncertainty propagation over long horizons across multiple time scales which capture the stochastic nature of environments. We believe our formalism can benefit multiple future applications including hierarchical planning/control. We discuss the limitations and broader impacts of our work in Appendix F and G.§ ACKNOWLEDGEMENTWe thank the anonymous reviewers for the valuable remarks and discussions which greatly improved the quality of this paper. This work was supported by funding from the pilot program Core Informatics of the Helmholtz Association (HGF). The authors acknowledge support by the state of Baden-Württemberg through bwHPC, as well as the HoreKa supercomputer funded by the Ministry of Science, Research and the Arts Baden-Württemberg and by the German Federal Ministry of Education and Research.plainnat § IMPLEMENTATION DETAILS§.§ Inference In Slow Time Scale SSM §.§.§ Inferring Action Abstraction (sts-SSM)[11]r.31 [b]0.31width= Generative model for the abstract action α_k. The hollow arrows are deterministic transformations leading to implicit distribution α_k,t using an action set encoder. Given a set of encoded primitive actions and their corresponding variances {α_k,t, ρ_k,t}_t=1^H, using the prior and observation model assumptions in Section 3.1.2 of main paper, we infer the latent abstract action p(α_k|α_k,1:H) = 𝒩(μ_α_k, Σ_α_k)= 𝒩(μ_α_k, diag(σ_α_k)) as a Bayesian aggregation <cit.> of these using the following closed-form equations:σ_α_k= ( ( σ_0)^⊖ + ∑_n=1^N ((ρ_k,t)^⊖) )^⊖,μ_α_k= μ_0 + σ_α_k⊙∑_n=1^N (α_k,t - μ_0) ⊘ρ_k,t Here, ⊖, ⊙ and ⊘ denote element-wise inversion, product, and division, respectively. The update equation is coded as the “abstract action inference” neural network layer as shown in Figure <ref>. §.§.§ Task Prediction (sts-SSM)The goal of this step is to update the prior marginal over the latent task variable l_k, p(l_k|β_1:k-1,α_1:k), given the posterior beliefs from the time window k-1 and abstract action α_k.Using the linear dynamics model assumptions from Section 3.3, we can use the following closed-form update equations to compute, p(l_k|β_1:k-1,α_1:k) = 𝒩(μ_l_k^-,Σ_l_k^-), where[c] μ_l_k^-=Xμ_l_k-1^+ + Yα_k Σ_l_k^-= XΣ_l_k-1^+X^T + YΣ_α_kY^T +S. 
These closed-form equations are coded as the “task predict” neural net layer as shown in Figure <ref>.§.§.§ Task Update (sts-SSM) In this stage, we update the prior over l_k using an abstract observation set {β_k,t}_t=1^H, to obtain the latent task the posterior 𝒩(μ_z_k,t^+,Σ_z_k,t^+) = 𝒩([[ μ_t^u+; μ_t^l+ ]],[[ Σ_t^u Σ^s_t; Σ^s_t Σ^l_t ]]^+), with Σ_l_k^u = diag(σ_l_k^u),Σ_l_k^l = diag(σ_l_k^l)andΣ_l_k^s = diag(σ_l_k^s).To do so we first invert the prior covariance matrix [[ Σ_l_k^u Σ_l_k^s; Σ_l_k^s Σ_l_k^l ]]^+ to the precision matrix [[ λ_l_k^u λ_l_k^s; λ_l_k^s λ_l_k^l ]]^+ for permutation invariant parallel processing. The posterior precision is then computed using scalar operations are follows, where only λ_l_k^u is changed by λ_l_k^u+ = λ_l_k^u- + ∑_t=1^H1 ⊘ν_k,t while λ_l_k^l+ = λ_l_k^l- and λ_l_k^s+ = λ_l_k^s- remain constant. The operator ⊘ denotes the element-wise division. The posterior precision is inverted back to the posterior covariance vectors σ_l_k^u+, σ_l_k^l+ and σ_l_k^s+. Now, the posterior mean μ_l,k^+ can be obtained from the prior mean μ_l,k^- as [c]μ_l,k^+=μ_l,k^- + [ [ σ_l_k^u+; σ_l_k^s+;]]⊙[ [ ∑_t=1^H(β_k,t-μ^u,-_l_k) ⊘ν_k,t; ∑_t=1^H(β_k,t-μ^u,-_l_k) ⊘ν_k,t; ]].The inversion between the covariance matrix and precision matrix can be done via scalar operations leveraging block diagonal structure as derived in Appendix B. Figure <ref> shows the schematic of the task update layer.§.§ Inference In Fast Time Scale SSM The inference in fts-SSM for a time-window k involves two stages as illustrated in Figure <ref>, calculating the prior and posterior over the latent state variable z_t. To keep the notation uncluttered, we will also omit the time-window index k whenever the context is clear as in section 3.2.§.§.§ Task Conditional State Prediction (fts-SSM) Following the assumptions of a task conditional linear dynamics as in Section 3.2 of the main paper, we obtain the prior marginal for p(z_k,t|w^k_1:t-1,a^k_1:t-1,β_1:k-1,α_1:k-1) = 𝒩(μ_z_k,t^-,Σ_z_k,t^-) in closed form, whereμ_z_k,t^-=Aμ_z_k,t-1^- + Ba_k,t-1 + Cμ_l_k^-,Σ_k,t^-= AΣ_k,t-1^+A^T + CΣ_l_k^-C^T + Q. §.§.§ Observation Update (fts-SSM)In this stage, we compute the posterior belief p(z_k,t|w^k_1:t,a^k_1:t,β_1:k,α_1:k-1) = 𝒩(μ_z_k,t^-,Σ_z_k,t^-).using the same closed-form update as in<cit.>. The choice of the special observation model splits the state into two parts, an upper z_t^u and a lower part z_t^l, resulting in the posterior belief 𝒩(μ_z_k,t^-,Σ_z_k,t^-) = 𝒩([[ μ_t^u+; μ_t^l+ ]], [[ Σ_t^u Σ^s_t; Σ^s_t Σ^l_t ]]^+), with Σ_t^u = diag(σ_t^s),Σ_t^l = diag(σ_t^l) and Σ_t^s = diag(σ_t^s). Thus, the factorization allows for only the diagonal and one off-diagonal vector of the covariance to be computed and simplifies the calculation of the mean and posterior to simple scalar operations. 
The closed-form equations for the mean can be expressed as the following scalar equations,z_t^+ = z_t^- + [[ σ^u,-_t; σ^l,-_t ]] ⊙[[ w_t - z^u,-_t; w_t - z^u,-_t ]] ⊘[[ σ_t^u,- + σ_t^obs; σ_t^u,- + σ_t^obs ]], The corresponding equations for the variance update can be expressed as the following scalar operations,σ^u,+_t= σ^u,-_t ⊙σ^u,-_t ⊘( σ_t^u,- + σ_t^obs), σ^s,+_t= σ^u,-_t ⊙σ^s,-_t ⊘( σ_t^u,- + σ_t^obs),σ^l,+_t= σ^l, -_t - σ^s,-_t ⊙σ^s,-_t ⊘( σ_t^u,- + σ_t^obs), , where ⊙ denotes the elementwise vector product and⊘ denotes an elementwise vector division.§.§ Modelling Assumptions §.§.§ Control ModelTo achieve action conditioning within the recurrent cell of fts-SMM, we include a control model b(a_k,t) in addition to the linear transition model A_t. b(a_k,t) =f(a_k,t), where f(.) can be any non-linear function approximator.We use a multi-layer neural network regressor with ReLU activations <cit.>. However, unlike the fts-SSM where actions are assumed to be known and subjected to no noise, in the sts-SSM, the abstract action is an inferred latent variable with an associated uncertainty estimate. Hence we use a linear control model Y, for principled uncertainty propagation. §.§.§ Transition NoiseWe assume the covariance of the transition noise Q and S in both timescales to be diagonal. The noise is learned and is independent of the latent state. §.§ Training§.§.§ Training Objective DerivationWe further expand on the training objective in Section 4.2 here. The training objective for the MTS3 involves maximizing the posterior predictive log-likelihood which for a single trajectory, can be derived as, L = ∑_k=1^N ∑_t=1^H log p(o_k,t+1|β_1:k-1,α_1:k-1,w_k,1:t, a_k,1:t)= ∑_k=1^N ∑_t=1^H log∬ p(o_k,t+1|z_k,t+1)p(z_k,t+1|w_k,1:t,a_k,1:t,l_k) p( l_k|β_1:k-1,α_1:k-1)dz_k,t+1 dl_k=∑_k=1^N ∑_t=1^H log∫ p(o_k,t+1|z_k,t+1)p_ l_k(z_k,t+1|w_k,1:t,a_k,1:t) dz_k,t+1. The extension to multiple trajectories is straightforward. The approximation to the objective is done based on a moment-matching perspective as discussed in Section 4.2 of the main paper.§.§.§ InitializationWe initialize the states l_1 and z_1,1 at both timescales for the first-time window k=1 with an all zeros vector and correspondingcovariance matrices as Σ_l_1 = Σ_z_1,1 = 10 · I. For subsequent windows,the prior belief p( z_k,1) for the first time step of time window k, is initialized using the posterior belief p_ l_k-1( z_k-1,H| w_k-1,1:H, a_k-1,1:H) of the last time step oftime window k-1. It is also crucial to correctly initialize the transition matrix at both time scales so that the transition does not yield an unstable system. Initially, the transition model should focus on copying the encoder output so that the encoder can learn how to extract good features if observations are available and useful. 
We initialize the diagonal elements of the transition matrix at both timescales with 1 and the off-diagonal elements with 0.2, while the rest of the elements are set to 0, a choice inspired from <cit.>.§.§.§ Learnable ParametersThe learnable parameters in the computation graph are as follows: Fast Time Scale SSM: The linear transition model A, the non-linear control factor b, the linear latent task transformation model C, the transition noise Q, along with the observation encoder and the output decoder.Slow Time Scale SSM: The linear transition model X, the linear control model Y, the transition noise S, along with the observation set encoder and the action set encoder.§ PROOFS AND DERIVATIONS [7]r.29 [b]0.28width= Graphical Model For Bayesian conditioning with N observations.In the following sections vectors are denoted by a lowercase letter in bold, such as "v", while Matrices as an uppercase letter in bold, such as "M". I denotes identity matrix and 0 represents a matrix filled with zeros. For any matrix M, m denotes the corresponding vector of diagonal entries. Also, ⊙ denotes the elementwise vector product and⊘ denotes an elementwise vector division. §.§ Bayesian Conditioning As Permutation Invariant Set Operations[Bayesian Conditioning]Consider the graphical model given in Figure <ref>, where a set of N conditionally i.i.d observations r = {r_i}_i=1^N are generated by a latent variable l and the observation model p(r_i|l) = 𝒩(r_i |Hl, diag(σ_i^obs)). Assuming an observation model H=[I,0], the mean (μ) and precision matrix (Λ) of the posterior over the latent variable l, p(l|r) = 𝒩( μ_l^+, Σ_l^+) = 𝒩( μ_l^+, (Λ_l^+)^-1), given the prior p_0(l) = 𝒩( μ_l^-, Σ_l^-) = 𝒩( μ_l^-, (Λ_l^-)^-1) have the following permutation invariant closed form updates. Λ_l^+= Λ_l^- + [ [ diag(∑_i=1^n1/σ_i^obs), 0; 0 , 0; ]]μ_l^+ =μ_l^- + [ [ σ_l^u+; σ_l^s+;]]⊙[ [ ∑_i=1^N(r_i-μ^u,-_l) ⊙1/σ_i^obs; ∑_i=1^N(r_i-μ^u,-_l) ⊙1/σ_i^obs; ]]Note that Σ_l is the covariance matrix which is the inverse of the precision matrix Λ_l. Due to the observation model assumption H=[I,0], they take block diagonal form,Σ_l = [[ Σ_l^u Σ^s_l; Σ^s_l Σ^l_l ]],with Σ_u = diag(σ_l^u),Σ_l = diag(σ_l^l)and Σ_s = diag(σ_l^s).Proof: Case 1 (Single Observation): Before deriving the update rule for N conditionally iid observations, let us start with a simpler case consisting of a single observation r. If the marginal Gaussian distribution for the latent variable l takes the form p(𝐥) =𝒩(𝐥|μ, Λ^-1) and the conditional Gaussian distribution for he single observation r given l has the form , p(𝐫|𝐥) =𝒩(𝐫|𝐇𝐥+𝐛, 𝐋^-1). Then the posterior distribution over 𝐥 can be obtained in closed form as, p(𝐥|𝐫) =𝒩(𝐥|Σ{𝐇^T𝐋(𝐫-𝐛)+Λμ}, Λ^-1) ,where Λ=(Λ+𝐇^T𝐋𝐇).We refer to Section 2.3.3 of <cit.>, to the proof for this standard result. Case 2 (Set Of Observations): Now instead of a single observation, we wish to derive a closed form solution for the posterior over latent variable l∈ℝ^2d, given a set of N conditionally i.i.d observations r = {r_i}_i=1^N. Here each element r_i∈ℝ^d of the set r is assumed to to have an observation model H=[I,0]. In the derivation, we represent the set of N observations as a random vectorr = [ [ r_1; r_2; .; .; r_N ]]_Nd × 1. Since each observation in the set r are conditionally independent, we denote the conditional distribution over the context set as r|𝐥∼𝒩(H̅l,Σ_r), where the diagonal covariance matrix has the following form:Σ_r= [ [ diag(σ_r_1), 0, 0,..,0; 0, diag(σ_r_2), 0,..,0; ., ., .,..,.;. 
, ., .,..,.; 0, 0, 0,..,diag(σ_r_N) ]]_Nd × Nd.The corresponding observation modelH̅ isH̅ =[ [ H; H; .; .; H ]]_Nd × 2d = [ [ I , 0; I , 0;., .; . , .; I , 0 ]]_Nd × 2d. Now given the prior over the latent task variable l∼𝒩(μ_l^-, Σ_l^-), the parameters of the posterior distribution over the task variable, p(l|r) ∼𝒩(μ_l^+, Λ_l^+), can be obtained in closed-form substituting in Equation (<ref>) as follows.Λ_l^+ =(Σ_l^+)^-1= Σ_l^-1+ H̅ ^T Σ_rH̅ = Σ_l^-1+ [ [ diag(σ_r_1),diag(σ_r_2),diag(σ_r_3), . , . , diag(σ_r_N); 0 , 0, 0, . , . ,0;]]_2d× ndH̅ =λ_l^- + [ [ diag(∑_i=1^n1/σ_r_i), 0; 0 , 0; ]]_2d× 2d μ_l^+ = μ_l^- + (Λ^+)^-1H̅ ^T ( σ_r^-2I)(y-H̅μ_x)= μ_l^-+ Σ^+ H̅( σ_r^-2I) (y- H̅μ_x)= μ_l^-+ Σ^+[ [ σ_r_1^-2I , σ_r_2^-2I, σ_r_3^-2I , . , . ,σ_r_n^-2I;0 , 0, 0, . , . ,0; ]] (y-H̅μ_x)=μ_l^- + [ [σ_l^u+, σ_l^s+; σ_l^s+ , σ_l^l+;]][ [ ∑_n=1^N(𝐫_𝐧-μ^u,-_l) ⊙1/σ_i; 0; ]] =μ_l^- + [ [ σ_l^u+; σ_l^s+;]]⊙[ [ ∑_i=1^N(𝐫_𝐢-μ^u,-_l) ⊙1/σ_r_i; ∑_i=1^N(𝐫_𝐧-μ^u,-_l) ⊙1/σ_r_i; ]] Here μ_l^+ is the posterior mean and Λ_l^+ is the posterior precision matrix.The closed form updates for the resulting posterior distribution p(l|r) is permutation invariant with respect to the observation set r.§.§ Derivation For Matrix Inversions as Scalar OperationsConsider a blockmatrix of the following form A = [[ diag( a^u) diag( a^s); diag( a^s) diag( a^l) ]]. Then inverse A^-1 =B can be calculated using scalar operations and is given as, B = [[ diag( b^u) diag( b^s); diag( b^s) diag( b^l) ]] where,b^u = a_l ⊘ (a_u ⊙ a_l-a_s ⊙ a_s )b^s = -a_s ⊘ (a_u ⊙ a_l-a_s ⊙ a_s )b^l = a_u ⊘ (a_u ⊙ a_l-a_s ⊙ a_s ) . Proof: To prove this we will use the following matrix identity of a partitioned matrix from <cit.>, which states([ A B; C D ])^-1=([M-MBD^-1;-D^-1CM D^-1+D^-1CMBD^-1 ])where M is defined asM=(A-BD^-1C)^-1.Here M is called the Schur complement of the Matrix on the left side of Equation <ref>. The algebraic manipulations to arrive at scalar operations in Equation <ref> are straightforward.§ ADDITIONAL EXPERIMENTS AND PLOTS§.§ Additional results on ablation with discretization step H.Δ t [12]r.380.75 [b].38< g r a p h i c s >Ablation on discretization step H. Δ t. The long-term prediction results in terms of RMSE, with different H on the mobile dataset.In addition to the Hydraulics Dataset discussed in Section 6.4, we report the results of the ablation study with different values of H.Δ t, for the mobile robot dataset. The higher the value of H, the slower the timescale of the task dynamics relative to the state dynamics. As seen in Figure <ref>, smaller values of H (like 2,3,5 and 10) give significantly worse performance. Very large values of H (like 150) also result in degradation of performance. In the paper, we used a value of H=75.§.§ Visualization of predictions given by different models.In this section, we plot the multistep ahead predictions (mean and variance) by different models on 3 datasets on normalized test trajectories. Not that we omit NaN values in predictions while plotting. §.§.§ Franka Kitchen §.§.§ Hydraulic Excavator §.§.§ Mobile Robot § ROBOTS AND DATAIn all datasets, we only use information about agent/object positions and we mask out velocities to create a partially observable setting. All datasets are subjected to a mean zero, unit variance normalization during training. During testing, they are denormalized after predictions. 
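A minimal sketch of this preprocessing step, with class and method names of our choosing and statistics assumed to be fitted on the training split only, could look as follows.

import numpy as np

class Standardizer:
    # Mean-zero, unit-variance normalization fitted on training data; test-time
    # predictions are mapped back (denormalized) to physical units.
    def fit(self, train_data):                       # shape: (num_samples, num_features)
        self.mean = train_data.mean(axis=0)
        self.std = train_data.std(axis=0) + 1e-8     # avoid division by zero
        return self

    def transform(self, x):
        return (x - self.mean) / self.std

    def inverse_transform(self, x):
        return x * self.std + self.mean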
The details of the different datasets used are explained below: §.§ D4RL Datasets Details: We use a set of 3 different environments/agents from D4RL dataset <cit.>, which includes the HalfCheetah, Franka Kitchen and Maze2D (medium) environment. (a) HalfCheetah: We used 1000 suboptimal trajectories collected from a policy trained to approximately 1/3 the performance of the expert. The observation space consists of 8 joint positions and the action space consists of 6 joint torques collected at 50 Hz frequency. 800 trajectories were used for training and 200 for testing. For the long horizon task, we used 1.2 seconds (60 timesteps) as context and tasked the model to predict 6 seconds (300 timesteps) into the future. (b) Franka Kitchen: The goal of the Franka Kitchen environment is to interact with the various objects to reach a desired state configuration. The objects you can interact with include the position of the kettle, flipping the light switch, opening and closing the microwave and cabinet doors, or sliding the other cabinet door. We used the "complete" version of the dataset and collected 1000 trajectories where all four tasks are performed in order. The observation space consists of 30 dimensions (9 joint positions of the robot and 21 object positions). The action space consists of 9 joint velocities clipped between -1 and 1 rad/s. The data was collected at a 50 Hz frequency. 800 trajectories were used for training and 200 for testing. For the long horizon task, we used 0.6 seconds (30 timesteps) as context and tasked the model to predict 2.7 seconds (135 timesteps) into the future. The dataset is complex due to multi-task, multi-object interactions in a single trajectory. (c) Medium Maze: We used 20000 trajectories from a 2D Maze environment, where each trajectory consists of a force-actuated ball (along the X and Y axis) moving to a fixed target location. The observation consists of as the (x, y) locations and a 2D action space. The data is collected at 100 Hz frequency. 16000 trajectories were used for training and 4000 for testing. For the long horizon task, we used 0.6 seconds (60 timesteps) as context and tasked the model to predict 3.9 seconds (390 timesteps) into the future. Rendering of the three environments is shown in Figure <ref>. §.§ Hydraulic ExcavatorDetails: We collected the data from a wheeled excavator JCB Hydradig 110W show in Figure <ref>. The data was collected by actuating the boom and arm of the excavator using Multisine and Amplitude-Modulated Pseudo-Random Binary Sequence (APRBS) joystick signals with safety mechanisms in place. A total of 150 mins of data was collected at a frequency of 100 Hz. of which was used as a training dataset and the rest as testing. The observation space consists of the boom and arm positions, while the joystick signals are chosen as actions. For the long horizon task we used 1.5 seconds (150 timesteps) as context and tasked the model to predict 12 seconds (1200 timesteps) into the future. §.§ Panda Robot With Varying Payloads Details: We collected the data from a 7 DoF Franka Emika Panda manipulator during free motion and while manipulating loads with weights 0kg (free motion), 0.5 kg, 1 kg, 1.5 kg, 2 kg and 2.5 kg. The robot used is shown in Figure <ref>. Data is sampled at a frequency of 100 Hz. The training trajectories were motions with loads of 0kg(free motion), 1kg, 1.5kg, and 2.5 kgs, while the testing trajectories contained motions with loads of 0.5 kg and 2 kg. 
The observations for the forward model consist of the seven joint angles in radians, and the corresponding actions were joint Torques in Nm. For the long horizon task we used 0.6 seconds (60 timesteps) as context and tasked the model to predict 1.8 seconds (180 timesteps) into the future. §.§ Wheeled Mobile Robot [14]r.5 [b]0.5 < g r a p h i c s >Wheeled Mobile Robot traversing terrain with complex variations in slopes induced by a mix of sine functions.Observation and Data Set: We collected 50 random trajectories from a Pybullet simulator a wheeled mobile robot traversing terrain with slopes generated by a mix of sine waves as shown in Figure <ref>. Data is sampled at high frequencies (500Hz). 40 out of the 50 trajectories were used for training and the rest 10 for testing. The observations consist of parameters which completely describe the location and orientation of the robot. The observation of the robot at any time instance t consists of the following features:[ o_t=[x, y, z, cos (α), sin (α), cos (β);sin (β), cos (γ), sin (γ)] ]where, x, y, z - denote the global position of the Center of Mass of the robot, α, β, γ- Roll, pitch and yaw angles of the robot respectively, in the global frame of reference <cit.>.For the long horizon task we used 0.6 seconds (150 timesteps) as context and tasked the model to predict 3 seconds (750 timesteps) into the future.§ HYPERPARAMETERS AND COMPUTE RESOURCES Compute ResourcesFor training MTS3, LSTM, GRU and Transformer models we used compute nodes with (i) Nvidia 3090 and (ii) Nvidia 2080 RTX GPUs. For training more computationally expensive locally linear models like RKN, HiP-RSSM we used compute nodes with NVIDIA A100-40 GPUs. Hyperparameters Hyperparameters were selected via grid search. In general, the performance of MTS3 is not very sensitive to hyperparameters. Among all the baselines, Transformer models were most sensitive to hyperparameters (see Appendix E.5 for details of Transformer architecture).Discretization Step: For MTS3, the discretization step for the slow time scale SSM as discussed in Section 3.1 for all datasets was fixed as H ·Δ t = 0.3 seconds. In our experiments, we found that discretization values between 0.2 ≤ H ·Δ t ≤ 0.5 seconds give similar performance. Rule Of thumb for choosing discretization step in MTS3: For any N-level MTS3 as defined in Section 3.4, we recommend searching for discretization factor H_i as a hyperparameter. However, as a general rule of thumb, it can be chosen as H_i=(√(T))^i, where T is the maximum prediction horizon required / episode length. This ensures that very long recurrences are divided between smaller equal-length task-reconfigurable local SSM windows (of length √(T)) spread across several hierarchies. Encoder Decoder Architecture: For all recurrent models (MTS3, HiP-RSSM, RKN, LSTM and GRU) we use a similar encoder-decoder architecture across datasets. Small variations from these encoder-decoder architecture hyperparameters can still lead to similar prediction performance as reported in the paper. 
Observation Set Encoder (MTS3): 1 fully connected + linear output: * Fully Connected 1: 240, ReLUAction Set Encoder (MTS3): 1 fully connected + linear output: * Fully Connected 1: 240, ReLUObservation Encoder (MTS3, HiP-RSSM, RKN, LSTM, GRU): 1 fully connected + linear output: * Fully Connected 1: 120, ReLUObservation Decoder (MTS3, HiP-RSSM, RKN, LSTM, GRU): 1 fully connected + linear output: * Fully Connected 1: 120, ReLUControl Model (Primitive Action Encoder) (MTS3, HiP-RSSM, RKN): 1 fully connected + linear output: * Fully Connected 1: 120, ReLUThe rest of the hyperparameters are described below:§.§ D4RL Datasets §.§.§ Half CheetahRecurrent ModelsTransition Model (HiP-RSSM, RKN): number of basis: 32 * α(z_t): No hidden layers - softmax output Autoregressive Transformer BaselineLearning Rate: 1e-5Optimizer Used: Adam Optimizer Embedding size: 96 Number of Decoder Layers: 4 Number Of Attention Heads: 4 Multistep Transformer BaselineLearning Rate: 1e-5Optimizer Used: Adam Optimizer Embedding size: 128 Number Of Encoder Layers: 2Number of Decoder Layers: 1 Number Of Attention Heads: 4 §.§.§ Franka KitchenRecurrent ModelsTransition Model (HiP-RSSM, RKN): number of basis: 15 * α(z_t): No hidden layers - softmax output Autoregressive Transformer BaselineLearning Rate: 5e-5Optimizer Used: Adam Optimizer Embedding size: 64 Number of Decoder Layers: 4 Number Of Attention Heads: 4Multistep Transformer BaselineLearning Rate: 1e-5Optimizer Used: Adam Optimizer Embedding size: 64 Number Of Encoder Layers: 2Number of Decoder Layers: 1 Number Of Attention Heads: 4§.§.§ Maze 2DRecurrent ModelsTransition Model (HiP-RSSM, RKN): number of basis: 15 * α(z_t): No hidden layers - softmax output Autoregressive Transformer BaselineLearning Rate: 5e-5Optimizer Used: Adam Optimizer Embedding size: 96 Number of Decoder Layers: 4 Number Of Attention Heads: 4Multistep Transformer BaselineLearning Rate: 1e-5Optimizer Used: Adam Optimizer Embedding size: 128 Number Of Encoder Layers: 2Number of Decoder Layers: 1 Number Of Attention Heads: 4§.§ Franka Robot Arm With Varying LoadsRecurrent ModelsTransition Model (HiP-RSSM,RKN): number of basis: 32 * α(z_t): No hidden layers - softmax outputAutoregressive Transformer BaselineLearning Rate: 5e-5Optimizer Used: Adam Optimizer Embedding size: 64 Number of Decoder Layers: 4 Number Of Attention Heads: 4Multistep Transformer BaselineLearning Rate: 2e-5Optimizer Used: Adam Optimizer Embedding size: 64 Number Of Encoder Layers: 2Number of Decoder Layers: 1 Number Of Attention Heads: 4 §.§ Hydraulic ExcavatorTransition Model (HiP-RSSM,RKN): number of basis: 15 * coefficient net α(z_t): No hidden layers - softmax outputAutoregressive Transformer BaselineLearning Rate: 1e-5Optimizer Used: Adam Optimizer Embedding size: 96 Number of Decoder Layers: 4 Number Of Attention Heads: 4Multistep Transformer BaselineLearning Rate: 5e-5Optimizer Used: Adam Optimizer Embedding size: 64 Number Of Encoder Layers: 2Number of Decoder Layers: 1 Number Of Attention Heads: 4§.§ Wheeled Robot Traversing Uneven Terrain Transition Model (HiP-RSSM,RKN): number of basis: 15 * coefficient net α(z_t): No hidden layers - softmax output Autoregressive Transformer BaselineLearning Rate: 5e-5Optimizer Used: Adam Optimizer Embedding size: 128 Number of Decoder Layers: 4 Number Of Attention Heads: 4Multistep Transformer BaselineLearning Rate: 5e-5Optimizer Used: Adam Optimizer Embedding size: 64 Number Of Encoder Layers: 4Number of Decoder Layers: 2 Number Of Attention Heads: 4 §.§ Transformer Architecture 
DetailsFor the AR-Transformer Baseline, we use a GPT-like autoregressive version of transformers except that for the autoregressive input we also concatenate the actions to make action conditional predictions.For Multi-Transformer we use the same direct multistep prediction and loss as in recent Transformer time-series forecasting literature <cit.>. A description of the action conditional direct multi-step version of the transformer is given in Algorithm <ref>. § LIMITATIONSWe list some of the limitations of the paper here. (i) We restricted our definition and experiments to MTS3 with two levels of temporal abstractions, which was sufficient in many of our tasks. However, for certain tasks like the Maze2D, we believe more hierarchies can help. As discussed in the main paper the method and inference scheme allows easy addition of more Feudal <cit.> hierarchies with larger discretization steps (H ·Δ t). (ii) We restrict our application to action conditional long horizon future predictions and do not use the model for (hierarchical) control. A probabilistically principled formalism for hierarchical control as an inference problem, that builds upon MTS3 models is left for future work.(iii) Finally, we restrict our experiments to proprioceptive sensors from the agent and objects. The performance of MTS3 which relies on “reconstruction loss” as the objective is yet to be validated on noisy high dimensional sensor inputs like Images. Image-based experiments and “non-reconstruction” based losses <cit.> can be taken up as future work.§ BROADER IMPACTWhile we do not foresee any immediate negative societal impacts of our work, we do believe that machines that can replicate human intelligence at some point should be able to reason at multiple levels of temporal abstractions using internal world models <cit.>. Having intelligent agents with type 2 reasoning capabilities can have both positive and negative impacts. We believe identifying and mitigating the potentially harmful effects of such autonomous systems is the responsibility of sovereign governments.
1,2]Francesco Perciavalle 3]Oliver Morsch 2]Davide Rossini 1,4,5,6]Luigi Amico [1]Quantum Research Center, Technology Innovation Institute, P.O. Box 9639 Abu Dhabi, UAE [2]Dipartimento di Fisica dell’Università di Pisa and INFN, Largo Pontecorvo 3, I-56127 Pisa, Italy [3]CNR-INO and Dipartimento di Fisica dell’Università di Pisa, Largo Pontecorvo 3, 56127 Pisa, Italy [4]Dipartimento di Fisica e Astronomia, Via S. Sofia 64, 95127 Catania, Italy [5]INFN-Sezione di Catania, Via S. Sofia 64, 95127 Catania, Italy [6]Centre for Quantum Technologies, National University of Singapore 117543, Singapore Coherent excitation transport through ring-shaped networks [ January 14, 2024 ==========================================================The coherent quantum transport of matter wave through a ring-shaped circuit attached to leadsdefines an iconic system in mesoscopic physics that has allowed both to explore fundamentalquestions in quantum science and to draw important avenues for conceiving devices of practical use.Here we study the source-to-drain transport of excitations going through a ring-network, without propagation of matter waves. We model the circuit in terms of a spin system with specific long-range interactions that are relevant for quantum technology, such as Rydberg atoms trapped in optical tweezers or ion traps. Inspired by the logic of rf- and dc-SQUIDs, we consider rings with one andtwo local energy offsets, or detunings. As a combination of specific phase shifts in going though the localized detunings and as a result of coherent tunneling, we demonstrate how the transport of excitations can be controlled, with a distinctive dependence on the range of interactions.§ INTRODUCTIONQuantum transport in mesoscopic circuits deals with matter propagating in networkscharacterizedby a spatial scale comparable with the particles coherence length <cit.>. In this regime, quantum effects as quantum tunneling, conductance quantization, flux-quantization, Aharanov-Bohm effect, etc, play a prominent role <cit.>. Recently, quantum transport ofcold atoms guided in versatile and flexible laser-generated circuits has been studied both theoretically and experimentally <cit.>. Indeed, through widely tunable interactions and disorder, new schemes in mesoscopic physics have been defined, both for bosonic and fermionic sytems, with a great potential for basic quantum science and applications <cit.>. In this paper, we refer to one of the most iconic systems of mesoscopic physics that has led to far reaching implications: a mesoscopic ring-shape track connected to source and drain leads <cit.>. The tunneling through scattering impurities or localized barriers placed in the ring circuit can induce specific phase shifts in the particles wave function <cit.>. In this way, the source-to-drain current can be controlled, a fact that is relevant both to study fundamental features of quantum interference and to engineer mesoscopic quantum devices with enhanced performances.With a similar logic, neutral bosonic matter wave current oscillations have been predicted and analysed in Ref. <cit.>. Rings with one or two localized barriers can define the bosonic analog of rf- and dc-SQUIDs<cit.>. Such cold-atom implementations pave the way to rotation sensors based on the Sagnac effect <cit.>. Here,we study the source-to-drain quantum transport through a ring-shaped circuit in which, instead of matter, the dynamics occur in terms of excitations. 
The implementation we rely on is a circuit made of Rydberg atoms trapped in tweezers or ions trapped in suitable electromagnetic fields <cit.>. Indeed, in such systems the motion of the atoms or of the ions can be neglected, with the relevant dynamics occurring as the transfer of excitations of internal energy states of the system. Moreover, both Rydberg atoms and ions can be trapped in a large variety of different geometries and with a remarkable control of the physical conditions <cit.>.In contrast with the cases considered so far, here we deal with systems with a long-range interaction capturing the characteristic physics of Rydberg atoms and trapped ions <cit.>.We implement a specific quench dynamics that we demonstrate to lead to the sought drain-to-source excitation transport: Initialized in the source, the excitation propagates along the two arms of the ring and, after many scattering events determined by the (long range) interaction, reaches the drain. Dealing with a closed system, the excitations population in the source and in the drain displays characteristic oscillations.We shall see that the entire excitation dynamics and then the source-to-drain transport can be controlled by suitable energy level detunings localized in the ring track. Such detunings play a similar role of what the aforementioned local potential barriers do for matterwave <cit.>.We shall see that the presence ofdetuningsgives a non trivial time-dependent phase to the time-evolved state. Inspired by the rf-and dc-SQUID concept, we study the cases of one and two localized detunings. The paper is organized as follows: in Sec. <ref>, we introduce the model and the possible experimental platforms on which it can be realized. In Sec. <ref>, we study the dynamics of the leads and the ring population in presence of a single detuning in one arm of the ring. In Sec. <ref>, we perform the same type of simulations but in presence of two detunings, one for each arm of the ring, with the same height and the same sign first, with opposite sign secondly. Our conclusions are drawn in Sec. <ref>. The Appendix provide a more detailed analysis of the model properties and of the mechanisms that lead to the peculiar dynamics observed in themain text. § MODEL AND METHODSThe system is comprised of two leads, the source (𝒮) and the drain (𝒟), each of them modeled by a single site containing at most one excitation, connected to a ring network (ℛ) of N_r sites. Localized detunings are placed in the ring lattice and arranged in different configurations (see Fig.<ref>). The excitation in each site is modelled as a two-level system {|↑⟩, |↓⟩}.The Hamiltonian of the system readsℋ̂=ℋ̂_ℛ+∑_ℒ=𝒮,𝒟ℋ̂_ℒℛ+ℋ̂_𝒮𝒟,with ℋ̂_ℛ = ∑_i < j∈ℛg/d_ij^α(σ̂_i^+σ̂_j^- +H.c.)+ℋ̂_ Det, ℋ̂_ℒℛ = ∑_j∈ℛg/d_ℒ,j^α(σ̂_j^+σ̂_ℒ^- +H.c.),(ℒ=𝒮, 𝒟), ℋ̂_𝒮𝒟 = g/d_𝒮𝒟^α(σ̂_𝒮^+σ̂_𝒟^- +H.c.), ℋ̂_ Det = ∑_j∈ℛ∑_j_l∈ℐΔ_jn̂_jδ_j,j_l. Here σ̂^α_j (α=x,y,z) denote the spin-1/2 Pauli matrices on the jth atom and σ̂^±_j = 12 (σ̂^x_j ± i σ̂^y_j) are the corresponding raising/lowering operators, such that σ̂^+_j |↓_j⟩=|↑_j⟩ and σ̂^-_j |↑_j⟩=|↓_j⟩. The Hamiltonian part (<ref>) describes the intra-ring hopping in the presence of detunings (<ref>), while Eq. (<ref>) describes the leads-ring hopping and Eq. (<ref>) describes the direct source-drain hopping.We suppose that the excitation |↑⟩ can hop from site to site with a strength scaling as 1/d^α, with d denoting the distance between the sites and α a parameter inversely related to the hopping range. 
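To make the model concrete, the following sketch builds the Hamiltonian matrix in the one-excitation sector, where the off-diagonal entries are the hopping amplitudes g/d^α and the diagonal entries the local detunings. All numerical values used here (ring size, radius, lead distance, coupling g, detuned site and detuning strength) are purely illustrative choices of ours; the precise distance definitions and the weak-coupling condition are given below and in the Appendix.

import numpy as np

def single_excitation_hamiltonian(positions, g, alpha, detuning):
    # N x N Hamiltonian in the one-excitation sector: H[i, j] = g / d_ij**alpha
    # for i != j, and H[i, i] = local detuning (zero except on the detuned sites).
    n = len(positions)
    h = np.diag(np.asarray(detuning, dtype=float))
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(positions[i] - positions[j])
            h[i, j] = h[j, i] = g / d**alpha
    return h

# illustrative geometry: N_r ring sites plus source and drain on the symmetry axis
N_r, R, d_lead = 14, 1.0, 3.0                 # hypothetical values (N_r/2 odd)
phi = 2.0 * np.pi * np.arange(1, N_r + 1) / N_r
ring = np.stack([R * np.cos(phi), R * np.sin(phi)], axis=1)
source = np.array([[R + d_lead, 0.0]])        # closest to ring site N_r
drain = np.array([[-(R + d_lead), 0.0]])      # closest to ring site N_r/2
positions = np.vstack([source, ring, drain])  # basis: source, ring sites 1..N_r, drain

delta = np.zeros(N_r + 2)
delta[N_r // 4] = 5.0                         # one detuned site in the upper arm (illustrative)
H = single_excitation_hamiltonian(positions, g=1.0, alpha=3, detuning=delta)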
In particular, d_ij is the intra-ring distance, d_𝒮,j and d_𝒟,j are the distances between the leads and all the sites of the ring, while d_𝒮𝒟 is the distance between source and drain (see <ref>). We work in the weak leads-ring coupling limit, corresponding to K_nn≪ J_nn, where K_nn and J_nn are the leads-ring and intra-ring nearest-neighbor hopping strengths, respectively (see <ref> for details). Finally, we fix the total number of atoms in such a way that N_r/2 is odd. In this paper, we consider 1 ≤α≤ 3, with the nearest-neighbor and the fully connected models corresponding to α→∞ and α=0, respectively.Trapped ions in linear chains provide a powerful platform to realize quantum spin Hamiltonians in which the spin-spin interaction or the hopping strength scale approximatively as d^-α, where α∈ (0, 3) <cit.>; the |↓⟩ and |↑⟩ states are two internal electronic states of the ions. The implementation of different geometries is more challenging, but two-dimensional spin Hamiltonians have been realized as well <cit.>. Concerning rings, circular traps of ions have been realized as well <cit.>. The case α=3 can be also implemented in Rydberg atoms. For such systems, |↓⟩ and |↑⟩ are two Rydberg states of opposite parity (i.e., |S⟩ and |P⟩) <cit.>. Thanks to the high control on the geometry in which the atoms are located in these systems <cit.>, it is possible to realize ring-shaped and more complicated geometries <cit.>. We consider three possible configurations of the localized detunings in the ring lattice. The case ℐ={ j_0} corresponds to one localized detuning (see Fig. <ref>b), the path crossed by the excitation in the two arms of the ring is clearly different; the two other cases correspond to ℐ={ j_A,j_B} with j_A and j_B in diametrical position, Δ_A=Δ_B and Δ_A=-Δ_B respectively (see Fig. <ref>c,d). Since the distances d_𝒮,j_A≠ d_𝒮,j_B, we observe that, also in this case, the path crossed by the excitation fraction in one arm of the ring is different from the other. The local detunings are experimentally feasible through local AC-Stark shifts with a focused laser on the site of interest.To initialize the transport, we apply a specific quench protocol. At t=0 we localize a single excitation in the source:|ψ(0)⟩=|↑⟩_𝒮⊗|↓,...,↓⟩_ℛ⊗|↓⟩_𝒟,then we evolve the system via Schrödinger equation |ψ(t)⟩=exp(-iℋ̂ ̂t̂) |ψ(0)⟩.To monitor the dynamics, we scrutinize the number of excitations in the source, the ring, and the drain:n̂_𝒮=12 (1̂_𝒮+σ̂^z_𝒮), n̂_ℛ=12∑_j∈ℛ(1̂_j+σ̂^z_j), n̂_𝒟=12 (1̂_𝒟+σ̂^z_𝒟).We comment that, since the Hamiltonian conserves the total number of excitations n̂_tot=n̂_𝒮+n̂_ℛ+n̂_𝒟,with the initial condition (<ref>), we can work in the one-excitation sector, namely ⟨ψ(t)|n̂_tot|ψ(t)|=⟩1.Thus, we define the projector P̂ represented by a N× 2^N matrix that projects operators and states of the full Hilbert space into operators and states of the one excitation sector. The states and operators with which we will work are of the form P̂|ψ⟩ and P̂ÔP̂^†, |ψ⟩ and Ô being a generic state and operator acting on the full Hilbert space. § DYNAMICS IN THE PRESENCE OF A SINGLE DETUNINGWe first study the dynamics of the number of excitations in the system, in the presence of a single localized detuning (<ref>) (see Fig.<ref>b).The results are summarized in Fig. <ref>, reporting the population dynamics P_j=⟨n̂_j|$⟩ in the source, the ring and the drain. Then, we analyze the phase difference behavior between the two arms of the ring. 
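Continuing the illustrative sketch above, the quench protocol and the observables can be simulated by diagonalizing the matrix once and propagating the initial state localized on the source. The time grid is arbitrary, and the pair of arm sites used for the phase difference anticipates the choice made precise later in the text, namely the two ring sites adjacent to the drain.

# quench dynamics, reusing H, N_r and the basis ordering of the previous sketch
evals, evecs = np.linalg.eigh(H)
psi0 = np.zeros(N_r + 2, dtype=complex)
psi0[0] = 1.0                                  # excitation initially in the source

def evolve(t):
    # |psi(t)> = U exp(-i E t) U^dag |psi(0)>, with hbar = 1
    return evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0))

i, j = N_r // 2 - 1, N_r // 2 + 1              # arm sites next to the drain
for t in np.linspace(0.0, 200.0, 400):         # illustrative time grid
    psi = evolve(t)
    prob = np.abs(psi) ** 2
    p_source, p_ring, p_drain = prob[0], prob[1:-1].sum(), prob[-1]
    # in the one-excitation sector <sigma_i^+ sigma_j^-> = conj(c_i) * c_j, so the
    # argument of this correlator gives the phase difference between the two arms
    dphi = np.angle(np.conj(psi[i]) * psi[j])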
In particular, the phase difference between two opposite sites located in the two different arms.§.§ Excitation dynamics in presence of a single detuningBecause of the weak coupling assumption, insights on the general features of the excitations dynamics can beachieved by looking at the energy (discrete) spectrum of the leads and ring separately. The initial state will be denoted as|ψ(0)⟩=|𝒮⟩, assumed at zero energy. The ring Hamiltonian in the single particle sector hasN_reigenstates with non-zero energies;they result to bedoubly degenerate except the ground and highestexcited states. The role of the localized detuning is to break the degeneracies and shift the ring levels.For specific values of the detuningΔ_resthe ring energy levels can be resonant with the initial source state|𝒮⟩(see <ref>). In this case, the excitation is transferredfrom source to the drain and backward to the source, through the ring that is well populated, see Fig. <ref>. Since the total number of excitations is conserved and the ring is populated, during its time evolution the population is distributed along leads and channel. Far from resonance, coherent source-drain oscillations display a markeddependence onΔas soon asαincreases. This means that the source-drain dynamics depends on the characteristics of the ring and not only on the source-drain direct couplingℋ̂_𝒮𝒟.In this regime, the transport can occur in terms of cotunneling processes <cit.>: Similarly to the transport occurring in quantum dots in the regime in which sequential tunneling cannot happen (because of the Coulomb blockade), a source-to-drain transport in our system can occur through virtual transitions in the ring energy levels; in this regime, the ring is found with a low population of excitations.Due to the long-range nature of the interaction we consider in the present paper, specific features of the system emerge when leads and ring-shape track are not treated separately (see Fig. <ref>).Specifically, because of a combination ofgeometrical effects and energy level configuration, we found that the transportdisplays a marked dependence onα, especially in the (leads-ring) nearly resonant regimes. Forα=1, the source and drain populations oscillate regularly untilΔ/J_nn≈5, value for which the rings starts to be populated and the dynamics inside the leads is less regular. Far from that value, the ring is not populated and the source-drain oscillations does not depend on the detuning.Forα=2, the dynamics is quite different: in the (Δ,t) plane appears a V-pattern.On the borders of the V, the drain and source populations are respectively maximum and minimum. The vertex of the V is in correspondence ofΔ_res, where the ring is more populated. Another important feature of this regime is the population dynamics for values of detuning far from the resonance. The source is at every time the most populated, the drain the less populated. The ring population is small but not zero, it shows a weak oscillation with those of the source. Theα=3regime is characterized by slow (compared with the previous casesα=1andα=2) coherent transfer of excitation from the source to the drain interrupted at the value ofΔfor which source and ring are resonant. As a specific feature of this case, we note a localization of the excitation in the source occurring for values ofΔslightly smaller ofΔ_res. Indeed, such localized source state appears in the analysis of the full leads-ring system Hamiltonian spectrum (see <ref>). 
For different values ofΔthe complete states have also substantial projection in the drain state and therefore the dynamicsresults to display the characteristicsource-drain coherent oscillations.§.§ Phase difference dynamics in presence of a single detuning We now analyse the phase of the excitations flow between the two arms of the ring. To this end, we compute the phase difference between two symmetric sites of the arms, beingi=N_r/2-1andj=N_r/2+1. The argument of the two-body correlator⟨σ̂_i^+σ̂_j^-|$⟩ gives the phase difference δϕ between the two sites, details are reported in <ref>. With the sake of corroborating our results analytically,we analysed a straight finite chain. Indeed, we found that the phases of the coefficients of the time evolved state in the position basis depend non trivially on Δ. Fig. <ref> reports the phase difference in function of time and detuning for different values of α. Our results indicate that, as a localized potential barrier does for matter waves, thelocalized detuning causes a phase shift between the two arms of the ring. The dependence of the phase difference on the detuning is evident for each value of α. In particular, it follows the ring population oscillation frequency. However, also when the ring population oscillation amplitude is small and hardly observable, the phase difference can present non negligible oscillations. For instance, for α=1 and negative values of the detuning, there are evident phase difference oscillations while the ring is basically not populated. In a window around Δ=0, the phase difference is zero because the paths on the two arms are the same. On the other hand, in correspondence of the resonances, as soon as the ring becomes populated, we observe strong oscillations in the phase difference.In general, the phase oscillation follows the population oscillation, independently on the amplitude of the latter. If there is a little amount of excitation that moves in the ring, the phase difference oscillates with an amplitude that is significantly different from zero. For instance, for α=1 and negative detuning, we observe phase oscillations also if the ring population is very small. § DYNAMICS IN PRESENCE OF DOUBLE DETUNINGIn this section we study the source-to-drain dynamics of number of excitations in the case in whichtwo atoms in the ring are detuned - see Sec. <ref>. We first study the case in which the two detunings have the same sign and then the case in which they have same magnitude but opposite sign.§.§ Excitation dynamics in presence of double detunings with the same sign Also in this case, the hopping range has pronounced effects on the dynamics. The combined energy shifts caused by the two detunings and hopping rangegive rise to a resonant transport occurring in correspondence to two resonances (instead of the one in the previous single detuning case) that turns out to be not symmetric with respect to Δ=0.The nature of the level shift that brings to resonance can be accessed analytically and numerically looking at the ring spectrum respectively in the nearest neighbor and long range cases (see <ref>). Fig. <ref> reports the population dynamics.For α=1, the dynamics is characterized by fast oscillations between source and drain with a frequency that is nearly independent of detuning. The ring is almost never populated except for the two Δ_res values, where the dynamics in the leads result to be erratic.In the case α=2, the dynamics clearly changes. 
Far from resonances, the dynamics result to besimilar to theone observed in the single detuningcase. In this case we see two V-shapedpatterns in the (Δ,t) plane around the two resonances. We notice a depletion in the source and a filling of the drain in correspondence of the meeting points of thetwo V-shapedpatterns.Finally, as in the case with single detuning, for α=3 the transport between source and drain is slower and more affected by the process occurring in the ring track. In particular, tuning Δ the transport in the system shows markedly different features. For large (positive or negative) values of Δ, thetransport of excitations is slow-down, withthe ring resulting weakly populated. Around the two resonant levels the transport becomes erratic and the ring results populated. In between the two resonances, we observe regular and fast oscillations of the population in the leads. §.§ Excitation dynamics in presence of double detunings with opposite sign We complete our analysis by the study of the configuration of double equal barrier with opposite sign. Also here, the three different values of α denote three different regimes. The double barrier regime is basically different from the other two because the first perturbative order energy correction is zero - see <ref>. Thus, it is harder to get resonances.For α=1, the dynamics is characterized by fast oscillations between source and drain, the ring is never resonant with initial state and so is never populated. For α=2, the ring spectrum presents two symmetric resonances (instead of the asymmetric ones of the previous subsection)with the zero energy mode. This feature is corroborated by a perturbative analysis showing thatthe resonances depends as Δ^2 (see <ref>). For large (positive and negative) values of Δ, the population in the drain is minimal, while a small fraction of excitation oscillates between source and ring.For values of Δ between the two resonances, a source-drain slow oscillation given by the recombination of the two V-shapedpatterns appear.The regime corresponding to α=3 is characterized by slow coherent oscillations between the two leads, the ring track being weakly populated. Differently from the α=1 case, the value of Δ determines the oscillation frequency between source and drain. It is found that the excitation transfer slows down by increasing Δ. §.§ Two arms phase difference with double detuning Figures <ref> and <ref> report the phase difference between the sites of the two arms closest to the drain in presence of two detunings. As in the case of single detuning, the phase difference oscillations are minima in amplitude for α=1 far from resonances. For α=2 we observe strong phase difference oscillations, evident oscillations are observed also for α=3. In all the cases analyzed so far, the phase difference oscillations follow in frequency those of populations. Also in this case, the phase oscillations are correlated to the ring population oscillations: the phase oscillation amplitude is bigger when the ring is more populated. However, a little amount of excitation in the ring is sufficient to have substantial oscillations in the phase.In <ref> we also study the phase difference dynamics for different locations of the detunings. As expected, the phase oscillations strongly depend on it; if the detunings are located in such a way that the excitation fractions follow the same paths on the two arms of the ring, the phase difference is zero for any Δ. 
This result can be achieved by putting two equal detunings in two diametrically opposite zones of the arms. Otherwise, if the resonance location in Δ is the same but the two paths are different, the dynamics differs from the one considered here, with specific features depending on the geometry.

§ DISCUSSION

We discussed the source-drain excitation transport through a ring lattice track in which the atoms of one or two sites are detuned by Δ with respect to the others; such detunings act as local energy shifts in the system and therefore play the role of localized impurities. Motivated by the current know-how in quantum technology, the system is described by a long-range XY Hamiltonian with 1/d^α interactions; we considered specific values of α that are relevant for ion traps <cit.> and for Rydberg atoms <cit.> localized in optical tweezers. This feature should be contrasted with previous studies corresponding to nearest-neighbor hopping <cit.>. As we detail below, the transport can be controlled by tuning Δ, with distinctive features depending on α. The excitation is initialized in the source lead, in a zero-energy state. Then the excitation is transferred to the ring by quenching the interaction with the rest of the system. In the weak leads-ring tunneling regime, we note that, once the excitations are transferred to the ring track, they propagate within the ring with a fast time dynamics. The presence of localized detunings shifts the energy levels of the XY Hamiltonian of the ring and, for specific values of Δ, a state resonant with the source one can occur. Inspired by rf- and dc-SQUIDs, we considered the two cases of one and two detunings; in turn, we considered two detunings with the same sign or with opposite sign. Resonant and non-resonant transport display distinctive features. For resonant transport, the excitation can be transferred from the source to the drain while the ring track is moderately occupied. Far from resonance, the excitation oscillates between source and drain, minimally populating the ring on the observed time scales. The nature of the oscillation depends on the characteristics of the ring. Clearly, we observe that the far-off-resonance dynamics is nearly independent of the detuning (indicating that the detuned sites are bypassed by the long-range hopping). By increasing α, we observe a non-trivial effect of the detunings, which is first localized around the resonant transport and then extends to the non-resonant dynamics. By studying the phase difference between the upper and lower arms of the ring, we can conclude that the detuning affects the phase of the excitations (similarly to the effect of barriers in matter-wave propagation). The corresponding phase difference is observed to oscillate even for small population oscillations, and the phase difference and population oscillation frequencies are related. Besides its own interest, we think our work is relevant for conceiving devices of practical value employing Rydberg atoms or trapped ions. Finally, we point out that our logic can be feasibly extended to systems with different geometries, also in presence of noise.

§ ACKNOWLEDGMENTS

We thank Giampiero Marchegiani and Tobias Haug for useful discussions. The Julian Schwinger Foundation grant JSF-18-12-0011 is acknowledged. OM also acknowledges support by the H2020 ITN “MOQS" (grant agreement number 955479) and MUR (Ministero dell’Università e della Ricerca) through the PNRR MUR project PE0000023-NQSTI.
Numerical computations have been performed using the Julia packages <cit.>.

§ DISTANCE AND WEAK COUPLING LIMIT

Here we provide a proper definition of the intra-ring and ring-leads distances, and then we specify what we mean by weak ring-leads coupling. To define the distances, we first need to introduce an appropriate labeling of the atoms in the ring. Working with an even number of sites in the ring, we label as N_r the site closest to the source and as N_r/2 the site closest to the drain. We identify with φ_j=2π j / N_r the angle associated with site j of the ring. Given this labeling, we can write the source-ring distance as

d_𝒮,j=√((d_𝒮ℛ+R)^2 + R^2 - 2(d_𝒮ℛ+R)Rcosφ_j)

where d_𝒮ℛ is the smallest distance between source and ring, namely the distance between the source and site N_r, and R is the radius of the ring. Analogously, the drain-ring distance is

d_𝒟,j=√((d_𝒟ℛ+R)^2 + R^2 - 2(d_𝒟ℛ+R)Rcos(π - φ_j))

where d_𝒟ℛ is the smallest distance between drain and ring, that is, the distance between the drain and site N_r/2. The intra-ring distance is

d_ij=2Rsin(|φ_i - φ_j|/2)=2Rsin(π|i - j|/N_r).

Finally, we introduce the source-drain distance. Given d_𝒟ℛ, d_𝒮ℛ and the radius of the ring, this distance is d_𝒮𝒟=d_𝒮ℛ+2R+d_𝒟ℛ. A pictorial representation of the distances is reported in Fig.<ref>. Given a proper definition of the intra-ring and leads-ring distances, we can introduce the intra-ring nearest neighbor hopping

J_nn=g/d_nn^α, with d_nn=2Rsin(π/N_r),

and the leads-ring nearest neighbor hopping

K_nn=g/d_𝒮ℛ^α=g/d_𝒟ℛ^α,

where we suppose that d_𝒮ℛ=d_𝒟ℛ. In the main text we work in the weak leads-ring coupling limit K_nn≪ J_nn, which means d_𝒮ℛ^α≫ d_nn^α. Thus, if we consider K_nn=J_nn/M, with M an integer larger than one, the distances are related by d_𝒮ℛ=M^1/αd_nn.
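For concreteness, the following minimal sketch builds the distance matrix and the resulting 1/d^α couplings for this source-ring-drain layout. It is written in Python/NumPy purely for illustration (the full simulations quoted above use the Julia packages); the function name, parameter values, and the explicit all-to-all form of the couplings are illustrative assumptions.

```python
import numpy as np

def couplings(N_r=12, R=1.0, d_SR=2.0, d_DR=2.0, g=1.0, alpha=3.0):
    """Coupling matrix J_ij = g / d_ij**alpha for the source-ring-drain layout.

    Index 0 is the source, indices 1..N_r are the ring sites (site j at angle
    phi_j = 2*pi*j/N_r), index N_r+1 is the drain.  The source lies at distance
    d_SR from ring site N_r and the drain at distance d_DR from ring site N_r/2.
    """
    phi = 2 * np.pi * np.arange(1, N_r + 1) / N_r
    pos = [np.array([R + d_SR, 0.0])]                           # source
    pos += [R * np.array([np.cos(p), np.sin(p)]) for p in phi]  # ring sites 1..N_r
    pos += [np.array([-(R + d_DR), 0.0])]                       # drain
    pos = np.array(pos)

    d = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    J = np.zeros_like(d)
    mask = ~np.eye(len(pos), dtype=bool)
    J[mask] = g / d[mask] ** alpha
    return J, d

J, d = couplings()
print(d[0, -1])             # source-drain distance: d_SR + 2R + d_DR = 6.0
print(J[1, 2] / J[0, 12])   # intra-ring nearest-neighbor vs leads-ring coupling, J_nn/K_nn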
§ RING SPECTRAL ANALYSIS

In the weak coupling regime, the ring and leads modes can be treated separately, and their relation is crucial to understand the nature of the transport. The initial state of our protocol is a zero-energy state; thus, the presence of zero-energy modes in the ring is responsible for the population of the latter during the dynamics. In the absence of zero-energy modes in the ring, the latter will not be populated and the dynamics will result in coherent oscillations between source and drain. In this section, we study the effect of localized detunings on the spectrum of an XY model on a ring. We first use a perturbative approach to study the effect of the detunings on the spectrum of the nearest-neighbor model; it is instructive to understand the possible presence of resonances between ring and drain. Then, we numerically analyze the modification of the ring Hamiltonian spectrum due to the long-range hopping, paying particular attention to the changes in terms of resonances with zero-energy modes. We work in the one-excitation sector of the Hamiltonian.

§.§ Nearest neighbor XY energy shift due to detuning

Let us consider the generic Hamiltonian ℋ̂ = ℋ̂_0 + λℋ̂', where ℋ̂_0 is the unperturbed Hamiltonian and ℋ̂' is the perturbation, whose intensity λ is supposed to be small with respect to the unperturbed eigenvalues. We consider a Hamiltonian that possesses at least one pair of degenerate states |E_a^0⟩ and |E_b^0⟩ such that ℋ̂_0|E_a,b^0⟩=E^(0)|E_a,b^0⟩. From now on we work in the subspace spanned by these two states. Let us introduce the matrix element W_ij=⟨E_i^0|ℋ̂'|E_j^0⟩. The first-order correction to the unperturbed eigenvalue is <cit.>

E^(1)_± = [W_aa+W_bb±√((W_aa-W_bb)^2+4|W_ab|^2)]/2, E = E^(0) + λ E^(1)_±.

Now, we consider an XY model on a ring composed of N_r sites, perturbed by a localized detuning term:

ℋ̂ = J∑_j=1^N_r(σ̂_j^+ σ̂_j+1^- + H.c.) + Δ∑_j=1^N_r n̂_j δ_j,j_0 = ℋ̂_0 + Δℋ̂',

where Δ is the perturbative parameter and we work in the one-excitation sector. The unperturbed Hamiltonian can be diagonalized by mapping the spins to fermions via the Jordan-Wigner transformation <cit.>. The basic idea of the Jordan-Wigner transformation is to turn the spin Hamiltonian into a fermionic Hamiltonian. To do that, we introduce the JW fermions

ĉ_j = (∏_ℓ<jσ̂_ℓ^z) σ̂_j^-, ĉ_j^† = (∏_ℓ<jσ̂_ℓ^z)σ̂_j^+;

the resulting unperturbed model is a non-diagonal fermionic Hamiltonian. In the one-excitation sector, the fermionic operators are rotated through the transformation

ĉ_j^† = (1/√(N_r))∑_n=1^N_r e^2π i j n / N_r d̂_n^†

and the diagonal unperturbed Hamiltonian is obtained:

ℋ̂_0 =∑_n=1^N_r E_n^(0) d̂_n^† d̂_n, E_n^(0) = 2J cos(2π n/N_r).

We define the momentum k=2π n / N_r and observe that the eigenvalues of the Hamiltonian are all doubly degenerate, except for the ground and the maximum-energy states. The pairs of degenerate eigenstates are of the form |k⟩ and |2π - k⟩ (or, equivalently, |n⟩ and |N_r - n⟩), which refer to states with momenta k and 2π-k; thus, we can restrict our analysis to the subspace spanned by these two states. Here, the perturbation matrix element is W_k,k'=⟨k|n̂_j_0|k'⟩. To perform the computation it is convenient to express the number operator in terms of Jordan-Wigner fermions in momentum space. Having done that, the first matrix element of interest is

W_k,k=(1/N_r)∑_m,m'=1^N_r e^-2π i m j_0/ N_r e^2π i m' j_0/ N_r⟨n|d̂_m^†d̂_m'|n⟩ =(1/N_r)∑_m,m'=1^N_r e^2π i (m'-m) j_0/ N_rδ_n,mδ_m',n=1/N_r=W_2π - k,2π - k.

At the same time, the off-diagonal term is

W_k,2π-k=(1/N_r)∑_m,m'=1^N_r e^2π i (m'-m) j_0/ N_rδ_n,mδ_m',N_r-n=e^2ikj_0/N_r=W^*_2π-k,k.

Given the perturbation matrix elements, we can compute the first-order energy corrections using eq.(<ref>):

E_± = E^(0) + Δ(2± 2)/(2N_r).

Thus, one of the two degenerate states remains unchanged and the other one is shifted by 2Δ/N_r. The non-degenerate eigenvalues E_GS and E_max are shifted by a common factor W_k_GS,k_GS=⟨k_GS|n̂_j_0|k_GS⟩=W_k_max,k_max=⟨k_max|n̂_j_0|k_max⟩=Δ/N_r, where k_GS and k_max are the momenta associated with the ground and the maximum-energy state, respectively. In the case of two detunings with the same sign, we have to consider the perturbation Hamiltonian ℋ̂'=n̂_j_A+n̂_j_B; the perturbation matrix elements are

W_k,k=2/N_r=W_2π-k,2π-k, W_k,2π-k=(1/N_r)(e^2ikj_A+e^2ikj_B)=W^*_2π-k,k.

Given the matrix elements, we can compute the energy correction for the degenerate states via (<ref>):

E_± = E^(0) + Δ[2±√(2(1+cos(2k(j_A - j_B))))]/N_r.

If we set the barriers on opposite sides of the ring, i.e. j_A=N_r/4 and j_B=3N_r/4, so that |j_A-j_B|=N_r/2, we obtain a shift of either 0 or 4Δ/N_r for each degenerate pair, while the ground and the maximum-energy state are both shifted by 2Δ/N_r. Finally, it is straightforward to check that the first-order correction vanishes in presence of two barriers with opposite sign: it is sufficient to repeat the calculation done for two equal detunings with ℋ̂'=n̂_j_A-n̂_j_B, and for |j_A-j_B|=N_r/2 the first-order energy correction is zero for each state of the spectrum.
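These first-order shifts are easy to verify numerically, since in the one-excitation sector the ring Hamiltonian is just an N_r×N_r matrix. The following sketch (Python/NumPy for illustration; the function name and parameter values are arbitrary) compares exact diagonalization with and without a single small detuning:

```python
import numpy as np

def ring_H(N_r, J=1.0, detunings=None):
    """Nearest-neighbor XY ring in the one-excitation sector (N_r x N_r matrix);
    detunings is a dict {site: Delta} of local energy shifts (0-based sites)."""
    H = np.zeros((N_r, N_r))
    for j in range(N_r):
        H[j, (j + 1) % N_r] = H[(j + 1) % N_r, j] = J
    for j, delta in (detunings or {}).items():
        H[j, j] += delta
    return H

N_r, Delta = 12, 0.05                      # small Delta: perturbative regime
E0 = np.sort(np.linalg.eigvalsh(ring_H(N_r)))
E1 = np.sort(np.linalg.eigvalsh(ring_H(N_r, {3: Delta})))

# Each degenerate doublet 2J*cos(2*pi*n/N_r) should split into shifts of ~0 and
# ~2*Delta/N_r, while the two non-degenerate extremal levels shift by ~Delta/N_r.
print(np.round(E1 - E0, 5))
print(Delta / N_r, 2 * Delta / N_r)
```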
Figure <ref> reports the numerical results for the energy shifts in presence of localized detunings. Through the numerical computation we can easily understand what happens beyond first perturbative order. In particular, both for a single detuning and for two equal detunings, the first-order shifts, linear in Δ, are corrected at large values of the detuning and tend to flatten. The main difference between the two cases is the slope of the shift at small values of the detuning: the slope in the single-barrier case is smaller than in the double-barrier case. Thus, while in the latter the linear shift is sufficient to let two levels cross the zero-energy mode, in the single-barrier case it is not. We do not observe resonances in this case within our Δ range; it is possible that they are present for larger values of the detuning. In the case of two opposite detunings, instead, the degenerate states reported are not shifted even beyond first-order contributions; we therefore do not expect resonances, and the dynamics is characterized by coherent oscillations between source and drain.

§.§ Long range XY energy shift due to detuning

For a nearest-neighbor XY model, the spectrum is symmetric with respect to the zero-energy states, meaning that the energies are organized in pairs ±E_n. We also showed that, from the energy shifts, we expect the resonances to be symmetric under Δ→ -Δ. However, the dynamics reported in the main text shows that for long-range hopping this is not true: the ring population does not follow a symmetric behavior with respect to Δ=0. Therefore, a detailed analysis of the energy shifts in the spectrum is important to understand the asymmetry in the dynamics. The model with which we work here is (<ref>). Figure <ref> reports the energy spectrum in the three paradigmatic detuning configurations considered so far. Before analyzing the effect of the detuning, it is necessary to pay attention to the role of the inverse hopping range α for Δ=0. We immediately observe that for each value of α the degeneracies persist; on the other hand, a substantial shift of the levels is observed. Passing from α=5 to α=1, the levels lose their symmetry with respect to E=0 and tend to collapse towards a negative energy value. This means that the first E>0 and the first E<0 states no longer have symmetric energies with respect to zero. When the detuning is switched on, the energy levels are shifted and cross the zero-energy mode; the crossing is not symmetric in Δ simply because the starting energy levels from which the shift begins are no longer symmetric with respect to E=0. To be more quantitative, let us first consider the case of the single barrier reported in Fig. <ref>(a). Here, the levels that cause resonances are the orange and red ones. Increasing the hopping range, the red and the orange levels are shifted downwards; during the shift, one of the two levels crosses the zero-energy mode, giving rise to one resonance at negative Δ. The purple level of the nearest-neighbor limit is not shifted enough to give rise to a resonance at positive Δ in the interval considered so far.
Then, for one detuning, there is only one value or narrow range of Δ for which the ring becomes populated during the dynamics. Figure <ref>(b) reports the results for two equal detunings: for each value of α there are at least two values of Δ for which the zero-energy mode is crossed, for the same reason as in the nearest-neighbor XY model, namely the larger slope of the energy shift curve. In the two-opposite-detuning case reported in Fig. <ref>(c), the long-range hopping is responsible for a shift of the energy levels that is quadratic in the detuning Δ. As in the nearest-neighbor limit, also in presence of long-range hopping the first-order correction is zero; thus the shift scales as Δ^2, with a coefficient that increases as α decreases. For this reason, it is possible to have resonances symmetric with respect to Δ=0. In particular, in the plots reported, resonances appear at α=2. It is safe to assume that the presence of the resonances also depends strongly on the number of atoms in the ring. The size dependence of the resonance location can be immediately deduced from the perturbative results obtained in the previous section. Indeed, the unperturbed energy is E^(0)_k = 2J cos k, where k∼ 1/N_r, and the first-order shifts are directly proportional to Δ/N_r. In presence of long-range hoppings we do not have access to the exact spectrum of the Hamiltonian; however, at least for finite values of N_r, we can analyze how the resonance position moves with increasing size. To be more precise, we consider the case of a single barrier and plot the crossing states as functions of Δ. Here, by crossing states we mean the states that cross the zero-energy level in the interval Δ∈ [-10,0]. Figure <ref> reports them together with the first-order linear corrections E^(0)+2Δ/N_r, where we extracted E^(0) from the numerical data and assumed that the first-order perturbative shift is unchanged with respect to the nearest-neighbor case. We immediately observe that for small values of Δ the linear shift works well; moreover, increasing the size of the system, the slope of the shift becomes smaller. The energy E^(0) also decreases with increasing size, and this places the resonance at smaller values of |Δ| for larger systems. Finally, we observe that already for N_r=18 the position of the resonance can be qualitatively estimated from the linear shift as Δ_res≈ -E^(0)N_r/2, since it is located at a relatively small value of the detuning.

§ DYNAMICS IN ABSENCE OF DETUNINGS

Here we analyze the dynamics of the system in absence of any detuning. We expect to find a non-trivial dynamics even in this case: due to the geometry of the system, the excitation that travels through the system undergoes many scattering and splitting events. To get an idea of the complexity of the dynamics, let us consider an excitation subject only to nearest-neighbor hopping that travels in the system starting from the source. The excitation moves from the source to the nearest site on the ring. The coupling between source and ring is small compared to the intra-ring one; thus, once a small excitation fraction reaches the ring, it is quickly split between the two arms of the ring. The split excitation fractions travel along the two arms of the ring and meet at the end of them, just before the drain. Here they scatter: a part of the excitation hops to the drain, and the rest continues its dynamics in the arms of the ring.
During a long time evolution many scattering events occur, and the dynamics turns out to be complex and not easy to describe intuitively. Figure <ref> reports the population dynamics of the drain for different values of the inverse hopping range α. We consider α=1, α=3, the nearest-neighbor limit, and the particular value of α for which, in absence of detuning, the ring presents zero-energy levels resonant with the initial state. For instance, from Fig.<ref> we can observe that for N=16 this particular α_res value is close to 2; indeed, we find α_res=2.168. The other α_res values are reported in the Fig.<ref> labels. We observe that the drain population oscillates quite regularly except in the α_res case, where the ring becomes well populated and the oscillations are extremely irregular. In the other cases the ring is weakly populated, see Figs.<ref>,<ref>,<ref>, and it is thus important to follow the behavior of the drain density. The latter is characterized by low-frequency oscillations with large amplitude, modulated by fast oscillations with small amplitude. In the nearest-neighbor limit, the two frequencies can be evaluated analytically <cit.>,

ω_±/J_nn= ±N_rδ8+K̃_nn^2 N_r - 2 K̃_nn^2 δ4K̃_nn^2δ^2 - N_r(8+K̃_nn^2 N_r) + √((N_rδ8+K̃_nn^2 N_r - 2 K̃_nn^2 δ4K̃_nn^2δ^2 - N_r(8+K̃_nn^2 N_r))^2 + N_rδ8K̃_nn^28N_r + K̃_nn^2(N_r^2-4δ^2))

where K̃_nn=K_nn/J_nn and δ=(π/N_r). From the numerical computation we observe that the frequencies depend only weakly on the size of the system. The non-trivial form of the frequencies confirms how complex the dynamics is. Surprisingly, for α=3 the dynamics changes considerably with respect to the nearest-neighbor limit. One of the crucial differences is that here the leads are strongly connected to a plethora of sites near N_r and N_r/2, not just to one site. The first counter-intuitive result is the slow-down of the dynamics with respect to the nearest-neighbor case: the presence of additional hoppings does not cause a faster transport of the excitation from one site to the other. Secondly, the transport is slower for N=12, while N=16 is the fastest case; thus, a smaller size does not correspond to faster dynamics. Lastly, the case α=1 is the most intuitive one. The hopping range is sufficiently large to avoid complex internal effects in the ring. The oscillations are faster than in all the other cases considered, which have a smaller hopping range; moreover, they become slower as the size of the system increases. Thus, the dynamics can be complex and non-intuitive even in the absence of detuning barriers in the system; in particular, the Rydberg limit α=3 presents interesting features due to the complex dynamics that takes place in the ring.
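The protocol itself is straightforward to reproduce numerically: in the one-excitation sector the full source-ring-drain Hamiltonian is an (N_r+2)×(N_r+2) matrix, and the populations follow from |ψ(t)⟩=e^{-iĤt}|𝒮⟩. A minimal Python/NumPy sketch, reusing the illustrative couplings() helper from the distance appendix above (all parameters are placeholders rather than the values used for the figures), is:

```python
import numpy as np

def populations(J, times, detunings=None):
    """|psi(t)> = exp(-i H t)|source> in the one-excitation sector.

    J: coupling matrix from the geometry sketch in the distance appendix
       (index 0 = source, 1..N_r = ring, N_r+1 = drain);
    detunings: dict {site: Delta} of local energy shifts.
    Returns |psi_j(t)|^2 with shape (len(times), N_r+2)."""
    H = J.astype(complex).copy()
    for site, delta in (detunings or {}).items():
        H[site, site] += delta
    evals, evecs = np.linalg.eigh(H)
    psi0 = np.zeros(len(H), dtype=complex)
    psi0[0] = 1.0                                   # excitation starts in the source
    c = evecs.conj().T @ psi0                       # overlaps <E_n|source>
    psi_t = (np.exp(-1j * np.outer(times, evals)) * c) @ evecs.T
    return np.abs(psi_t) ** 2

# illustrative run: 12-site ring, Rydberg-like alpha = 3, one detuned site at N_r/4
Jmat, _ = couplings(N_r=12, alpha=3.0)
t = np.linspace(0.0, 300.0, 600)
p = populations(Jmat, t, detunings={3: -0.5})
source, drain, ring = p[:, 0], p[:, -1], p[:, 1:-1].sum(axis=1)
```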
§ INFORMATION ON THE DYNAMICS FROM THE COMPLETE SPECTRUM

In <ref> we showed that important information on the dynamics can be extracted from the bare ring Hamiltonian, assuming that the ring and the leads are weakly coupled. Here, we focus our attention on the complete Hamiltonian, describing how the resonances manifest themselves in the state populations and how the nature of the transport is modified and degraded by strong coupling and disorder.

§.§ Full Hamiltonian eigenstate nature

In the one-excitation sector, a generic eigenstate of the Hamiltonian can be written in the computational basis as

|E_n⟩=c_𝒮^(n)|𝒮⟩+|ℛ⟩+c_𝒟^(n)|𝒟⟩

where |𝒮⟩=|↑⟩_𝒮⊗|↓,...,↓⟩_ℛ⊗|↓⟩_𝒟, |ℛ⟩=∑_j∈ℛc_j^(n)|↓⟩_𝒮⊗|↓,...,↑_j,...,↓⟩_ℛ⊗|↓⟩_𝒟 and |𝒟⟩=|↓⟩_𝒮⊗|↓,...,↓⟩_ℛ⊗|↑⟩_𝒟. In absence of resonances, it is safe to assume that the leads and the ring degrees of freedom are separated. Thus, we group the states of the Hamiltonian into three classes: {|E_n_l⟩, |E_n_r⟩, |E_n_lr⟩}. |E_n_l⟩ are the states in which all the excitation weight is in the leads, |E_n_r⟩ are the states in which only the ring is populated, and |E_n_lr⟩ are the states in which the population is distributed between ring and leads; the labels satisfy the relation n_l+n_r+n_lr=N. Since the initial state of our protocol is |ψ(0)⟩=|𝒮⟩, the states |E_n_r⟩ do not contribute to the dynamics; thus, the generic time-evolved state can be written as

|ψ(t)⟩=∑_n_l e^-iE_n_lt⟨E_n_l|𝒮⟩|E_n_l⟩+∑_n_lr e^-iE_n_lrt⟨E_n_lr|𝒮⟩|E_n_lr⟩.

Moreover, if the Hamiltonian spectrum does not contain uniform states |E_n_lr⟩, the dynamics is dominated by the lead states |E_n_l⟩=c_𝒮^(n_l)|𝒮⟩+c_𝒟^(n_l)|𝒟⟩ and results in oscillations between source and drain of the form

|ψ(t)⟩=∑_n_l e^-iE_n_lt(c_𝒮^(n_l))^*[c_𝒮^(n_l)|𝒮⟩+c_𝒟^(n_l)|𝒟⟩].

The values of the lead-state energies and of their coefficients are fundamental to obtain information on the source-drain oscillation frequency and amplitude. On the other hand, the ring becomes populated only when the Hamiltonian has |E_n_lr⟩ states; thus, we expect to observe |E_n_lr⟩ states in correspondence of the resonances. In this section, we focus our attention on the eigenstate populations P^(n)_j=⟨E_n|n̂_j|E_n⟩, which we compute for source, ring and drain over the whole spectrum. Figure <ref> reports the behavior of the eigenstate populations in the leads and in the ring for three different values of the size of the system; the inverse hopping range is α=3 and only one localized barrier at j_0=N_r/4 is taken into account. We immediately observe that the majority of the states of the system are ring states, for which P_ℛ^(n)≈1 and P_𝒮,𝒟^(n)≈0; they do not contribute to the dynamics if the system is initialized in |𝒮⟩. From a comparison between Fig. <ref> and Fig. <ref> we observe that, in correspondence of the resonance, the ring population is 1/2 for two states of the spectrum: the excitation is equally distributed between ring and leads, and this results in a filling of the ring during the dynamics. Far from resonance, the Hamiltonian states are approximately ring or lead states with P_ℛ^(n)≈0,1; thus, the dynamics involves lead states and results in coherent oscillations between source and drain. Let us observe that lead states have P_ℒ^(n)=P_𝒮^(n)+P_𝒟^(n)=1; therefore, the nature of the dynamics strongly depends on the source and drain populations of these states. In particular, for Δ< Δ_res there are source-localized states, meaning states in which the source population dominates over all the others. In absence of uniform states, the dynamics follows Eq. (<ref>), which results in |ψ(t)⟩≈e^iϕ|𝒮⟩ if there is a dominant localized source state. Thus, for Δ< Δ_res the excitation remains localized in the source, or at most a small fraction of it moves to the drain. The value of the population shows an irregular dependence on the size: the position of the resonance is size-dependent and various finite-size effects come into play. Far from resonance, the size dependence is not regular, and the source and drain populations follow different behaviors without a well-defined dependence on the size. Thus, for each value of the size there is a resonance, but the dynamics far from the resonance strongly depends on N.

§.§ Eigenstate leads population for different leads-ring coupling

In the previous subsection we analyzed the full-spectrum populations in the weak coupling regime. Here, we study them far from the weak coupling regime.
We study the system in terms of the leads population; in particular, we analyze the leads magnetization, which is directly related to the leads and ring populations:

σ̂_ℒ^z = σ̂_𝒮^z + σ̂_𝒟^z = 2(n̂_ℒ - 1̂) = -2n̂_ℛ.

This means that lead and ring states have ⟨σ̂_ℒ^z⟩=0 and ⟨σ̂_ℒ^z⟩=-2, respectively. We focus again on the Rydberg case α=3; Fig. <ref> reports the leads magnetization for three different values of the leads-ring coupling. Increasing the coupling, the population structure of the eigenstates changes drastically. For small coupling, only three states behave similarly to |E_n_lr⟩ states around the resonance point, namely the 7-th, 8-th and 9-th excited states. Increasing K_nn/J_nn, both |E_n_r⟩ and |E_n_l⟩ states become |E_n_lr⟩ states. The spectrum is no longer separated into well-defined classes; this results in a dynamics that is not characterized by coherent source-drain oscillations, and the ring is populated not only for specific Δ values. Thus, the transport is no longer controllable by tuning Δ.

§.§ Robustness against noise

Here we study the robustness of the results in the presence of disorder. In particular, we work with the impurity Hamiltonian

ℋ̂_imp=∑_j∈ℛΔ_j n̂_j

where Δ_j is randomly extracted from the interval [-ϵ,+ϵ] for each j, except for j=N_r/4, for which Δ_j belongs to the interval [Δ-ϵ,Δ+ϵ]. Therefore, this corresponds to the case of the single localized barrier in presence of noise. We study the effect of disorder on the leads population for N_rea=100 disorder realizations in the single-barrier case, for N=12 and α=3. We report the behavior of the expectation value

⟨E_n|σ̂_ℒ^z|E_n⟩ = (1/N_rea)∑_β=1^N_rea⟨E_n^(β)|σ̂_ℒ^z|E_n^(β)⟩,

which is the leads magnetization over the Hamiltonian spectrum averaged over many disorder realizations; |E_n^(β)⟩ is the n-th eigenstate of the Hamiltonian for the β-th disorder realization. Fig.<ref> reports the behavior of the leads magnetization for different values of the disorder. We observe that the presence of disorder can be detrimental for the population structure: for strong disorder (ϵ=1), ring and lead states can become |E_n_lr⟩ states. However, a small amount of disorder (ϵ=0.1) does not modify the structure of the populations. In the intermediate case ϵ=0.5 the population behavior is slightly modified but retains the zero-disorder structure, with a resonance and a well-defined separation between lead and ring states far from it.

§ PHASE DIFFERENCE DYNAMICS

Here we provide more details on the phase effects generated by the presence of localized detunings. In the first part we show how a single localized detuning in the center of a chain of three atoms has significant effects on the phase associated with each site. For this reason, we expect that a phase difference arises between two different paths with detunings located in different positions. Thus, in the second part of the section and in the main text we report the phase difference between two sites of the two arms in different conditions. We show that its dynamics is related to that of the populations.

§.§ Detuning-dependent phase difference in the three sites system

The presence of a localized detuning introduces a time-dependent phase between different sites of the system. To see this, let us consider the simple three-site problem. We consider only nearest-neighbor hopping and put a localized detuning in the middle of the chain.
As a first stage, we fix the detuning Δ=0 and consider the three-site Hamiltonian

ℋ̂_hop=J(σ̂^+_1σ̂^-_2 + σ̂^+_2σ̂^-_3 +H.c.).

We take the initial state |ψ(0)⟩=|↑_1,↓_2,↓_3⟩ and evolve it through the Schrödinger equation. The resulting time-evolved state is

|ψ(t)⟩= (1/2)(cos(√(2)Jt) +1)|↑_1,↓_2,↓_3⟩ + (i/√(2))sin(√(2)Jt) |↓_1,↑_2,↓_3⟩ + (1/2)(cos(√(2)Jt) -1)|↓_1,↓_2,↑_3⟩;

in absence of detuning, there is no relative phase between sites 1 and 3. In presence of a localized detuning, the Hamiltonian is Ĥ=ℋ̂_hop+Δn̂_2, and the resulting time-evolved state is

|ψ(t)⟩= (e^-i(Δ-f(Δ,J))t/2/𝒩_1^2 + e^-i(Δ+f(Δ,J))t/2/𝒩_2^2+1/2)|↑_1,↓_2,↓_3⟩+ ((Δ-f(Δ,J))e^-i(Δ-f(Δ,J))t/2/(2J𝒩_1^2) + (Δ+f(Δ,J))e^-i(Δ+f(Δ,J))t/2/(2J𝒩_2^2))|↓_1,↑_2,↓_3⟩+ (e^-i(Δ-f(Δ,J))t/2/𝒩_1^2 + e^-i(Δ+f(Δ,J))t/2/𝒩_2^2-1/2)|↓_1,↓_2,↑_3⟩

where 𝒩_1^2=2+(Δ-f(Δ,J))^2/4J^2, 𝒩_2^2=2+(Δ+f(Δ,J))^2/4J^2 and f(Δ,J)=√(Δ^2+8J^2). In this way, the real and imaginary parts of the coefficient of |↑_1,↓_2,↓_3⟩ are

Re(c_↑↓↓(t))=cos((Δ-f)t/2)/𝒩_1^2+cos((Δ+f)t/2)/𝒩_2^2+1/2, Im(c_↑↓↓(t))=-sin((Δ-f)t/2)/𝒩_1^2-sin((Δ+f)t/2)/𝒩_2^2.

The phase associated with this state is ϕ_↑↓↓(t)=tan^-1[Im(c_↑↓↓(t))/Re(c_↑↓↓(t))], while the phase associated with the state |↓_1,↓_2,↑_3⟩ is ϕ_↓↓↑(t)=tan^-1[Im(c_↑↓↓(t))/(Re(c_↑↓↓(t))-1)]; therefore, the two phases are different. In presence of detuning, a non-zero relative phase between the first and the last site of the chain is present. The difference with respect to the zero-detuning case is clear: the states |↑,↓,↓⟩ and |↓,↓,↑⟩ acquire a Δ-dependent phase, and also the state |↓,↑,↓⟩ has a Δ-dependent phase, differently from the zero-detuning case in which it is fixed to ±π.

§.§ Phase difference for different locations of detunings

If the two arms differ in the detuning they host, we expect to see a non-trivial phase difference between their sites. To evaluate the phase shift formally in the set-up considered in the main text, we write a generic time-evolved state of the system in the position basis,

|ψ(t)⟩=c_𝒮(t)|𝒮⟩+∑_j∈ℛc_j(t)|↓⟩_𝒮⊗|↓,...,↑_j,...,↓⟩_ℛ⊗|↓⟩_𝒟+c_𝒟(t)|𝒟⟩,

and we observe that the two-body correlator

⟨σ̂_i^+σ̂_j^-⟩=c_i^*(t)c_j(t)=|c_i(t)||c_j(t)|e^i(ϕ_j(t)-ϕ_i(t))

is directly related to the phase shift. Thus, the phase difference between the two sites i and j is accessible through δϕ(t)=ϕ_j(t)-ϕ_i(t)=arg(⟨σ̂_i^+σ̂_j^-⟩). We can fix i=N_r/2-1 and j=N_r/2+1 in order to access the difference between the phases accumulated by the excitation fractions crossing the two arms of the ring.
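In the single-excitation sector the correlator reduces to a product of amplitudes, so δϕ(t) can be extracted directly from the numerically evolved state. A minimal Python/NumPy sketch, using the same illustrative layout and helpers as in the previous appendices (site indices and parameters are placeholders), is:

```python
import numpy as np

def phase_difference(J, times, detunings=None):
    """delta_phi(t) = arg <sigma_i^+ sigma_j^-> = arg(c_i^* c_j) for the two arm
    sites adjacent to the drain, i = N_r/2 - 1 and j = N_r/2 + 1.

    J is the coupling matrix from the geometry sketch (index 0 = source,
    1..N_r = ring sites, N_r+1 = drain); detunings = {site: Delta}."""
    H = J.astype(complex).copy()
    for site, delta in (detunings or {}).items():
        H[site, site] += delta
    evals, evecs = np.linalg.eigh(H)
    psi0 = np.zeros(len(H), dtype=complex)
    psi0[0] = 1.0
    amps = (np.exp(-1j * np.outer(times, evals)) * (evecs.conj().T @ psi0)) @ evecs.T

    N_r = len(H) - 2
    i, j = N_r // 2 - 1, N_r // 2 + 1           # ring site k sits at matrix index k
    corr = np.conj(amps[:, i]) * amps[:, j]     # <sigma_i^+ sigma_j^-> = c_i^* c_j
    return np.angle(corr)                       # phi_j(t) - phi_i(t)

# illustrative call, reusing the hypothetical couplings() helper defined earlier
dphi = phase_difference(couplings(N_r=12, alpha=3.0)[0],
                        np.linspace(0.0, 300.0, 600), detunings={3: -0.5})
```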
Figures <ref>, <ref>, and <ref> report the phase difference dynamics and detuning dependence for the configurations shown in Fig.<ref>, showing their relation with the population dynamics. Here, we show the phase difference for alternative choices of the detuning locations in the Rydberg case α=3. We immediately observe that when the two detunings have the same sign and their locations are symmetric with respect to the leads [Fig. <ref>(a)], the phase difference is zero everywhere. Indeed, due to the symmetry of the system, the excitation acquires the same phase in the two arms. Differently, if the detunings are opposite, strong phase difference oscillations appear, and the phase difference is never zero except at Δ=0 [Fig. <ref>(c)]. Figure <ref>(b,d) shows the phase difference in a situation in which the distance between the detunings is the same as in the main text, meaning j_B-j_A=N_r/2, but they are located in different positions. As in the main-text case, the equal-barrier configuration presents two resonances at Δ_res analogous to those of the main-text configuration; they are easily recognizable by fast phase difference oscillations. However, far from resonance, the dynamics is different and strongly depends on the barrier location. The same applies to the opposite-barrier case: there is no resonance, but the nature of the oscillations is different.

§ CHAIN

In this section we consider a situation analogous to those considered in the main text, in which the channel is a chain of atoms. Thus, the only difference with respect to the model considered in the main text is a new definition of the distances,

d_ij=a|i-j|, d_𝒮𝒟=d_𝒮ℛ+(N_c-1)a+d_𝒟ℛ, d_𝒮,j=d_𝒮ℛ+a(j-1), d_𝒟,j=d_𝒟ℛ+a(N_c-j);

a is the lattice spacing in the chain, d_𝒮ℛ and d_𝒟ℛ are the distances between the leads and the chain, and N_c=N-2 is the number of atoms in the channel. The sites are labeled in such a way that j=1 is the site connected to the source and j=N_c is the site connected to the drain. In the nearest-neighbor limit, the channel Hamiltonian in absence of detunings can be diagonalized by first applying a Jordan-Wigner transformation and then rotating the creation and destruction operators appropriately. The unperturbed Hamiltonian turns out to be <cit.>

ℋ̂_0 =∑_n=1^N_c E_n^(0) d̂_n^† d̂_n, E_n^(0) = 2J cos(π n/(N_c+1)), ĉ_j=√(2/(N_c+1))∑_n=1^N_c sin(π j n/(N_c+1)) d̂_n.

In this case the spectrum is not degenerate; thus the barrier simply shifts the energy levels without splitting any degeneracy. For a non-degenerate spectrum, the first-order correction in presence of a single detuning located at position j_0 can be straightforwardly calculated:

Δ E_n^(1)=Δ⟨n|n̂_j_0|n⟩=(2Δ/(N_c+1))sin^2(π n j_0/(N_c+1)).

We are interested in the role of the long-range hopping. Fig. <ref>(a) reports the energy spectrum of the uncoupled chain Hamiltonian near the zero-energy value for different values of α. For Δ=0 and in the nearest-neighbor case, the spectrum is symmetric with respect to E=0, and a linear shift of the levels is then caused by Δ. As in the ring case, with a single detuning the shift is not sufficient to have a resonance at zero energy in the detuning interval considered. Increasing the hopping range, and thus decreasing α, the levels are essentially shifted downwards in energy. For this reason, for finite α it is possible that resonances appear. For α=3, we observe a resonance with the zero-energy state at a value of Δ around zero. The presence of a resonance allows us to state that the channel will be populated during the dynamics. However, as observed previously, the presence of resonances is not sufficient to have complete information on the dynamics: it is also important to understand how the complete spectrum of leads and chain behaves in Δ. In Fig. <ref>(b) we show the channel population for three different values of the leads-chain coupling K_nn/J_nn. We immediately observe that in this case, even for K_nn/J_nn=0.1, chain and lead states are not well separated. In particular, far from resonance, many eigenstates are of the form |E_n_lr⟩, and so the dynamics will not result in coherent oscillations between source and drain. For K_nn/J_nn=0.05 the separation is better defined for large values of Δ, while around the resonance the channel population is 1/2. Thus, we can conclude that to decouple leads and chain we need a larger distance between them than in the ring case. Finally, Fig. <ref>(c) reports the dynamics of the source, chain and drain population for K_nn/J_nn=0.05.
In proximity of the resonance the ring is heavily populated to the disadvantage of the source. Increasing the value of the detuning and going far from resonance, the transport through the ring is reduced; however, theΔ-window in which the ring is populated and is not simply a narrow region as in the ring case.10beenakker1991quantum CWJ Beenakker and Henk van Houten.Quantum transport in semiconductor nanostructures.InSolid state physics, volume 44, pages 1–228. Elsevier, 1991.blanter2000shot Ya M Blanter and Markus Büttiker.Shot noise in mesoscopic conductors.Physics reports, 336(1-2):1–166, 2000.datta2005quantum Supriyo Datta.Quantum transport: atom to transistor.Cambridge university press, 2005.nazarov2009quantum Yuli V Nazarov and Yaroslav M Blanter.Quantum transport: introduction to nanoscience.Cambridge university press, 2009.stone1992theory AD Stone.Theory of coherent quantum transport.InPhysics of Nanostructures, pages 65–100. Institute of Physics, 1992.chien2015quantum Chih-Chun Chien, Sebastiano Peotta, and Massimiliano Di Ventra.Quantum transport in ultracold atoms.Nature Physics, 11(12):998–1004, 2015.stadler2012observing David Stadler, Sebastian Krinner, Jakob Meineke, Jean-Philippe Brantut, and Tilman Esslinger.Observing the drop of resistance in the flow of a superfluid fermi gas.Nature, 491(7426):736, 2012.husmann2015connecting Dominik Husmann, Shun Uchino, Sebastian Krinner, Martin Lebrat, Thierry Giamarchi, Tilman Esslinger, and Jean-Philippe Brantut.Connecting strongly correlated superfluids by a quantum point contact.Science, 350(6267):1498–1501, 2015.corman2019quantized Laura Corman, Philipp Fabritius, Samuel Häusler, Jeffrey Mohan, Lena H Dogra, Dominik Husmann, Martin Lebrat, and Tilman Esslinger.Quantized conductance through a dissipative atomic point contact.arXiv preprint arXiv:1907.06436, 2019.haug2019aharonov Tobias Haug, Hermanni Heimonen, Rainer Dumke, Leong-Chuan Kwek, and Luigi Amico.Aharonov-bohm effect in mesoscopic bose-einstein condensates.Physical Review A, 100(4):041601, 2019.amico2021roadmap Luigi Amico, Malcolm Boshier, Gerhard Birkl, Anna Minguzzi, Christian Miniatura, L-C Kwek, Davit Aghamalyan, Veronica Ahufinger, Dana Anderson, Natan Andrei, et al.Roadmap on atomtronics: State of the art and perspective.AVS Quantum Science, 3(3), 2021.amico2022colloquium Luigi Amico, Dana Anderson, Malcolm Boshier, Jean-Philippe Brantut, Leong-Chuan Kwek, Anna Minguzzi, and Wolf von Klitzing.Colloquium: Atomtronic circuits: From many-body physics to quantum technologies.Reviews of Modern Physics, 94(4):041001, 2022.buttiker1984quantum M Büttiker, Y Imry, and M Ya Azbel.Quantum oscillations in one-dimensional normal-metal rings.Physical Review A, 30(4):1982, 1984.gefen1984quantum Yuval Gefen, Yoseph Imry, and M Ya Azbel.Quantum oscillations and the aharonov-bohm effect for parallel resistors.Physical review letters, 52(2):129, 1984.washburn1986aharonov Sean Washburn and Richard A Webb.Aharonov-bohm effect in normal metal quantum coherence and transport.Advances in Physics, 35(4):375–422, 1986.griffiths2003ouantum David J Griffiths.Ouantum mechanics.2003.ryu2013experimental C Ryu, PW Blackburn, AA Blinova, and MG Boshier.Experimental realization of josephson junctions for an atom squid.Physical review letters, 111(20):205301, 2013.ryu2020quantum Changhyun Ryu, EC Samson, and Malcolm Geoffrey Boshier.Quantum interference of currents in an atomtronic squid.Nature communications, 11(1):3338, 2020.krzyzanowska2023matter Katarzyna A Krzyzanowska, Jorge Ferreras, Changhyun 
Ryu, Edward Carlo Samson, and Malcolm G Boshier.Matter-wave analog of a fiber-optic gyroscope.Physical Review A, 108(4):043305, 2023.barrett2014sagnac Brynle Barrett, Rémy Geiger, Indranil Dutta, Matthieu Meunier, Benjamin Canuel, Alexandre Gauguet, Philippe Bouyer, and Arnaud Landragin.The sagnac effect: 20 years of development in matter-wave interferometry.Comptes Rendus Physique, 15(10):875–883, 2014.blatt2012quantum Rainer Blatt and Christian F Roos.Quantum simulations with trapped ions.Nature Physics, 8(4):277–284, 2012.monroe2021programmable Christopher Monroe, Wes C Campbell, L-M Duan, Z-X Gong, Alexey V Gorshkov, Paul W Hess, Rajibul Islam, Kihwan Kim, Norbert M Linke, Guido Pagano, et al.Programmable quantum simulations of spin systems with trapped ions.Reviews of Modern Physics, 93(2):025001, 2021.browaeys2020many Antoine Browaeys and Thierry Lahaye.Many-body physics with individually controlled rydberg atoms.Nature Physics, 16(2):132–142, 2020.bernien2017probing Hannes Bernien, Sylvain Schwartz, Alexander Keesling, Harry Levine, Ahmed Omran, Hannes Pichler, Soonwon Choi, Alexander S Zibrov, Manuel Endres, Markus Greiner, et al.Probing many-body dynamics on a 51-atom quantum simulator.Nature, 551(7682):579–584, 2017.barredo2016atom Daniel Barredo, Sylvain De Léséleuc, Vincent Lienhard, Thierry Lahaye, and Antoine Browaeys.An atom-by-atom assembler of defect-free arbitrary two-dimensional atomic arrays.Science, 354(6315):1021–1023, 2016.schymik2020enhanced Kai-Niklas Schymik, Vincent Lienhard, Daniel Barredo, Pascal Scholl, Hannah Williams, Antoine Browaeys, and Thierry Lahaye.Enhanced atom-by-atom assembly of arbitrary tweezer arrays.Physical Review A, 102(6):063107, 2020.birkl1992multiple Gerhard Birkl, Sven Kassner, and Herbert Walther.Multiple-shell structures of laser-cooled 24mg+ ions in a quadrupole storage ring.Nature, 357(6376):310–313, 1992.kiesenhofer2023controlling Dominik Kiesenhofer, Helene Hainzer, Artem Zhdanov, Philip C Holz, Matthias Bock, Tuomas Ollikainen, and Christian F Roos.Controlling two-dimensional coulomb crystals of more than 100 ions in a monolithic radio-frequency trap.PRX Quantum, 4(2):020317, 2023.maier2019environment Christine Maier, Tiff Brydges, Petar Jurcevic, Nils Trautmann, Cornelius Hempel, Ben P Lanyon, Philipp Hauke, Rainer Blatt, and Christian F Roos.Environment-assisted quantum transport in a 10-qubit network.Physical review letters, 122(5):050501, 2019.barredo2015coherent Daniel Barredo, Henning Labuhn, Sylvain Ravets, Thierry Lahaye, Antoine Browaeys, and Charles S Adams.Coherent excitation transfer in a spin chain of three rydberg atoms.Physical review letters, 114(11):113002, 2015.yang2019quantum Fan Yang, Shuo Yang, and Li You.Quantum transport of rydberg excitons with synthetic spin-exchange interactions.Physical Review Letters, 123(6):063001, 2019.arrazola2016digital Iñigo Arrazola, Julen S Pedernales, Lucas Lamata, and Enrique Solano.Digital-analog quantum simulation of spin models in trapped ions.Scientific reports, 6(1):30534, 2016.richerme2014non Philip Richerme, Zhe-Xuan Gong, Aaron Lee, Crystal Senko, Jacob Smith, Michael Foss-Feig, Spyridon Michalakis, Alexey V Gorshkov, and Christopher Monroe.Non-local propagation of correlations in quantum systems with long-range interactions.Nature, 511(7508):198–201, 2014.bohnet2016quantum Justin G Bohnet, Brian C Sawyer, Joseph W Britton, Michael L Wall, Ana Maria Rey, Michael Foss-Feig, and John J Bollinger.Quantum spin dynamics and entanglement generation with hundreds of trapped 
ions.Science, 352(6291):1297–1301, 2016.brown2016co Kenneth R Brown, Jungsang Kim, and Christopher Monroe.Co-designing a scalable quantum computer with trapped atomic ions.npj Quantum Information, 2(1):1–10, 2016.rajabi2019dynamical Fereshteh Rajabi, Sainath Motlakunta, Chung-You Shih, Nikhil Kotibhaskar, Qudsia Quraishi, Ashok Ajoy, and Rajibul Islam.Dynamical hamiltonian engineering of 2d rectangular lattices in a one-dimensional ion chain.npj Quantum Information, 5(1):32, 2019.duca2023orientational Lucia Duca, Naoto Mizukami, Elia Perego, Massimo Inguscio, and Carlo Sias.Orientational melting in a mesoscopic system of charged particles.Physical Review Letters, 131(8):083602, 2023.britton2012engineered Joseph W Britton, Brian C Sawyer, Adam C Keith, C-C Joseph Wang, James K Freericks, Hermann Uys, Michael J Biercuk, and John J Bollinger.Engineered two-dimensional ising interactions in a trapped-ion quantum simulator with hundreds of spins.Nature, 484(7395):489–492, 2012.yoshimura2015creation Bryce Yoshimura, Marybeth Stork, Danilo Dadic, Wesley C Campbell, and James K Freericks.Creation of two-dimensional coulomb crystals of ions in oblate paul traps for quantum simulations.EPJ Quantum Technology, 2:1–17, 2015.richerme2016two Philip Richerme.Two-dimensional ion crystals in radio-frequency traps for quantum simulation.Physical Review A, 94(3):032320, 2016.noguchi2014aharonov Atsushi Noguchi, Yutaka Shikano, Kenji Toyoda, and Shinji Urabe.Aharonov–bohm effect in the tunnelling of a quantum rotor in a linear paul trap.Nature communications, 5(1):3868, 2014.li2017realization Hao-Kun Li, Erik Urban, Crystal Noel, Alexander Chuang, Yang Xia, Anthony Ransford, Boerge Hemmerling, Yuan Wang, Tongcang Li, Hartmut Häffner, et al.Realization of translational symmetry in trapped cold ion rings.Physical review letters, 118(5):053001, 2017.chen2023continuous Cheng Chen, Guillaume Bornet, Marcus Bintz, Gabriel Emperauger, Lucas Leclerc, Vincent S Liu, Pascal Scholl, Daniel Barredo, Johannes Hauschild, Shubhayu Chatterjee, et al.Continuous symmetry breaking in a two-dimensional rydberg array.Nature, 616(7958):691–695, 2023.bornet2023scalable Guillaume Bornet, Gabriel Emperauger, Cheng Chen, Bingtian Ye, Maxwell Block, Marcus Bintz, Jamie A Boyd, Daniel Barredo, Tommaso Comparin, Fabio Mezzacapo, et al.Scalable spin squeezing in a dipolar rydberg atom array.arXiv preprint arXiv:2303.08053, 2023.lienhard2020realization Vincent Lienhard, Pascal Scholl, Sebastian Weber, Daniel Barredo, Sylvain de Léséleuc, Rukmani Bai, Nicolai Lang, Michael Fleischhauer, Hans Peter Büchler, Thierry Lahaye, et al.Realization of a density-dependent peierls phase in a synthetic, spin-orbit coupled rydberg system.Physical Review X, 10(2):021031, 2020.averin1990virtual DV Averin and Yu V Nazarov.Virtual electron diffusion during quantum tunneling of the electric charge.Physical Review Letters, 65(19):2446, 1990.tran2008sequential TB Tran, IS Beloborodov, Jingshi Hu, XM Lin, TF Rosenbaum, and HM Jaeger.Sequential tunneling and inelastic cotunneling in nanoparticle arrays.Physical Review B, 78(7):075437, 2008.jurcevic2014quasiparticle Petar Jurcevic, Ben P Lanyon, Philipp Hauke, Cornelius Hempel, Peter Zoller, Rainer Blatt, and Christian F Roos.Quasiparticle engineering and entanglement propagation in a quantum many-body system.Nature, 511(7508):202–205, 2014.kramer2018quantumoptics Sebastian Krämer, David Plankensteiner, Laurin Ostermann, and Helmut Ritsch.Quantumoptics. 
jl: A julia framework for simulating open quantum systems.Computer Physics Communications, 227:109–116, 2018.griffiths2018introduction David J Griffiths and Darrell F Schroeter.Introduction to quantum mechanics.Cambridge university press, 2018.jordan1993paulische Pascual Jordan and Eugene Paul Wigner. Über das paulische äquivalenzverbot.Springer, 1993.de2009x Antonella De Pasquale and Paolo Facchi. XY model on the circle: Diagonalization, spectrum, and forerunners of the quantum phase transition.Physical Review A, 80(3):032102, 2009.hegde2015quench Suraj Hegde, Vasudha Shivamoggi, Smitha Vishveshwara, and Diptiman Sen.Quench dynamics and parity blocking in majorana wires.New Journal of Physics, 17(5):053036, 2015.
{ "authors": [ "Francesco Perciavalle", "Oliver Morsch", "Davide Rossini", "Luigi Amico" ], "categories": [ "cond-mat.quant-gas", "quant-ph" ], "primary_category": "cond-mat.quant-gas", "published": "20231027083120", "title": "Coherent excitation transport through ring-shaped networks" }
Language Representation Models (LRMs) trained with real-world data may capture and exacerbate undesired bias and cause unfair treatment of people in various demographic groups. Several techniques have been investigated for applying interventions to LRMs to remove bias in benchmark evaluations on, for example, word embeddings. However, the negative side effects of debiasing interventions are usually not revealed in downstream tasks. We propose a set of evaluations for assessing the fairness of debiasing. In this work, we examine four debiasing techniques on a real-world text classification task and show that reducing bias comes at the cost of degrading performance for all demographic groups, including those the debiasing techniques aim to protect. We advocate that a debiasing technique should have good downstream performance under the constraint of ensuring no harm to the protected group.

§ INTRODUCTION

Suppose a hiring hospital wants to offer targeted advertisements for an open surgeon position. The employer from the hospital mines users' bios on social media to predict whether an individual is a surgeon, in order to determine whether to offer the relevant advertisement. To make the prediction, they use a pre-trained Language Representation Model to encode the text and then fine-tune a classification model on top of the representation. The employer decides to use a debiasing technique on the mined data to give equal opportunity to people with different attributes. However, the employer observes that female surgeons receive the advertisement at much lower rates than male surgeons. Even worse, the fraction of female surgeons seeing the advertisement went down after the debiasing intervention. In this work, we show that such scenarios are highly plausible with existing debiasing techniques.

Undesired bias or social stereotypes have been found in natural language representations <cit.>, and systematic ways of debiasing have been widely discussed <cit.>. Recent works focus on developing techniques to detect, evaluate and mitigate bias in LRMs and reduce harm to marginalized individuals and groups <cit.>. Some of those works measure bias with dedicated metrics <cit.> and datasets <cit.> to investigate biases within a specific natural language processing (NLP) task, such as text classification <cit.> or language generation <cit.>. Other works design debiasing techniques for specific applications such as patient notes <cit.>, clinical record de-identification <cit.>, or dissecting ML-guided health decisions <cit.>. Due to the variation in datasets and application areas, it is hard to evaluate the downstream performance of debiasing techniques. Previous work has raised this concern <cit.> by utilizing Equality of Opportunity and evaluating the downstream model performance with debiasing across all groups in the dataset. We expand this consideration to examine how debiasing affects group-wise performance, and we evaluate other well-known debiasing techniques. In this work, we study the effectiveness of language debiasing techniques on a task where the protected attributes are given in the dataset (<Ref>).
We propose a framework with a combination of criteria for characterizing fairness in multiple senses: a group-wise utility or performance measure x, and the corresponding difference in x between protected groups (the GAP). We evaluate four widely-used debiasing techniques on a challenging multiclass classification task for language models, where the input is an embedded brief natural-language bio and the classification target is the profession. We find that debiasing techniques are either ineffective in reducing the GAP, or are effective at the cost of reducing the model performance on protected attributes, including the group for which debiasing was intended to improve outcomes. In a context where the protected group prefers higher model performance, such an intervention achieves `fairness' only through harm.

§ EVALUATION FRAMEWORK

There are many diverse downstream applications for natural language classifiers, and as such, limiting the framework to any specific metric would have limited utility in some cases. Our framework leaves the flexibility to use any desired evaluation metric to match the use case at hand. The measure x can, therefore, be any performance evaluation metric used at the class level in the downstream task.

Fairness Definition. We argue that a fair debiasing technique should guarantee that after debiasing:

* The metric x of the protected group[In this study, we define the `protected group' as the demographic group with lower performance before applying debiasing techniques.] should be no worse than before. This can be thought of as a “do no harm” criterion.

* The GAP of the metric between protected attributes should decrease substantially. This can be thought of as an “improvement in equality” criterion.

If a debiasing intervention satisfies these two criteria, we consider this base satisfaction. For a multi-class classification problem, we can further break down this criteria satisfaction at the level of individual predicted classes for each demographic group. We say an intervention satisfies advanced satisfaction if, in addition to base satisfaction, it does not result in a reduction of performance (measured by x) for the non-protected group(s).
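As a small illustration of how these criteria can be checked mechanically, the sketch below takes per-group values of x before and after an intervention (for one predicted class) and returns the two satisfaction flags. The function name, the dictionary layout, and the use of a strict GAP decrease in place of "decreases substantially" are illustrative assumptions rather than part of the definition.

```python
def check_satisfaction(before, after, protected):
    """before/after: dict mapping group -> metric x (e.g. per-gender TPR for one class)."""
    others = [g for g in before if g != protected]
    gap_before = max(abs(before[protected] - before[g]) for g in others)
    gap_after = max(abs(after[protected] - after[g]) for g in others)

    no_harm = after[protected] >= before[protected]        # "do no harm"
    equality = gap_after < gap_before                      # "improvement in equality"
    base = no_harm and equality
    advanced = base and all(after[g] >= before[g] for g in others)
    return base, advanced

# illustrative numbers for one profession with binary gender groups
print(check_satisfaction({"F": 0.61, "M": 0.78}, {"F": 0.64, "M": 0.75}, protected="F"))
# -> (True, False): the gap shrinks and the protected group improves,
#    but the non-protected group loses performance, so only base satisfaction holds.
```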
§ EXPERIMENT

Data. In our study, we use Bias in Bios <cit.>, which contains short online biographies (bios) written in English. Each bio is associated with one of 28 professions and one of two gender identities, where we consider gender identity as the protected attribute[We acknowledge that gender is more complex and non-binary. Following the data collection process in Bias in Bios, we used a binary division of gender while investigating the bias in the pre-trained language models and consider the gender group as the only group in all the experiments in this paper.]. We want to predict a profession given the tokenized English bio. Details of the study population can be found in <Ref>.

Problem Statement. We want to evaluate the overall and group-wise prediction performance before and after applying each debiasing technique. In both the overall and the group-wise evaluation, we consider the True Positive Rate (TPR) of the classification, broken down by gender, as the relevant utility measure x, and we calculate the difference in TPR between groups. Denote by 𝒫 the set of all professions and by 𝒟 the dataset of tokenized bios. For the protected attribute z among all attributes 𝒵, in a given profession p, we have the binary gender values male (z) and female (z'). The TPR GAP for profession p is denoted GAP^TPR_z, p. For a given profession p:

GAP^TPR_z, p = TPR_z, p - TPR_z', p, where TPR_z, p = ℙ[P̂ = p | Z = z, P = p],

where P̂, Z, and P denote the predicted profession, the gender, and the ground-truth profession, respectively. We use the GAP to measure the difference in model performance, for the selected evaluation metric, between the protected attributes; it quantifies the disparity in the model's classification performance across prediction classes and protected attributes. In the overall performance evaluation, to combat the possible impact of data imbalance, we use the Root Mean Square of the TPR GAP (GAP^RMS) <cit.>. We denote the GAP^RMS in this experiment as[For simplicity, in the rest of the paper, GAP refers to the TPR GAP and GAP^RMS refers to the TPR GAP with Root Mean Square.]:

GAP^RMS_z = √(1/| 𝒫 |∑_p∈𝒫(GAP^TPR_z, p)^2)

GAP^RMS evaluates both the model predictions and their variance across profession groups. Unlike averaging the group-wise TPR over all profession groups, GAP^RMS is not skewed by imbalanced TPR disparities in one attribute in one direction. We use TPR and Accuracy in conjunction in the evaluation. Accuracy measures the proportion of correctly classified cases overall, regardless of the specific class the predicted label belongs to. Under extreme data imbalance, accuracy might not be an ideal metric for performance evaluation. However, unlike the group-wise populations, the overall populations of the protected attributes are close to each other, and accuracy thus turns out to be a valuable metric x for the overall performance evaluation. <Ref> elaborates on the evaluation metrics and how they contribute to this experiment.
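For concreteness, here is a NumPy sketch of how GAP^TPR_z,p and GAP^RMS_z can be computed from per-example predictions; the array names and layout are assumptions made for illustration, not the code used for our experiments.

```python
import numpy as np

def tpr_gaps(y_true, y_pred, gender, z="M", z_prime="F"):
    """Per-profession TPR GAP (TPR_z - TPR_z') and the aggregate GAP^RMS."""
    professions = np.unique(y_true)
    gaps = []
    for p in professions:
        in_p_z = (y_true == p) & (gender == z)
        in_p_zp = (y_true == p) & (gender == z_prime)
        tpr_z = (y_pred[in_p_z] == p).mean() if in_p_z.any() else np.nan
        tpr_zp = (y_pred[in_p_zp] == p).mean() if in_p_zp.any() else np.nan
        gaps.append(tpr_z - tpr_zp)
    gaps = np.asarray(gaps)
    gap_rms = np.sqrt(np.nanmean(gaps ** 2))   # sqrt of the mean squared per-profession GAP
    return dict(zip(professions, gaps)), gap_rms
```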
Experimental Setup. In this study, we evaluate the following four debiasing methods (<Ref>):

* Equality of Opportunity (EO) <cit.>
* Decoupled Classifiers (Decoupled) <cit.>
* Counterfactual Data Augmentation (CDA) <cit.>
* Iterative Nullspace Projection (INLP) <cit.>

We use a Logistic Regression classifier with multiclass classification for prediction[For INLP, we tokenize the English bio with BERT <cit.>]. To validate our results through statistical significance testing, we use the same set of hyperparameters on the classifier and repeat the experiment five times for each debiasing technique. We report the mean performance across these runs and run a two-sample t-test to investigate whether the difference between means is statistically significant.

Results. <Ref> presents the overall model performance before and after applying the four debiasing techniques. We observe that after debiasing, across all groups, the GAP^RMS between the protected attributes decreases substantially. However, this comes at the expense of worsening the model prediction for both groups, including the protected group with worse-off performance before debiasing. The overall performance thus seems to satisfy the GAP criterion of our framework while failing to achieve the first, `do no harm', criterion. <Ref> details the performance after debiasing on each of the 28 professions for all four techniques. Among all the professions, INLP has more than 85% of groups where the GAP decreases after debiasing (<Ref>). Meanwhile, the change in GAP with INLP is also the best among all the debiasing techniques (<Ref>). We detail the change in TPR for the protected attributes in <Ref>. To address the impact of data imbalance, we consider both unweighted and weighted calculations with respect to the group population when evaluating the group-wise satisfaction rate among the different professions (the prediction classes). <Ref> shows base and advanced satisfaction with weighted and unweighted calculations. The weighted performance of the debiasing techniques is more distinctive than the unweighted one. While the total numbers of satisfied professions are similar, one technique might achieve satisfaction on a profession with a large population. Decoupled Classifiers consistently outperform the other debiasing techniques, regardless of whether the total number of affected data points is taken into consideration. Unfortunately, none of the four debiasing techniques exceeds a 50% satisfaction rate under either the base or the advanced satisfaction criterion.

Percentage of professions with worsened GAP metrics after debiasing:
Method: Worsened GAP
EO: 39%
Decoupled: 39%
CDA: 39%
INLP: 14%

<Ref> shows the changes in GAP broken down by profession. Note that while EO had the greatest improvement in GAP (<Ref>), that performance increase is not equally spread across all professions. Likewise, <Ref> shows the percentage of professions that worsened in the GAP metric for each debiasing method. We observe that no method is able to achieve an overall reduction in GAP without increasing the GAP in a sizeable portion of the professions. In particular, EO, Decoupled, and CDA increase the GAP in more than a third of professions. This adds a new dimension of complexity to the analysis of reducing harm, which has not yet been explored.

§ DISCUSSION

Our work introduces a framework to evaluate the effectiveness and fairness of language debiasing models on specific downstream tasks. On the multiclass classification task, we investigate the fairness of debiasing models from several perspectives: we introduce the base and advanced satisfaction criteria, evaluate both overall and group-wise performance (<Ref>, <Ref>), compare weighted and unweighted satisfaction (<Ref>), and measure the proportion of groups that are harmed in order to improve overall performance (<Ref>). From the results, we find that none of the debiasing models achieves over 50% satisfaction. Among the four models, Decoupled seems to have the best satisfaction results. However, as we delve deeper into the metrics, we find that in the group-wise evaluation Decoupled does not bring much improvement in reducing the GAP or increasing the TPR for the protected group (<Ref>, <Ref>), and INLP worsens the TPR of the protected group in about half of the professions (<Ref>). When examining the effects of these debiasing techniques, we see a further complexity not yet addressed in the literature. While all techniques cause a decrease in GAP metrics, this effect is not uniform across professions. As stated in <cit.>, INLP is known for reducing GAP after debiasing. When we move one step deeper, from the overall performance to the group-wise performance, INLP hurts more than one-third of the groups to achieve the reduction of GAP reported in the overall performance (<Ref>, <Ref>). From the previous discussion, one might find that a model with a high satisfaction rate is actually not doing the debiasing job well.
One might question how assists in evaluating fair and effective language debiasing techniques. We advocate that should be considered as the first constraint in the debiasing evaluation. It means a debiasing model should guarantee to maintain a high satisfaction rate with before reaching the goal of improving the prediction performance. Therefore, a good debiasing technique should consider as to pass an `entry test' to demonstrate the model robustness in fairness without harm to the protected group(s). § CONCLUSION AND FUTURE WORKIn our study, we have highlighted difficulties with existing debiasing techniques when used as part of an intervention on a language classification task. Through , we highlight practical challenges that existing debiasing techniques face to remain `fair' after debiasing in the downstream applications, such as a tradeoff in performance or imbalanced changes in performance across the target classes. There is a gap in the current state of the art for a principled debiasing technique that can guarantee higher satisfaction rates of both our `do no harm' and `improvement of equality' criteria. Our evaluation motivates the need for multiple assessments of fairness to ensure bias reduction without harm. § SOCIAL IMPACT STATEMENTThe goal of debiasing is to intentionally tip the scale back in favor of those who face discrimination without inadvertently perpetuating harm or injustice.As pointed back to the hiring story, one may wish to debias the model toward the direction that the gap between men and women is reduced. However, the inadvertent effect might be that our model becomes an even poorer recruitment tool for finding surgeons, sacrificing the model performance for all protected attributes to have a closer prediction performance, even harming the very group we intended to increase equity. With the development of language technology, it is unavoidable to observe bias within the model. We are committed to advancing the research on reducing bias in language models, which requires more robust evaluations and frameworks for fairness.§ PROFESSION-LEVEL MODEL PERFORMANCE We provide more details about group-wise model performance for each of the debiasing technique with . We underline the profession that achieves base satisfaction and the profession underlined with a star(^*) refers to achieving advanced satisfaction. In the parenthesis, we include the standard deviation from repeating the experiment five times. §.§ Equality of OpportunityEquality of Opportunity is a measure of debiasing discrimination. If the TPR of two protected attributes is the same, they are equally qualified for a positive output. They should have the same probability of being correctly classified by the language model. By optimizing accuracy, the classifier can meanwhile optimize a form of Equality of Opportunit (EO). Then we measure the cost of the EO, ensuring equality of opportunity with respect to the accuracy <cit.>. §.§ Decoupled Classifiers Decoupled Classifiers involve training multiple classifiers independently for different protected attributes <cit.>. By training separately, each classifier can concentrate on learning the pattern specific to the group, which can improve the performance and accuracy of each individual classifier. Also, since each classifier is independent, the model has the flexibility to add or modify specific groups without affecting the rest, which is adaptive for applications with dynamic datasets. 
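As a rough illustration of this setup, decoupled training with scikit-learn's LogisticRegression could look like the sketch below. The TF-IDF featurization, function names, and data handling are assumptions for illustration only, not the exact experimental pipeline used in this work:

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def train_decoupled(bios, professions, genders):
    """Fit one profession classifier per protected group."""
    vectorizer = TfidfVectorizer(max_features=20000)
    X = vectorizer.fit_transform(bios)
    y, g = np.asarray(professions), np.asarray(genders)
    classifiers = {}
    for group in np.unique(g):
        mask = g == group
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[mask], y[mask])            # each group gets its own decision boundary
        classifiers[group] = clf
    return vectorizer, classifiers

def predict_decoupled(vectorizer, classifiers, bios, genders):
    """Route each example to the classifier trained for its own group."""
    X = vectorizer.transform(bios)
    g = np.asarray(genders)
    preds = np.empty(len(bios), dtype=object)
    for group, clf in classifiers.items():
        mask = g == group
        if mask.any():
            preds[mask] = clf.predict(X[mask])
    return preds
```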
§.§ Counterfactual Data AugmentationCounterfactual Data Augmentation (CDA) is a technique that debiases by adjusting the training dataset <cit.>. As counterfactual reasoning, CDA generates counterfactual instances with respect to the protected attributes. For example, if we have binary gender as a protected attribute, by doing data augmentation, the sentence 'she is happy' would be augmented with 'he is happy.' We consulted <cit.> for the full pair of words list used in our experiment.§.§ Iterative Nullspace Projection Iterative Nullspace Projection (INLP) works by iteratively projecting the feature vectors of a pre-trained language model onto a subspace that is orthogonal to the subspace spanned by the protected attributes, effectively 'nulling out' the protected attributes from the feature representation<cit.>. By doing so, the model is forced to focus on other relevant features that are not correlated with the protected attributes. The INLP algorithm iteratively projects the feature vectors onto the orthogonal space of the subspace spanned by the protected attributes until the resulting feature representation is orthogonal to the protected attributes.§ PRELIM EXPERIMENT: COUNTERFACTUAL DATA AUGMENTATION WITH ITERATIVE NULL SPACE PROJECTION As an immediate possible next step, we experiment with the possibility of combining two existing methods to achieve a better satisfaction rate. We want to up weight protected group populations in the training data in the pre-processing step and use the augmented data as input to a debiasing layer <cit.>. In <Ref>, <Ref>, we evaluate the scale of changes with respect to the debiasing technique and profession groups. Across all the professions, CDA has the most large positive scale changes. We implement the counterfactual data augmentation technique on the input data of INLP (<Ref>). However, we do not see a huge improvement in combining the two existing debiasing techniques we evaluate in this work. § EVALUATION METRICSTPR. The True Positive Rate quantifies a system's proficiency in accurately identifying true positives within the overall population of the designated positive group(s)GAP. GAP is closely related to the concept of fairness by Equality of Opportunities in <cit.>, which introduces that if two individuals from a different group of the label are equally qualified for a positive outcome, they should have the same probability of being classified correctly.Suppose we have one profession with a highly significant gap between males and females; by averaging group-wise TPR, the result would be affected by this specific significant gap while the result of TPR GAP with RMS will not by such case. § GROUP-WISE PERFORMANCE CHANGESWe evaluate the changes in True Positive Rate (TPR) and the changes in GAP across all professions for males and females on the debiasing techniques (<Ref>, <Ref>, <Ref>). Through the figures, we calculate the change in model performance after the debiasing and before. The blue bars denote the directions in our interest, where we want an increase in the measure of TPR and a decrease in the measure of GAP.§ FAIRNESS WITH AWARENESS AND FAIRNESS WITH UNAWARENESS Debiasing approaches can be categorized as fairness with awareness and fairness with unawareness. Fairness with awareness refers to the approach where sensitive attributes, such as gender, race, or age, are explicitly considered in the model's debiasing process to ensure fair outcomes <cit.>. 
This approach allows for targeted interventions to ensure that different demographic groups are treated equitably and has been used successfully in the fair classification literature <cit.>. Fairness with unawareness refers to the approach where sensitive attributes are not explicitly considered by the model <cit.>. Instead, the model is designed to ensure fair outcomes without direct knowledge of the protected attributes. This approach helps maintain privacy and limits the potential for misuse and has been the main focus of prior work on debiasing language models. Still, it may not be as effective in addressing biases rooted in complex interactions between features or when there is a strong correlation between sensitive attributes and other input features. An example of fairness with unawareness technique is adversarial debiasing <cit.>, which involves training a model to generate unbiased outputs while an adversary attempts to predict sensitive attributes from those outputs.§ STOPWORDSWe remove certain commonly seen words from the biography.We borrowed the list of stopwords from <cit.>:“i", “me",“my", “myself", “we", “our", “ours", “ourselves", “you", “your", “yours", “yourself", “yourselves", “he", “him", “his", “himself", “she", “her", “hers", “herself", “it", “its", “itself", “they", “them", “their", “theirs", “themselves", “what", “which", “who", “whom", “this", “that", “these", “those", “am", “is", “are", “was", “were", “be", “been", “being", “have", “has", “had", “having", “do", “does", “did", “doing", “a", “an", “the", “and", “but", “if", “or", “because", “as", “until", “while", “of", “at", “by", “for", “with", “about", “against", “between", “into", “through", “during", “before", “after", “above", “below", “to", “from", “up", “down", “in", “out", “on", “off", “over", “under", “again", “further", “then", “once", “here", “there", “when", “where", “why", “how", “all", “any", “both", “each", “few", “more", “most", “other", “some", “such", “no", “nor", “not", “only", “own", “same", “so", “than", “too", “very", “s", “t", “can", “will", “just", “don", “should", “now"
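A minimal sketch of this preprocessing step is shown below; the stopword set is abbreviated here and the simple regex tokenization is an assumption rather than the exact implementation:

```python
import re

# Abbreviated subset of the stopword list quoted above.
STOPWORDS = {"i", "me", "my", "we", "our", "is", "are", "was", "the", "a", "an", "and"}

def remove_stopwords(bio: str) -> str:
    tokens = re.findall(r"[a-z']+", bio.lower())
    return " ".join(t for t in tokens if t not in STOPWORDS)

print(remove_stopwords("She is a board-certified surgeon and professor."))
# -> "she board certified surgeon professor"
```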
http://arxiv.org/abs/2310.18458v2
{ "authors": [ "Chloe Qinyu Zhu", "Rickard Stureborg", "Brandon Fain" ], "categories": [ "cs.CL", "cs.CY" ], "primary_category": "cs.CL", "published": "20231027201138", "title": "Do Not Harm Protected Groups in Debiasing Language Representation Models" }
Leadership Inference for Multi-Agent Interactions Hamzah I. Khan^1 and David Fridovich-Keil^1 ^1Aerospace Engineering and Engineering Mechanics, University of Texas at Austin {hamzah, dfk}@utexas.edu ================================================================================================================================================================== The large language models have achieved superior performance on various natural language tasks. One major drawback of such approaches is they are resource-intensive in fine-tuning new datasets. Soft-prompt tuning presents a resource-efficient solution to fine-tune the pre-trained language models (PLMs) while keeping their weight frozen. Existing soft prompt methods mainly focus on designing the input-independent prompts that steer the model to fit the domain of the new dataset. Those methods often ignore the fine-grained information about the task and context of the text. In this paper, we propose a multi-level prompt tuning (MPrompt) method for machine reading comprehension. It utilizes prompts at task-specific, domain-specific, and context-specific levels to enhance the comprehension of input semantics at different granularities. We also propose an independence constraint to steer each domain-specific prompt to focus on information within its domain to avoid redundancy. Moreover, we present a prompt generator that incorporates context-related knowledge in the prompt generation to enhance contextual relevancy. We conducted extensive experiments on 12 benchmarks of various QA formats and achieved an average improvement of 1.94% over the state-of-the-art methods[The code is available at <https://github.com/Chen-GX/MPrompt>.]. § INTRODUCTIONIn recent years, pre-trained language models (PLMs) have been widely applied in question-answering tasks <cit.>, particularly in machine reading comprehension <cit.>, and achieved remarkable success through the pretrain-then-finetune paradigm <cit.>.Despite the excellent performance, due to the explosive growth of parameter sizes in PLMs, the fine-tuning paradigm has become resource intensive.Recently, soft-prompt tuning has been widely explored as a parameter-efficient approach to addressing the aforementioned issues <cit.>. For example, <cit.> proposed Prefix-tuning, which prepends a sequence of optimizable prefixes to each transformer layer while keeping the parameters of PLMs frozen. Prefix-tuning provides a lightweight alternative to fine-tuning and has achieved comparable performance with fewer trainable parameters.<cit.> proposed Prompt-tuning, which only prepends optimizable prompt vectors to the input sequence, which used fewer parameters compared to Prefix-tuning. <cit.> discovered negative tokens in Prompt-tuning that have a detrimental effect on downstream tasks and proposed XPrompt to mask these negative tokens, resulting in improved performance. 
However, the aforementioned methods are input-independent, i.e., assigning a uniform prompt to all inputs of a given task, which under-utilizes the input semantics for the answer generation in machine reading comprehension.There is a growing trend towards designing input-dependent prompts (a.k.a dynamic prompts) for various tasks <cit.>.For example, <cit.> proposed DialogPrompt for a dialog system, which dynamically generates prompt vectors according to the input dialogue context.<cit.> extracts input-related information from BERT <cit.> as contextualized prompts for natural language generation <cit.>, which improves the relevance between the generated text and the input text.However, to the best of our knowledge, there has been little research exploring input-dependent prompt methods for question-answering tasks, especially for machine reading comprehension. It is challenging to apply input-independent methods to machine reading comprehension where the answer is context-sensitive. To address the above issues, we propose MPrompt, a novel Multi-level Prompt tuning approach for machine reading comprehension. Our method utilizes the dataset and the context information to create three levels of prompts: task-specific, domain-specific, and context-specific. The task-specific prompts are input-independent and generate a prompt based on the tasks. The domain-specific prompts utilize the domain knowledge generated from the dataset while context-specific prompts rely on the input context.These multi-level prompts endow PLMs with multiple fine-grained considerations of input semantics. To further enhance the domain-specific prompts and avoid information redundancy, we propose the independence constraint to steer each prompt to focus on knowledge within the domain rather than cross-domain knowledge.Furthermore, we extract context-related knowledge from a small-scale PLM, such as T5-small <cit.>, and integrate it into the prompt generation process to enrich the context sensitivity of prompts. With the help of these three levels of prompts, we achieve an average improvement of 1.94% over the state-of-the-art methods on 12 benchmark datasets.Our main contributions are as follows:* We propose a novel multi-level prompt tuning (MPrompt) for machine reading comprehension which generates prompts at task-specific, domain-specific, and context-specific levels to improve answer generation.* We propose an independence constraint to steer each domain-specific prompt to focus on intra-domain information, avoiding information redundancy, at the same time enriching the domain-related semantics. * We propose a prompt generator based on a small-scale PLM to integrate context-related knowledge into prompt generation, which enriches the context awareness and sensitivity of the generated prompts.§ RELATED WORK §.§ Machine Reading ComprehensionMachine Reading Comprehension (MRC) is a challenging task and hot topic in Question Answering (QA) <cit.>. It aims to comprehend contexts and provides answers to corresponding questions. In recent years, the focus of Machine Reading Comprehension research has shifted from Extractive Question Answering <cit.> to Generative Question Answering <cit.>. For example, <cit.> has explored a retrieval-augmented generation scheme that combined pre-trained retrieval models to enhance the performance of the generative question answering models. <cit.> unified the input format of different QA tasks into the same format and fine-tune the generative models <cit.> for question answering. 
However, with the explosive growth in the parameter size of PLMs, the fine-tuning process becomes exponentially more resource intensive. One way to relax this computational requirement is through prompt learning <cit.>. §.§ Prompt LearningWith the success of GPT-3 <cit.>, prompt learning <cit.> has provided another efficient way to utilize PLMs, which has attracted widespread attention. The format of prompts can be in human-readable natural language (discrete prompts) <cit.>, or embedding vectors (continuous prompts) <cit.>. The continuous prompts provide a more flexible solution that encodes information into a trainable embedding which presents the information to a pre-trained model more efficiently.For example, <cit.> proposed Prompt-tuning, which achieves competitive performance by prepending trainable prompts to input sequences, and <cit.> further improved the Prompt-tuning by pruning the negative prompt tokens.The aforementioned approaches did not sufficiently consider the full utilization of the input semantics and applied the same prompt for all examples in the dataset, which potentially limits the delivery of the language models. Therefore, <cit.> extracts contextualized prompts based on the input text from external PLMs, resulting in better performance in natural language generation. <cit.> proposes to combine task-specific prompts with dynamic prompts, enabling the model to have finer-grained control over the generated text.However, there has been little research exploring input-dependent prompt learning in question answering. In contrast to natural language generation, question-answering tasks emphasize understanding of the given question and context. Therefore, a lack of input-dependent prompts may lead to an under-leverage of the context information present in addition to the questions, particularly in machine reading comprehension tasks.§ METHODOLOGYOur proposed multi-level prompt tuning (MPrompt) framework is illustrated in Figure <ref>. The framework consists of a prompt generator and a generative question answering model, whereas the former relies on a smaller-sized encoder-decoder architecture. The prompt generator generates domain-specific and context-specific prompts and elicits context-related knowledge from small-scale PLMs into the generation process. §.§ Task-specific PromptMany previous works <cit.> have demonstrated that shareable prompt parameters learned from particular tasks can effectively enhance the performance of pre-trained language models on downstream tasks. Therefore, following <cit.>, we construct task-specific prompts that share common prompt information within the task.We prepend a prefix P∈ℝ^t× d for the different types of attention class in the pre-trained language models, where t is the length of the task-specific prompt and d is the dimension of the embedding in generative QA model. For each attention class[In encoder-decoder architecture models, there are typically three types of attention: self-attention in the encoder, masked self-attention in the decoder, and cross-attention in the decoder. The corresponding task-specific prompts are denoted as 𝒯_E, 𝒯_Dm, and 𝒯_Dc.], the prefix for key-value pairs 𝒯={𝒯_1,𝒯_2,...,𝒯_L} are learned through an MLP, 𝒯=MLP(P), where L denotes the number of layers in the generative QA model, 𝒯_l = (𝒯_l,K, 𝒯_l,V) ∀ l ∈{1,...,L}, 𝒯_l,K and 𝒯_l,V∈ℝ^t× d, and 𝒯∈ℝ^t× 2dL. The overall task-specific prompt is 𝒯_task={𝒯_E,𝒯_Dm,𝒯_Dc}. 
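A rough PyTorch sketch of one such prefix (for a single attention class) is given below. The MLP width and Tanh activation follow the implementation details reported in the appendix, while the module name and the exact reshaping into per-layer key/value pairs are assumptions made for illustration:

```python
import torch
import torch.nn as nn

class TaskPrefix(nn.Module):
    """One task-specific prefix for a single attention class."""
    def __init__(self, t: int, d: int, L: int, hidden: int = 512):
        super().__init__()
        self.t, self.d, self.L = t, d, L
        self.P = nn.Parameter(torch.randn(t, d))            # trainable prefix P in R^{t x d}
        self.mlp = nn.Sequential(                           # T = MLP(P) in R^{t x 2dL}
            nn.Linear(d, hidden), nn.Tanh(), nn.Linear(hidden, 2 * d * L)
        )

    def forward(self):
        T = self.mlp(self.P)                                # (t, 2dL)
        T = T.view(self.t, self.L, 2, self.d)               # split into per-layer (key, value) pairs
        keys, values = T[:, :, 0, :], T[:, :, 1, :]         # each (t, L, d), prepended layer by layer
        return keys, values

# One such module per attention class: encoder self-attention,
# decoder (masked) self-attention, and decoder cross-attention.
```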
§.§ Domain-specific PromptIn question answering scenarios, especially in machine reading comprehension, the context plays a crucial role as it contains the answer or the evidence in support of the answer.Meanwhile, the context in QA datasets can often be divided into several domains. For example, in NewsQA <cit.>, the context can be grouped into different domains such as politics, economics, society, and so on.To improve the semantic understanding of context, the context from different domains should utilize different prompts, and each domain-specific prompt should imply a specific knowledge shared within the domain. However, most QA datasets do not have explicit information about the domain of the context. To avoid additional annotation costs, we cluster the context C in an unsupervised manner to obtain different domains D∈{D_1,...,D_n}, where n denotes the number of domains, and each context can only belong to one domain. Each domain has its own shared prompt, therefore the domain-specific prompts 𝒟 = {𝒟_1,...,𝒟_n}, where 𝒟_i∈ℝ^ρ× d_p∀ i ∈{1,...,n}, 𝒟_i denotes the prompt shared within the domain D_i, ρ denotes the length of the domain-specific prompts, d_p denotes the dimension of embedding from the prompt generator. Intuitively, domain-specific prompts should encapsulate information for each respective domain. Therefore, we introduce the independence constraint to steer 𝒟_i to focus on the information within domain D_i. Focusing on the knowledge specific to each domain can enhance contextual understanding, as confirmed by subsequent experiments. Specifically, for any pair of 𝒟_a and 𝒟_b ∈𝒟, we introduce the Hilbert-Schmidt Independence Criterion (HSIC) <cit.> to measure the independence between the prompts of two domains:HSIC(𝒟_a,𝒟_b) = 1/(ρ - 1)^2tr(KHLH),where H is the centering matrix H_ρ = I_ρ - 1/ρ11^𝐓, K_ij=ϕ (𝒟_a_i, 𝒟_a_j), L_ij=ψ (𝒟_b_i, 𝒟_b_j), 𝒟_a_i∈ℝ^1× d_p, ϕ and ψ denote the kernel functions. HSIC=0 indicates independence, when ϕ and ψ are universal kernels. However, HSIC is not invariant to isotropic scaling, which can be addressed by normalizing HSIC which is known as Centered Kernal Alignment (CKA) <cit.>:CKA(𝒟_a,𝒟_b) =HSIC(𝒟_a,𝒟_b)/√(HSIC(𝒟_a,𝒟_a) HSIC(𝒟_b,𝒟_b)),where CKA∈ [0,1], and CKA=0 implies independence.Computing the pair-wise independence requires n(n-1)/2 iterations, which is slow for large n.To reduce computational costs, we randomly sample m pairs of domains as Θ to calculate the ℒ_idp constraints in each training iteration:ℒ_idp = ∑_(i,j)∈ΘCKA(𝒟_i,𝒟_j). §.§ Context-specific PromptThe domain-specific prompts provide shared intra-domain information, which provides fine-grained knowledge compared to task-specific prompts. However, there are still diversities among contexts within the same domain, and utilizing such diverse information is critical for answering questions accurately.Therefore, we construct context-specific prompts to enhance the understanding of each context, which provides fine-grained knowledge compared to domain-specific prompts. Specifically, all contexts have a shared context-specific prompt 𝒞∈ℝ^κ× d_p, where κ denotes the length of the context-specific prompt. Furthermore, we propose the prompt generator to ensure that 𝒞 generates different prompts for different contexts, especially for those contexts unseen in the training data and discuss its other roles in the next section. 
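The independence constraint above admits a compact implementation sketch. Since the kernels ϕ and ψ are left unspecified, the linear kernel and the small constant added for numerical safety below are assumptions:

```python
import torch

def hsic(K: torch.Tensor, L: torch.Tensor) -> torch.Tensor:
    """HSIC(D_a, D_b) = tr(KHLH) / (rho - 1)^2 with centering matrix H."""
    rho = K.shape[0]
    H = torch.eye(rho, device=K.device) - torch.ones(rho, rho, device=K.device) / rho
    return torch.trace(K @ H @ L @ H) / (rho - 1) ** 2

def cka(Da: torch.Tensor, Db: torch.Tensor) -> torch.Tensor:
    """Normalized HSIC between two (rho x d_p) domain prompts; 0 indicates independence."""
    K, L = Da @ Da.T, Db @ Db.T                              # linear kernels over prompt tokens
    return hsic(K, L) / torch.sqrt(hsic(K, K) * hsic(L, L) + 1e-12)

def independence_loss(domain_prompts, sampled_pairs):
    """L_idp: sum of CKA over m randomly sampled domain pairs (i, j)."""
    return sum(cka(domain_prompts[i], domain_prompts[j]) for i, j in sampled_pairs)
```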
§.§ Prompt GeneratorIn general, task-specific prompts are related to the task of specific datasets, while domain-specific and context-specific prompts both are closely related to the context. To better leverage domain-specific and context-specific prompts to enhance PLMs' understanding of the context semantics, we introduce a small-scale PLM to encode contexts and integrate them into the prompt generation process.For a context c_i, which belongs to the domain D_j. The encoder of the prompt generator takes the context c_i as its input, while the concatenation of domain-specific prompt 𝒟_j and context-specific prompt 𝒞 serves as the input 𝒳 for the decoder,𝒳 = [𝒟_j ; 𝒞],where 𝒳∈ℝ^(ρ + κ)× d_p. It should be noted that we have removed the original decoder embedding layer. The output of the prompt generator is mapped to key-value pairs 𝒫={𝒫_1,...,𝒫_L} through the MLP, 𝒫 = MLP(Prompt Generator (c_i, 𝒳)),where 𝒫∈ℝ^(ρ + κ) × 2dL, 𝒫_l = (𝒫_l,K,𝒫_l,V), 𝒫_l,K and 𝒫_l,V∈ℝ^(ρ + κ) × d, and L denotes the number of layers in the generative QA model. Intuitively, the knowledge related to the context c_i is steered from the encoder of PLMs, and then integrated into the prompt generation process in the decoder. In this way, our approach allows for better learning of the semantics between prompt and context than previous work <cit.>, since both domain-specific prompt and context-specific prompt are closely related to the context. §.§ Applying Multi-level PromptsOverall, 𝒫 contains the information of domain-specific and context-specific prompts as well as knowledge from PLMs related to the context, while 𝒯_task contains the shared information within the task. In order to exploit multi-level prompt information to enhance the performance on question answering, we integrate the above different levels of prompts into the encoder of the generative QA model. Specifically, for the self-attention computation of layer l in the encoder of the generative QA model, the original K_l and V_l are augmented as:K_l^' = [𝒯_E_l,K;𝒫_l,K;K_l],V_l^' = [𝒯_E_l,V;𝒫_l,V;V_l]where K_l^' and V_l^'∈ℝ^(t+ρ+κ+M)× d, M denotes the length of the input sequence. For the self-attention and cross-attention computation of layer l in the decoder, K_l and V_l are augmented as:K_l^' = [𝒯_Dm(Dc)_l,K;K_l], V_l^' = [𝒯_Dm(Dc)_l,V;V_l]where K_l^' and V_l^'∈ℝ^(t+M)× d.To train the multi-level prompts, the loss function is a weighted sum of the two loss terms:ℒ=ℒ_NLL + λℒ_idp,where λ is the hyperparameter used to control the independence constraint, ℒ_NLL is the text generation loss, as follows:ℒ_NLL=-∑_t=1^Nlog p(y_t|x,y_<t),where y_t denotes the t-th element of the target sequence, and x represents the input sequence. It is worth noting that, guided by Equation <ref>, we only update the MLP, task-specific, domain-specific, and context-specific prompts, while keeping all other parameters frozen.§ EXPERIMENTS §.§ Datasets and BaselinesDatasets. To cover a wide range of QA tasks in our experiments, we evaluated our approach on 12 benchmark datasets in the fields of Extractive QA (EX): SQuAD2 <cit.>, NewsQA <cit.>, Abstractive QA (AB): NarrativeQA <cit.>, DROP <cit.>, Multiple-choice QA (MC): MCTest <cit.>, ARC(easy, challenge) <cit.>, OpenBookQA <cit.>, QASC <cit.>,RACE <cit.>, and Yes/No QA (YN): BoolQ <cit.>, BoolQ-NP <cit.>. Table <ref> presents the statistics of these datasets. Following <cit.>, the above-mentioned datasets in different formats were converted to a unified format to suit generative QA tasks. 
Due to space limitations, more details are available in Appendix <ref>. Metrics. We evaluate each dataset using the metrics most often used in previous work. For SQuAD2 and DROP, we used the F1 score with token overlap between the answer text and the gold answers. For NewsQA and NarrativeQA, we use ROUGE-L metric <cit.>. For the multiple-choice and Yes/No QA, we use accuracy for evaluation (sometimes referred to as exact match), i.e., a generated answer is considered correct only if it exactly matches the gold answers.Baselines. To comprehensively evaluate the performance of MPrompt, we compared it with a wide range of state-of-the-art soft-prompt methods, such as Fine-tuning <cit.>, Prefix-tuning <cit.>, Prompt-tuning <cit.> and XPrompt <cit.>. §.§ ImplementationWe convert each dataset into a unified text-to-text format to suit generative question answering models following <cit.>. Our MPrompt is based on three scales of pre-trained UnifiedQA <cit.> (which is a T5 model for question-answering tasks): Base, Large, XL with 220M, 770M and 3B parameters, respectively. For the prompt generator, we utilize UnifiedQA-Small with 60M parameters to ensure that there is no excessive demand for GPU memory.In all experiments, we employ the AdamW optimizer <cit.> and set β_1=0.9, β_2=0.999, and the weight decay is 0.01. We train our method with a learning rate of 5e-5, 10% warmup ratio, λ=1e-4, 50 epochs and record the model with the best performance on the validation set. To ensure a fair comparison, we fix the length of task-specific prompts to 10 and adjust the lengths of domain-specific and context-specific prompts to {5, 10, 15, 20, 30, 40, 50, 60}. We use Kmeans <cit.> and SentenceTransformers (all-mpnet-base-v2) <cit.> to cluster the context and fix the number of clusters to 3 to obtain domain information D. The visualization of the clustering results by t-SNE <cit.> is deferred to Appendix <ref>. For all baselines, all hyperparameter settings are based on the reported values in the original paper to achieve optimal results. Our method is implemented with PyTorch <cit.> and Transformers <cit.> library and experiments are conducted on Ubuntu 22.04 systems with NVIDIA RTX A100 or 4090 GPUs. Other implementation details and optimal hyperparameters are deferred to Appendix <ref>.§.§ Performance Comparison Table <ref> displays the main experimental results of different methods on 12 benchmark datasets. We conduct a comprehensive comparison between MPrompt and state-of-the-art methods, including Prompt-tuning <cit.>, Prefix-tuning <cit.>, and XPrompt <cit.> for different parameter sizes of PLMs. The datasets cover a wide range of question-answering scenarios, which is beneficial for the comprehensive evaluation of different methods.We observe that: (1) Our method MPrompt outperforms other soft-prompt methods by a large margin across all tasks and model scales. For example, MPrompt achieves absolute improvements of 2.17%, 1.85%, and 1.82% relative to Prefix-tuning on UnifiedQA-Base, Large, and XL respectively. It is due to the input-independent prompt learning methods applying a uniform prompt to all inputs for a given task, which evidently under-utilizing the input semantics in answer generation. However, MPrompt significantly improves the performance in question-answering tasks by enhancing the contextual comprehension of the PLMs with multiple levels of prompts. (2) Prefix-tuning and XPrompt have comparable performance at the same model size. 
Both algorithms outperform Prompt-tuning on the NewsQA, DROP, OBQA, QASC, and BoolQ-NP datasets. It is because Prefix-tuning provides deeper prompts, while XPrompt removes negative prompts in Prompt-tuning. However, MPrompt achieves higher performance than Prefix-tuning and XPrompt at the same model sizes, demonstrating its effectiveness. (3) Due to the luxury of having high computational resources and a full-weight update scheme in full fine-tuning, there is still a significant performance gap between soft-prompt tuning and full fine-tuning. However, As shown in Table <ref>, MPrompt matches the fine-tuning performance on all tasks and even outperforms the fine-tuning performance of UnifiedQA-Base and XL on most tasks. Specifically for UnifiedQA-Base, MPrompt achieves the best performance on SQuAD2, NewsQA, NarQA, MCTest, ARC (easy), RACE, and BoolQ, resulting in +0.69%, +0.62%, +0.24%, +1.31%, +0.78%, 0.21%, and 0.25% improvements over fine-tuning, respectively. We incorporate context knowledge from other PLMs (such as UnifiedQA-small in this paper) into prompt generation to enrich the semantics. In summary, our method achieved excellent performance compared to state-of-the-art soft prompt methods, closing and even surpassing the performance gap over fine-tuning. This demonstrates that MPrompt effectively enhances contextual comprehension and enriches the semantics of the PLMs which significantly improves the quality of downstream question-answering tasks. §.§ Ablation AnalysisIn this part, we perform an ablation study on the various components of MPrompt, as shown in Figure <ref>. Firstly, we observe a decrease in performance when removing domain-specific or context-specific prompts. The domain-specific or context-specific prompts are constructed based on inputs of different granularity, which enhances the semantic comprehension of the input. Secondly, when removing the independence constraint, there was a significant decrease in performance. The independence constraint steers domain-specific prompts to focus on intra-domain information rather than inter-domain information, which can effectively avoid information redundancy. Furthermore, performance decreases when the prompt generator is removed. The prompt generator ensures that context-specific prompts are generated differently for different contexts, even those that never appear in the training data, which enhances the semantic understanding of the input context. Moreover, the prompt generator elicits context-related knowledge from PLM and incorporates it into the prompt generation process, which helps improve the context awareness of the prompts. §.§ Sensitivity AnalysesIn this part, we conducted comprehensive sensitivity analyses on our proposed method, including the length of prompts, the weight λ of the loss ℒ_idp, different clustering results D, different scales of PLMs in the prompt generator, and the number of sampled domain pairs m.§.§.§ The Length of Prompts In MPrompt, the length of prompts is a key factor that affects model performance. Here, we investigate how the length of domain-specific and context-specific prompts impacts the final performance. We fixed the length of one prompt to 10 and varied the other in the range of {5, 10, 15, 20, 30, 40, 50, 60}. As shown in Figure <ref>, in most cases, MPrompt shows stable performance for the length of domain-specific and context-specific prompts. 
Moreover, since DROP and OBQA require reasoning ability <cit.>, they are more sensitive to the prompt length compared to other datasets.§.§.§ The Weight of Loss ℒ_idpWe investigated the impact of loss weighing λ on the results, as shown in Table <ref>. We found the change of weighting has minor impact on the SQuAD2 dataset and there is an optimal weight of 0.0001 for DROP, OBQA, and BoolQ-NP datasets. ℒ_idp takes values between [0,1], a too large λ means that the model is not focusing on generating answers as its primary goal. An extremely small λ would make the domain-specific prompts lose focus on unique intra-domain information.§.§.§ Clustering ResultsWe investigated the impact of different numbers of clusters on performance, as shown in Table <ref>. Since the gold label of clustering results is not available in the question-answering datasets, it is difficult to determine the optimal number of clusters. Our evaluation shows, the performance of the model is not sensitive to the number of clusters. KMeans always outperforms randomly assigning cluster labels, which demonstrates that introducing contextual cluster information to the model improves context comprehension. §.§.§ Different Scales of Prompt GeneratorIn general, increasing the parameter number of PLMs brings abundant semantic knowledge. Therefore, we investigated the impact of PLMs with different scales on performance, as shown in Figure <ref>. The prompt generator delivers significant performance improvements. Our evaluation shows, larger-scale PLMs tend to have better results, but require more computational resources. To balance the trade-off between cost and performance, the UnifiedQA-small already delivers satisfactory performance gains with a small computational overhead (60M parameters). §.§.§ Number of sampled domain pairs We investigated the impact of sampled domain pairs on the results. The number of clusters is set to 6, which requires 15 iterations per batch. We evaluate the number of sample pair m in {1, 3, 5, 10, 15}. Our evaluation in Table <ref> shows that our algorithm is not sensitive to the number of sampled domain pairs m. Even with a smaller m per batch, it still provides sufficient sampling frequency in training, which greatly reduces the computational costs.§ CONCLUSIONIn this paper, we propose a novel Multi-level Prompt (MPrompt) tuning method for machine reading comprehension. Our method strengthens PLMs' utilization of input semantics through three levels of prompts: task-specific prompts, domain-specific prompts, and context-specific prompts. The task-specific prompts are input-independent and generate prompts specific to a task. The domain-specific prompts utilize the domain knowledge generated from the dataset while context-specific prompts are relying on the input context. Our experiments show the combination of three level prompts improves the answer generation performance on different sizes of PLMs and 12 benchmark datasets. In future work, we will extend our method to more tasks such as summarization, translation, and sentiment analysis. § LIMITATIONSIn our method, the length of prompts is the most critical parameter that affects performance. In our experiments, we observe that MPrompt is sensitive to prompt length for some challenging datasets. To obtain the optimal hyperparameter combination, it is inevitable to perform a grid search on the length of prompts. Our model is designed for encoder-decoder structure, so the decoder-only structure like LLaMA, GPT, or Bloom is not applicable. 
Our model requires access to the parameter of the model which any black box model is not applicable to our algorithm.§ ETHICS STATEMENTOur work is developed with the highest ethical standards in mind. Our work should not be used for any entity that may violate human rights. acl_natbib§ APPENDIX §.§ Datasets: Details We evaluated our method on 12 datasets covering a wide range of QA tasks. Due to some datasets (such as ARC, OpenBookQA and QASC) lacking the context, following <cit.>, we used the datasets that contain retrieved contexts. Due to limited test access for some datasets, such as SQuAD2, NewsQA, DROP, QASC, BoolQ, and BoolQ-NP, we used the validation set as the test set and re-randomized an equal number of samples from the training set as the validation set. For MCTest, we used the sum of mc160 and mc500. For RACE, we used RACE-middle, which consists of English reading comprehension questions designed for Chinese middle school students. The datasets would be available in our code. §.§ Visualization of context clustering results with Kmeans In the paper, we cluster the contexts by Kmeans and fix the number of clusters to 3, since we do not have access to the gold standard clustering results for each dataset. To observe the results of clustering, we conducte visualization using t-SNE <cit.>, as shown in Figure <ref>. Most of the datasets present better clustering results when the number of clusters is 3, which will provide better domain information. §.§ Implementation details In Table <ref>, we report the hyperparameters used for training our models recorded in the experimental section. For model inference (answer generation), we set num_beams to 2, min_length to 1, and early_stopping to True. For MLP, we set the hidden layer dimension to 512 and utilize the Tanh activation function. For domain-specific prompts and context-specific prompts, we initialize each prompt token as an embedded vector extracted from the prompt generator's vocabulary, as <cit.> done.
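For completeness, a sketch of the domain-assignment step described in the implementation details (SentenceTransformers all-mpnet-base-v2 embeddings, KMeans with three clusters, and t-SNE for inspection) is given below; function and variable names are illustrative:

```python
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from sklearn.manifold import TSNE

def assign_domains(contexts, n_clusters=3, seed=0):
    """Embed contexts, cluster them into domains, and return 2-D coordinates for plotting."""
    encoder = SentenceTransformer("all-mpnet-base-v2")
    emb = encoder.encode(contexts, show_progress_bar=False)     # (N, 768) sentence embeddings
    domains = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit_predict(emb)
    coords_2d = TSNE(n_components=2, random_state=seed).fit_transform(emb)  # for visual inspection
    return domains, coords_2d
```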
http://arxiv.org/abs/2310.18167v1
{ "authors": [ "Guoxin Chen", "Yiming Qian", "Bowen Wang", "Liangzhi Li" ], "categories": [ "cs.CL" ], "primary_category": "cs.CL", "published": "20231027142406", "title": "MPrompt: Exploring Multi-level Prompt Tuning for Machine Reading Comprehension" }
Stability and Accuracy analysis of the θ Method and 3-Point Time filterThe research was partially supported by NSF grant DMS-2110379.Nicholas HurlDepartment of Mathematics, Duquesne University, Pittsbugh, PA-15282 ([email protected]). Farjana SiddiquaDepartment of Mathematics, University of Pittsburgh, Pittsburgh, PA-15260([email protected] ). Shuxian XuDepartment of Mathematics, University of Pittsburgh ([email protected]).January 14, 2024 =========================================================================================================================================================================================================================================================================================================== This paper analyzes a θ-method and 3-point time filter. This approach adds one additional line of code to the existing source code of θ-method. We prove the method's 0-stability, accuracy, and A-stability for both constant time step and variable time step. Some numerical tests are performed to validate the theoretical results. time filter, theta method, A-stability65LO4, 65LO5, 65M06, 65M12, 65M22, 65M60, 76M10 § INTRODUCTIONTime filters have been studied to improve stability of the Crank-Nicolson-Leapfrog in <cit.> and later to improve the accuracy of the Backward Euler method in <cit.>. The accuracy of the Backward Euler method can be increased by a method introduced by Guzel and Layton in <cit.>. Herein we examine the effects on A-stability and accuracy of a general 3-point filter applied to the θ-method. The main result of this paper is analyzing the stability and accuracy of adding a post-processing step to the θ method. It is found that we need to add one more line to the source code that works for Backward Euler, Trapezoidal, or Forward Euler method to either increase stability or numerical accuracy or both.Consider the initial value problem (IVP),y'(t)=f(t, y(t)),for t>0 and y(0)=y_0.Denote the nth time step size by k_n. Let t_n+1=t_n+k_n, τ=k_n/k_n-1, ν be an algorithm parameter and y_n be an approximation to y(t_n). Let y_n+1^* and y_n+1 denote unfiltered and filtered values, respectively. Discretize this IVP using θ method followed by a simple filter which is shown below (for constant time step):Step 1: y_n+1^*=y_n+k((1-θ) f(t_n,y_n)+θ f(t_n+1,y_n+1^*)) , Step 2: y_n+1 =y_n+1^*-ν/2{ y_n+1^*-2y_n+y_n-1}.The combination of θ method and a 3-point filter produces a consistent approximation and achieves second-order accuracy for ν = 22θ-1/2θ+1 (Proposition 3.1). The method (<ref>) is 0-stable for -2 ≤ν < 2 and A-stable for θ≥1/2 and 2-4θ≤ (2θ+1)ν≤ 4θ-2 (Proposition 3.2). Since Step 2 with ν = 22θ-1/2θ+1 has greater accuracy than Step 1, we can have an estimator which is the difference between pre-filter and post-filterEST= y_n+1-y_n+1^* .In Section 4, variable time step case is considered and the steps are as follows:Step 1: y_n+1^*=y_n+k_n((1-θ) f(t_n,y_n)+k_n θ f(t_n+1,y_n+1^*)) , Step 2: y_n+1 =y^*_n+1-ν/1+τ( y^*_n+1- (1+τ) y_n+τ y_n-1).For ν = τ(1+τ)(2θ-1)/2θτ+1, (<ref>) is second-order convergent (Proposition <ref>). Recently in <cit.>, a θ scheme with a time filter has been implemented. But it only considered a constant time step and is developed for specific applications in the unsteady Stokes-Darcy model. In our paper, we are considering the general method with both constant and variable timesteps.Numerical tests in Section 5 confirm the theoretical prediction of good accuracy with an appropriate choice of ν. 
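To make concrete how little code the filter adds, a minimal sketch of Step 1 and Step 2 with constant step size is shown below; the use of scipy's fsolve for the implicit stage and the handling of the first step (where y_{n-1} is unavailable) are illustrative choices, not part of the method's definition:

```python
import numpy as np
from scipy.optimize import fsolve

def theta_filter_solve(f, y0, t0, T, k, theta):
    """theta method (Step 1) followed by the 3-point filter (Step 2), constant step k."""
    nu = 2.0 * (2.0 * theta - 1.0) / (2.0 * theta + 1.0)     # second-order choice of nu
    ts = np.arange(t0, T + k / 2, k)
    y = np.zeros((len(ts), np.size(y0)))
    y[0] = y0
    for n in range(len(ts) - 1):
        tn, tn1, yn = ts[n], ts[n + 1], y[n]
        # Step 1: theta method (implicit for theta > 0; fsolve used here for simplicity)
        g = lambda ys: ys - yn - k * ((1 - theta) * f(tn, yn) + theta * f(tn1, ys))
        ystar = fsolve(g, yn)
        # Step 2: the time filter -- one extra line (skipped at n = 0, where y_{n-1} is unavailable)
        y[n + 1] = ystar if n == 0 else ystar - 0.5 * nu * (ystar - 2 * y[n] + y[n - 1])
    return ts, y
```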
It is observed that we get a balance between stability and accuracy. For example, as we pick ν near 2, the stability region gets bigger, but as we pick ν near 2, the LTE goes to infinity. When θ>1/2, we always have A- stability, and in that case, we would choose ν to get second-order accuracy. If θ<1/2, we would not have A-stability. In this case, we choose ν to increase A_0- stability or A_π/4- stability rather than accuracy.§ NOTATIONS AND PRELIMINARIESIn this section, we provide fundamental mathematical definitions and theorems.(Local Truncation Error)Local truncation error (LTE), τ_n,at step n computed from the difference between the left- and the right-hand side of the equation for the increment y_n≈ y_n-1 + h A(t_n-1, y_n-1, h,f), where k_n = t_n-t_n-1:τ_n = y_n - y_n-1 - k_n A(t_n-1, y_n-1, h,f). (Consistent) The difference method is consistent of order p if τ = O(h_n^p+1) for positive integer p.(Order of Accuracy) The difference method has the order of accuracy p if τ = O(h_n^p+1) for positive integer p.(0-stability) A difference method is 0-stability if there are positive constants h_0 and K such that for any mesh function x_h and z_h with h≤ h_0,|x_n-z_n|≤ K{|x_0-z_0|+max_1≤ j≤ N |𝒩_hx_h(t_j)-𝒩_hz_h(t_j)|}, 1≤ n≤ N.(Dahlquist Equivalence Theorem) A difference method is convergent if and only if it is consistent and stable. The following lemma is found in Dahlquist<cit.> and summarizes A-stability for two-step methods with variable timesteps. Let ρ(η) =α_2 η^2+α_1 η+α_0, σ(η) =β_2η+β_1 η+β_0.be the characteristic polynomials of a 2-step method.(see page 4 Lemma 2)The consistent, A-stable two-step methods can be expressed in terms of three non-negative parameters, a,b,c2α_2=c+1,4β_2=1+b+(a+c),2α_1=-2c,4β_1=2(1-b),2α_0=c-1,4β_0=1+b-(a+c).Conversely, c=-α_1, b=1-2β_1, a+c=2(β_2-β_0). A simple check that a,b,c≥ 0 will be completed in sub-sections <ref> and <ref> to show A-stability. § CONSTANT TIME STEPWe consider the initial value problemy'(t)=f(t,y(t)),for t>0 and y(0)=y_0. Denote the n-th time stepsize by k_n. Let t_n+1=t_n+k_n and y_n an approximation to y(t_n). We discretize by theta method followed by a simple 3 point time filter. Step 1: y_n+1^*=y_n+k((1-θ) f(t_n,y_n)+θ f(t_n+1,y_n+1^*))Step 2: y_n+1 =y_n+1^*+{a y_n+1^*+by_n+cy_n-1} where a, b, c ∈ℝ. Notice that for θ=0, we getexplicit Forward Euler method, for θ=1/2, we get Trapezoidal method (implicit) and for θ=1 we get implicit Backward Euler method. We find proper values of a, b,c for which the method is consistent in Section 3.1.§.§ Consistency and AccuracyFirst, we study consistency and accuracy.Let the time step, k, be constant and ν≠ 2. The method (<ref>) is consistent if and only if a=-ν/2,b=ν,c=-ν/2 for some ν. Thus, the step 2 of (<ref>) is y_n+1=y_n+1^*-ν/2{ y_n+1^*-2y_n+y_n-1}Moreover, when θ =2+ν/4-2ν or equivalently ν = 22θ-1/2θ+1, (<ref>) is second order convergent.Rewriting the Step 2 we get y_n+1^*=1/1+a(y_n+1-by_n-cy_n-1). 
Putting this in Step 1, we get the following1/1+ay_n+1-1+a+b/1+ay_n-c/1+ay_n-1=k(1-θ)f(t_n,y_n)+kθ f(t_n+1,1/1+ay_n+1-b/1+ay_n-c/1+ay_n-1)Lety^*=1/1+ay(t_n+1)-b/1+ay(t_n)-c/1+ay(t_n-1) Hence using Taylor Expansion, we getf(t_n+1,y^*)=f(t_n+1,y(t_n+1))+∂ f/∂ y(t_n+1,y(t_n+1))(y^*-y(t_n+1))+1/2∂^2 f/∂ y^2(t_n+1,y(t_n+1))(y^*-y(t_n+1))^2+1/3!∂^3 f/∂ y^3(t_n+1,y(t_n+1))(y^*-y(t_n+1))^3+⋯+1/n!∂^n f/∂ y^n(t_n+1,y(t_n+1))(y^*-y(t_n+1))^n+⋯.Notice, y^*-y(t_n+1) =1/1+ay(t_n+1)-b/1+ay(t_n)-c/1+ay(t_n-1)-y(t_n+1),=-a/1+ay(t_n+1)-b/1+ay(t_n)-c/1+ay(t_n-1).Again by Taylor expansion, we gety(t_n)=y(t_n+1)-ky'(t_n+1)+k^2/2y”(t_n+1)+(-k)^3/3!y^(3)(t_n+1)+𝒪(k^4). y(t_n-1)=y(t_n+1)-2ky'(t_n+1)+(2k)^2/2y”(t_n+1)+(-2k)^3/3!y^(3)(t_n+1)+𝒪(k^4).Hence we get, y^*-y(t_n+1)=-a+b+c/1+ay(t_n+1)+b+2c/1+aky'(t_n+1)-b+4c/2(1+a)k^2y”(t_n+1)+1/3!b+8c/1+ak^3 y^(3)(t_n+1)+⋯+(-1)^n+11/n!b+2^nc/1+ak^ny^(n)(t_n+1)+𝒪(k^n+1).Insert the exact solution in (<ref>), we can get the local truncation error, LTE=1/1+ay(t_n+1)-1+a+b/1+ay(t_n)-c/1+ay(t_n-1)-k(1-θ)y'(t_n)-kθ f(t_n+1,y^*) =(1/1+a-1+a+b/1+a-c/1+a)y(t_n+1)+(1+a+b/1+a+2c/1+a-1)ky'(t_n+1)+𝒪(k^2). To prove the method isconsistent, we need to have (1/1+a-1+a+b/1+a-c/1+a)=0, (1+a+b/1+a+2c/1+a-1)=0 which implies two conditions a+b+c=0, b+2c=0 and a≠ -1. If we take b=ν as free variable, we get a=-ν/2, c=-ν/2. Thus we get the linear multistep method is consistency if and only if Step 2 reads y_n+1=y^*_n+1-ν/2( y^*_n+1-2y_n+y_n-1) for some ν∈ℝ. We need to investigate for higher order of convergence. Setting b=ν, a=-ν/2, c=-ν/2 in (<ref>) gives the equivalent method as 2/2-νy_n+1-2+ν/2-νy_n--ν/2-νy_n-1=k(1-θ)f(t_n,y_n)+kθ f(t_n+1,2/2-νy_n+1-2ν/2-νy_n--ν/2-νy_n-1).and y^*=2/2-νy(t_n+1)-2ν/2-νy(t_n)+ν/2-ν y(t_n-1). Next, y^*-y(t_n+1) =ν/2-νk^2y”(t_n+1)-ν/2-νk^3 y^(3)(t_n+1)+⋯+(-1)^n+11/n!(2-2^n)ν/2-νk^ny^(n)(t_n+1)+𝒪(k^n+1).and f(t_n+1,y^*) =y'(t_n+1)+y”(t_n+1)(y^*-y(t_n+1))+1/2y”'(t_n+1)(y^*-y(t_n+1))^2+1/3!y^(4)(t_n+1)(y^*-y(t_n+1))^3+⋯+1/n!y^(n+1)(t_n+1))(y^*-y(t_n+1))^n+⋯.and f(t_n,y_n)= y'(t_n) = y'(t_n+1) - k y”(t_n+1) + 𝒪(k^2).ThereforeLTE =2/2-νy(t_n+1)-2+ν/2-νy(t_n)--ν/2-νy(t_n-1)-k(1-θ)y'(t_n)-kθ f(t_n+1,y^*),= -2+ν/2-νk^2/2y”(t_n+1)+ν/2-ν2k^2y”(t_n+1)-k(1-θ)(-k)y”(t_n+1)+𝒪(k^3), = 1/2(-2+ν/2-ν+4ν/2-ν+2(1-θ))k^2y”(t_n+1)+𝒪(k^3), = 1/2(3ν-2/2-ν+2(1-θ))k^2y”(t_n+1)+𝒪(k^3). To get convergent of order 2, we need to have (3ν-2/2-ν+2(1-θ))=0 which implies θ =2+ν/4-2ν or equivalently ν = 2(2θ-1)/2θ+1. From the above proof, we see that for any choice of θ∈[0,1] and ν≠ 2, we gain a consistent two-parameter family of methods (<ref>). Under the condition 2θν +ν -4θ +2=0, the method becomes second order, but as it turns out their no choice of θ and ν, where the method becomes third order, which can be seen in the results of Proposition (<ref>). §.§ Constant time step: 0-stable and A-stability The equivalent Linear multistep method (<ref>) corresponds to a linear multistep methodα_2 y_n+1 +α_1 y_n+α_0 y_n-1=k(1-θ) f(t_n,y_n)+kθ f(t_n+1,β_2 y_n+1+β_1 y_n+β_0 y_n-1)where the coefficients areα_2=1/1-ν/2, α_1=-1+ν/2/1-ν/2, α_0=ν/2/1-ν/2, β_2=1/1-ν/2,β_1=-ν/1-ν/2, β_0=ν/2/1-ν/2.and ν≠ 2.The method (<ref>) is 0-stable for -2 ≤ν < 2 and A-stable for θ≥1/2 and 2-4θ≤ (2θ+1)ν≤ 4θ-2. Consider the test function y'=λ y. 
Recall Equation (<ref>), we can get α_2 y_n+1+α_1 y_n+α_0 y_n-1=k(1-θ)λ y_n+kθλ(β_2 y_n+1+β_1 y_n+β_0 y_n-1).We can get the characteristic polynomialsρ(η) =α_2 η^2+α_1 η+α_0,σ(η) =(1-θ)η+θ(β_2 η^2+β_1 η+β_0) =θβ_2η^2+(1-θ+θβ_1)η+θβ_0.The linear multistep method is 0-stable if and only if all roots z_i of the associated polynomialρ(η), satisfy |z_i|≤ 1. It gives two roots z_1=1,z_2=ν/2, hence for 0-stability, we require -1≤ν/2 <1 which implies -2≤ν <2.The linear multistep method is absolutely stable (A-stable) if for Re(z) ≤ 0 the roots ofρ(η) - z σ(η)=0satisfy |η|≤ 1. We apply the Lemma (<ref>). The two step method (<ref>) is A-stable if-α_1 =1+ν/2/1-ν/2≥ 0,1-2(1-θ+θβ_1) = 1-2(1-θ-θν/1-ν/2)≥ 0, 2(θβ_2-θβ_0)+α_1 =2(θ1/1-ν/2-θν/2/1-ν/2)-1+ν/2/1-ν/2≥ 0.The first condition, (<ref>), holds if and only if -2≤ν<2; When -2≤ν<2, we can get the following: The second condition, (<ref>), holds if and only if(1+2θ)ν≥ 2-4θ; The third condition, (<ref>), holds if and only if(2θ+1)ν≤ 4θ-2.Hence we get when θ≥1/2 and 2-4θ≤ (2θ+1)ν≤ 4θ-2, the two-step method (<ref>) is A-stable. §.§ Constant time step: A_0 stabilityThe following calculation is done to find more A-stability properties of the method. If f(y,t)=λ y the method becomesα_2 y_n+1 + α_1 y_n + α_0 y_n-1 = k λ (θ*β_2 *y_n+1+(1-θ+θβ_1)y_n + θβ_0 y_n-1)where β_2 = α_2 = 1/1-ν/2, β_1 = α_1 + 1 = -ν/1-ν/2, β_0=α_0=ν/2/2-ν/2.We gain the characteristic polynomialsρ(ζ) =α_2 ζ^2+α_1 ζ+α_0, σ(ζ) =(1-θ)ζ+θ(β_2 ζ^2+β_1 ζ+β_0)=θβ_2ζ^2+(1-θ+θβ_1)ζ+θβ_0.Considering the roots of ρ(ζ)-kλσ(ζ)=0 and setting ζ = e^iϕ, we calculate ρ(ζ)/σ(ζ).For ζ = e^iϕ the characteristic polynomials (<ref>) and (<ref>) have a ratio of ρ(ζ)/σ(ζ) =(4(2θ-1)+ν^2(2θ+1) - 8νθ cos(ϕ))(1-cos(ϕ)) + (2-ν)^2sin(ϕ)i/[2θ cos(2ϕ) + A cos(ϕ) +νθ]^2+[2θ sin(2ϕ)+A sin(ϕ)]^2.First multiply by numerator and denominator by 1-ν/2 to obtainρ(ζ)/σ(ζ) = ζ^2-(1+ν/2)ζ+ν/2/θζ^2+[(1-θ)(1-ν/2)-θν]ζ + θ *ν/2.To clear fractions multiple by 2,ρ(ζ)/σ(ζ)=2ζ^2-(2+ν)ζ + ν/2θζ^2 +[2-ν-θ(2+ν)]ζ +θν.Set A=2-ν-θ(2+ν). Next, substitute ζ = e^iϕ.ρ(ζ)/σ(ζ) = 2[cos(2ϕ)+i sin(2ϕ)]-(2+ν)[cos(ϕ)+i sin(ϕ)]+ν/2θ[cos(2ϕ)+i sin(2ϕ)]+A[cos(ϕ)+i sin(ϕ]+νθ.Grouping the real terms and the imaginary terms findρ(ζ)/σ(ζ) = [2cos(2ϕ)-(2+ν)cos(ϕ)+ν]+i[2 sin(2ϕ)-(2+ν)sin(ϕ)]/[2θ cos(2ϕ) + A cos(ϕ) +νθ]+i[2θ sin(2ϕ)+A sin(ϕ)].Next, rational the denominator by multiplying by the its conjugate. Let D=[2θ cos(2ϕ) + A cos(ϕ) +νθ]^2+[2θ sin(2ϕ)+A sin(ϕ)]^2. Note that D>0.ρ(ζ)/σ(ζ) =1/D[(2cos(2ϕ)-(2+ν)cos(ϕ)+ν)(2θ cos(2ϕ) + A cos(ϕ) +νθ)+(2 sin(2ϕ)-(2+ν)sin(ϕ))(2θ sin(2ϕ)+A sin(ϕ)) +i((2 sin(2ϕ)-(2+ν)sin(ϕ))(2θ cos(2ϕ) + A cos(ϕ) +νθ)-(2cos(2ϕ)-(2+ν)cos(ϕ)+ν)(2θ sin(2ϕ)+A sin(ϕ)))]. 
With a surprising amount of cancellation, the imaginary part simplifies as followsIm(ρ(ζ)/σ(ζ)) = 1/D[[2 sin(2ϕ)-(2+ν)sin(ϕ)][2θ cos(2ϕ) + A cos(ϕ) +νθ]-[2cos(2ϕ)-(2+ν)cos(ϕ)+ν][2θ sin(2ϕ)+A sin(ϕ)]] .First, make use of the trig identities sin(2ϕ)=2sin(ϕ)cos(ϕ) and cos(2ϕ)=2cos^2(ϕ)-1 to findIm(ρ(ζ)/σ(ζ)) = 1/D[[4 cos(ϕ) sin(ϕ)-(2+ν)sin(ϕ)][4θ cos^2(ϕ) + A cos(ϕ) +θ(ν-2)]-[4cos^2(ϕ)-(2+ν)cos(ϕ)+ν-2][4θ sin(ϕ)cos(ϕ)+A sin(ϕ)]] .Next, factor out a sin(ϕ)Im(ρ(ζ)/σ(ζ)) = sin(ϕ)/D[[4 cos(ϕ)-(2+ν)][4θ cos^2(ϕ) + A cos(ϕ) +θ(ν-2)]-[4cos^2(ϕ)-(2+ν)cos(ϕ)+ν-2][4θ cos(ϕ)+A ]] .Multiply out the termsIm(ρ(ζ)/σ(ζ)) = sin(ϕ)/D[ [16θ cos^3(ϕ) + 4A cos^2(ϕ) +4θ (ν-2) cos(ϕ)-4θ (2+ν) cos^2(ϕ) -(2+ν)A cos(ϕ) -(2+ν)θ (ν-2)] -[16θ cos^3(ϕ) -4θ (2+ν) cos^2(ϕ) +4θ(ν-2) cos(ϕ) +4A cos^2(ϕ) -(2+ν)A cos(ϕ) +(ν-2)A]] .Group like terms to findIm(ρ(ζ)/σ(ζ)) = sin(ϕ)/D[(16θ-16θ)cos^3(ϕ)+(4A-4θ(2+ν)+4θ(2+ν)-4A)cos^2(ϕ) +(4θ(ν-2)-(2+ν)A-4θ(ν-2)+(2+ν)A)cos(ϕ) + (-θ(ν-2)(ν+2)-(ν-2)A) ].Simplifying Im(ρ(ζ)/σ(ζ)) = sin(ϕ)/D[-θ(ν-2)(ν+2)-(ν-2)A].Factor out (ν-2) and recall the definition of A=2-ν-2θ-θν to findIm(ρ(ζ)/σ(ζ)) = (ν-2)sin(ϕ)/D[-θ(ν+2)-(2-ν-2θ-θν)].Simplify the remaining terms yieldsIm(ρ(ζ)/σ(ζ)) = (ν-2)sin(ϕ)/D[-2+ν].ThusIm(ρ(ζ)/σ(ζ)) = (2-ν)^2sin(ϕ)/D.We show the details for the real part.Re( ρ(ζ)/σ(ζ) )= 1/D[[2cos(2ϕ)-(2+ν)cos(ϕ)+ν][2θ cos(2ϕ) + A cos(ϕ) +νθ]+[2 sin(2ϕ)-(2+ν)sin(ϕ)][2θ sin(2ϕ)+A sin(ϕ)]],= 1/D[4θ (cos^2(2ϕ)+sin^2(2ϕ)) - (2+ν)A(cos^2(ϕ)+sin^2(ϕ)) + (2A-2θ (2+ν))(cos(2ϕ)cos(ϕ)+sin(2ϕ)sin(ϕ)) + 4νθ cos(2ϕ) + (ν A-νθ (2+ν)) cos(ϕ) +θν^2].Recall the Pythagorean identity and the double angle identities cos(2ϕ)cos(ϕ)+sin(2ϕ)sin(ϕ)=cos(ϕ) and cos(2ϕ)=2cos^2(ϕ)-1. Applying them yieldsRe( ρ(ζ)/σ(ζ) )= 1/D[(4θ-(2+ν)A+θν^2-4νθ) + (2A-2θ(2+ν) +ν A - νθ (2+ν))cos(ϕ)+8νθ cos^2(ϕ)].Notice, 4θ-(2+ν)A+θν^2-4νθ = 4(2θ-1)+ν^2(2θ+1),and the cosine coefficient simplifies to2A-2θ(2+ν)+ν A - νθ (2+ν) = 4(1-2θ)-ν^2 (2θ+1)-8θν.Therefore Re( ρ(ζ)/σ(ζ) )= 1/D[4(2θ-1)+ν^2(2θ+1)-[4(2θ-1)+ν^2(2θ+1)+8νθ]cos(ϕ) + 8νθ cos^2(ϕ)].Observe that ϕ=0 implies ζ=1 and recall ρ(1)=0. Therefore ϕ=0 will make the expression 0, which indicates that the expression will factor with 1-cos(ϕ) as one of the factors. We move forward with factor by groupingRe( ρ(ζ)/σ(ζ) )= 1/D[ (4(2θ-1)+ν^2(2θ+1))(1-cos(ϕ)) + 8νθ cos(ϕ)(cos(ϕ)-1)].Factoring out the 1-cos(ϕ)Re( ρ(ζ)/σ(ζ) )= 1/D[ (4(2θ-1)+ν^2(2θ+1) - 8νθ cos(ϕ))(1-cos(ϕ))].Note 1-cos(ϕ)≥ 0, ∀ϕ. Hence Re( ρ(ζ)/σ(ζ) )≥ 0, ∀ϕ is equivalent to4(2θ-1)+ν^2(2θ+1)-8νθ cos(ϕ)≥ 0, ∀ϕwhich is equivalent to4(2θ-1)+ν^2(2θ+1)-8 |ν| θ≥ 0.Recall at the beginning of the analysis on the ratio of characteristic polynomials we multiplied by 1-ν/2 and then by 2. Notice that ν = 2 makes the expression 0. We factor out 2-|ν|.[2(2θ-1)-|ν|(2θ+1)](2-|ν|) ≥ 0.Since -2≤ν≤ 2 the requirement becomes2(2θ-1)-|ν|(2θ+1)≥ 0,which is|ν| ≤ 2 ·2θ-1/2θ+1,or equivalentlyθ≥1/2·2+|ν|/2-|ν|.This condition implies θ≥1/2 is necessary for A-stability, which is consistent with Dahlquist <cit.>. When the condition is written as -2 ·2θ-1/2θ+1≤ν≤ 2 ·2θ-1/2θ+1one accomplishes 2nd order if the right-hand side has equality. Also, when the left-hand side is satisfied, we obtain A_0-stability, which is shown below.The θ-method with time filter (<ref>) is A_0-stable if ν > -2 ·2θ-1/2θ+1.The imaginary part of ρ(ζ)/σ(ζ) is (2-ν)^2 sin(ϕ), which implies the boundary of the stability region crosses the real axis when ϕ =0 and ϕ =π. Since ϕ =0 corresponds to the known root ζ =1, we focus on the intersection at ϕ = π. 
The result is obtained by evaluating the real part of ρ(ζ)/σ(ζ)D|_ϕ = π=[2θ cos(2π)+A cos(π) +νθ]^2+[2θ sin(2π) +A sin(π)]^2 = [2θ -A + νθ]^2.Substituting D into the simplified real part we findRe(ρ(ζ)/σ(ζ))|_ϕ=π = 4(2θ-1)+ν^2(2θ+1)-8νθ cos(π)](1-cos(π)/[2θ -A + νθ]^2.By factoring the numerator we find Re(ρ(ζ)/σ(ζ))|_ϕ=π = 2(2+ν)[(2θ+1)ν+2(2θ-1)]/[2θ -A + νθ]^2.Recalling the definition of A we findRe(ρ(ζ)/σ(ζ))|_ϕ=π = 2(2+ν)[(2θ+1)ν+2(2θ-1)]/[(2θ+1)ν+2(2θ-1)]^2.ThusRe(ρ(ζ)/σ(ζ))|_ϕ=π = 2(2+ν)/(2θ+1)ν+2(2θ-1).Since -2 ≤ν≤ 2 the condition Re(ρ(ζ)/σ(ζ))|_ϕ=π≥ 0 becomes (2θ+1)ν+2(2θ-1) > 0 which is equivalent to ν > -2·2θ-1/2θ+1.From this result observe that the forward Euler method cannot be made A_0-stable for any feasible values of ν,Re(ρ(ζ)/σ(ζ))|_ϕ=π, θ=0 = 2(2+ν)/ν-2.One can increase the amount of the negative real axis inside the stability region by choosing ν near 2 in this case, but the local truncation error is multiplied by a factor 1/2-ν. Hence choosing ν near 2 may result in a devastating amount of error. §.§ Stability Regions All stability regions are consistent with the theory.Figure (<ref>) shows the method is A-stable for Backward Euler plus filter for -2/3 < ν < 2/3. Recall ν=2/3 is 2nd order and A-stable. Note 2/3 <ν < 2 is A_0-stable and not A_0-stable for -2<ν<-2/3. Figure (<ref>) shows the method is not A-stable or A_0-stable for Forward Euler plus filter for -2 < ν <2. Notice the stability region grows as ν increases in size. Figure (<ref>) shows the method is A-stable for Trapezoid Rule plus filter if and only if ν = 0. If ν >0, then the method is A_0-stable. If ν<0 the method is not A_0-stable, and the stability region shrinks as ν decreases.§ VARIABLE TIME STEPIn this section, we consider variable time step k_n.We consider θ-method plus a general 3-point time filter described here:Step 1: y_n+1^*=y_n+k_n((1-θ) f(t_n,y_n)+k_n θ f(t_n+1,y_n+1^*))Step 2: y_n+1 =y_n+1^*+{a y_n+1^*+by_n+cy_n-1} §.§ Consistency and AccuracyFirst, we study consistency and accuracy.Consider the variable time step with τ=k_n/k_n-1 and ν≠ 1+τ. The method (<ref>) is consistent if and only ifa=-ν/1+τ,b=ν, c=-τν/1+τ for some ν. Thus, the step 2 of (<ref>) is y_n+1=y^*_n+1-ν/1+τ( y^*_n+1- (1+τ) y_n+τ y_n-1)Moreover, when θ = ν + τ + τ^2/2τ (1-ν+τ)or equivalently ν = τ(1+τ)(2θ-1)/2θτ+1, (<ref>) is second order convergent and the local truncation error (LTE) isLTE =(1+τ+ντ/1-ν+ττ^3/6-ντ/1-ν+τ(1+τ)^3/6+τ-2ντ+τ^2-ν/(1-ν+τ)ττ^3/2)k_n-1^3y”'(t_n+1)-(ν+ντ)ντ(1+τ)^2-ντ^2(1+τ)/2(1-ν+τ)^2k_n-1^3(y”(t_n+1))^2 +𝒪(k_n-1^4).Rewriting the Step 2 we get y_n+1^*=1/1+a(y_n+1-by_n-cy_n-1). Putting this in Step 1, we get the following1/1+ay_n+1-1+a+b/1+ay_n-c/1+ay_n-1=k_n(1-θ)f(t_n,y_n)+k_nθ f(t_n+1,1/1+ay_n+1-b/1+ay_n-c/1+ay_n-1).Lety^*=1/1+ay(t_n+1)-b/1+ay(t_n)-c/1+ay(t_n-1).Hence using Taylor Expansion, we getf(t_n+1,y^*) =f(t_n+1,y(t_n+1))+∂ f/∂ y(t_n+1,y(t_n+1))(y^*-y(t_n+1))+1/2∂^2 f/∂ y^2(t_n+1,y(t_n+1))(y^*-y(t_n+1))^2+1/3!∂^3 f/∂ y^3(t_n+1,y(t_n+1))(y^*-y(t_n+1))^3+⋯+1/n!∂^n f/∂ y^n(t_n+1,y(t_n+1))(y^*-y(t_n+1))^n+⋯.Notice y^*-y(t_n+1) =1/1+ay(t_n+1)-b/1+ay(t_n)-c/1+ay(t_n-1)-y(t_n+1),=-a/1+ay(t_n+1)-b/1+ay(t_n)-c/1+ay(t_n-1).Let τ=k_n/k_n-1. By doing Taylor expansion, we get[y(t_n)=y(t_n+1)-k_n y'(t_n+1)+k_n^2/2y”(t_n+1)+(-k_n)^3/3!y^(3)(t_n+1)+𝒪(k_n-1^4),; =y(t_n+1)-k_n-1τ y'(t_n+1)+k_n-1^2τ^2/2y”(t_n+1)+(-k_n-1τ)^3/3!y^(3)(t_n+1)+𝒪(k_n-1^4). 
]and[y(t_n-1)=y(t_n+1)-(k_n+k_n-1) y'(t_n+1)+(k_n+k_n-1)^2/2y”(t_n+1);+(-k_n-k_n-1)^3/3!y^(3)(t_n+1)+𝒪(k_n-1^4),; = y(t_n+1)-k_n-1(1+τ) y'(t_n+1)+(k_n-1(1+τ))^2/2y”(t_n+1); +-(k_n-1(1+τ))^3/3!y^(3)(t_n+1)+𝒪(k_n-1^4). ]and[ f(t_n,y(t_n))=y'(t_n) =y'(t_n+1)-k_n y”(t_n+1)+k_n^2/2y^(3)(t_n+1)-k_n^3/3!y^(4)(t_n+1)+𝒪(k_n-1^4),;=y'(t_n+1)-k_n-1τ y”(t_n+1)+k_n-1^2τ^2/2y^(3)(t_n+1);-k_n-1^3τ^3/3!y^(4)(t_n+1)+𝒪(k_n-1^4). ]Hence we get y^*-y(t_n+1) =-a+b+c/1+ay(t_n+1)+bτ+c(1+τ)/1+ak_n-1y'(t_n+1)-bτ^2+c(1+τ)^2/2(1+a)k_n-1^2y”(t_n+1)+1/3!bτ^3+c(1+τ)^3/(1+a)k_n-1^3 y^(3)(t_n+1)+⋯+(-1)^N+11/N!bτ^N+c(1+τ)^N/(1+a)k_n-1^Ny^(N)(t_n+1)+𝒪(k_n-1^N+1).Insert the exact solution in (<ref>) to get the local truncation error,LTE =1/1+ay(t_n+1)-1+a+b/1+ay(t_n)-c/1+ay(t_n-1)-k_n(1-θ)y'(t_n)-k_nθ f(t_n+1,y^*), =(1/1+a-1+a+b/1+a-c/1+a)y(t_n+1)+(1+a+b/1+aτ+c/1+a(1+τ)-τ)k_n-1y'(t_n+1)+𝒪(k_n-1^2). To prove the method is consistent, we need to have (1/1+a-1+a+b/1+a-c/1+a)=0, (1+a+b/1+aτ+c/1+a(1+τ)-τ)=0which implies two conditions a+b+c=0, bτ+c+cτ=0 and a≠ -1. If we take b=ν as free variable, we get a=-ν/1+τ, c=-τν/1+τ. Thus we get the consistent equivalent linear multistep method as 1+τ/1+τ-ν y_n+1 - 1+τ+ντ/1+τ -ν y_n +τν/1+τ-ν y_n-1 = k_n (1-θ) f(t_n,y_n)+ k_n θ f(t_n+1, 1+τ/1+τ-ν y_n+1 - ν+ντ/1+τ -ν y_n +τν/1+τ-ν y_n-1). We need to investigate for higher order convergence. We already have b=ν, a=-ν/1+τ, c=-τν/1+τ for consistency and therefore y^*-y(t_n+1) =ντ(1+τ)^2-ντ^2(1+τ)/2(1-ν+τ)k_n-1^2y”(t_n+1)-1/3!ντ^3(1+τ)-ντ(1+τ)^2/2(1-ν+τ)k_n-1^3 y^(3)(t_n+1)+⋯+(-1)^N+11/N!ντ^N(1+τ)-ντ(1+τ)^N/1-ν+τk_n-1^N y^(N)(t_n+1)+𝒪(k_n-1^N+1).and f(t_n+1,y^*) =y'(t_n+1)+y”(t_n+1)(y^*-y(t_n+1))+1/2y”'(t_n+1)(y^*-y(t_n+1))^2+1/3!y^(4)(t_n+1)(y^*-y(t_n+1))^3+⋯+1/n!y^(n+1)(t_n+1))(y^*-y(t_n+1))^n+⋯.The local truncation errors simplifies toLTE =1+τ/1-ν+τy(t_n+1)-1+τ+ντ/1-ν+τy(t_n)+ντ/1-ν+τy(t_n-1)-k_n-1τ(1-θ)y'(t_n)-k_n-1τθ f(t_n+1,y^*),=(-1+τ+ντ/1-ν+ττ^2/2+ντ/1-ν+τ(1+τ)^2/2+(1-θ)τ^2)k_n-1^2y”(t_n+1)+𝒪(k_n-1^3),=1/2((ν+2ντ-τ-τ^2)τ/1-ν+τ+2(1-θ)τ^2)k_n-1^2y”(t_n+1)+𝒪(k_n-1^3). To get second order convergence, we need to have ((ν+2ντ-τ-τ^2)τ/1-ν+τ+2(1-θ)τ^2)=0 which implies θ = ν + τ + τ^2/2τ (1-ν+τ) or equivalently ν = τ(1+τ)(2θ-1)/2θτ+1. In this case, we findLTE =1+τ/1-ν+τy(t_n+1)-1+τ+ντ/1-ν+τy(t_n)+ντ/1-ν+τy(t_n-1)-k_n-1τ(1-θ)y'(t_n)-k_n-1τθ f(t_n+1,y^*),=(1+τ+ντ/1-ν+ττ^3/6-ντ/1-ν+τ(1+τ)^3/6 - (1-θ)τ^3/2)k_n-1^3y”'(t_n+1)-θτντ(1+τ)^2-ντ^2(1+τ)/2(1-ν+τ)k_n-1^3(y”(t_n+1))^2 +𝒪(k_n-1^4),=(1+τ+ντ/1-ν+ττ^3/6-ντ/1-ν+τ(1+τ)^3/6 -τ-2ντ+τ^2-ν/2(1-ν+τ)ττ^3/2)k_n-1^3y”'(t_n+1)-(ν+ντ)ντ(1+τ)^2-ντ^2(1+τ)/2(1-ν+τ)^2k_n-1^3(y”(t_n+1))^2 +𝒪(k_n-1^4).which can be further simplified to LTE=-τ (ν (2+3τ)+τ^2(τ+1))/12(1+τ-ν)k_n-1^3y”'(t_n+1)-ντ (ν+τ+τ^2)(τ+1)/4(1+τ-ν)^2 k_n-1^3(y”(t_n+1))^2 +𝒪(k_n-1^4).From here, it is clear that there is no choice of ν that will make both 𝒪(k_n-1^3) terms equal to 0, the method cannot achieve 3rd order. §.§ Variable time step: StabilityTo maintain the consistency of (<ref>), we consider the followingStep 1: y_n+1^*=y_n+k_n((1-θ) f(t_n,y_n)+θ f(t_n+1,y_n+1^*))Step 2:y_n+1 =y^*_n+1+-ν/1+τy^*_n+1+ν y_n-ντ/1+τy_n-1We can derive a linear multistep method from (<ref>)α_2 y_n+1 +α_1 y_n+α_0 y_n-1=k_n(1-θ) f(t_n,y_n)+k_nθ f(t_n+1,β_2 y_n+1+β_1 y_n+β_0 y_n-1)This corresponds to the linear multistep method (<ref>) with coefficients[ α_2=1+τ/1+τ-ν, α_1=-1+τ+τν/1+τ-ν,α_0=τν/1+τ-ν,; β_2=1+τ/1+τ-ν, β_1=-ν+τν/1+τ-ν,β_0=τν/1+τ-ν. 
]The method (<ref>) is 0-stable for -1+τ/τ≤ν < 1+τ/τ and A-stable for θ≥1/2, τ>0 and - (2θ-1)(1+τ)/1+2θτ≤ν≤min{1+τ,(2θ-1)(1+τ)/(1+2θ)τ} . Consider the test function y'=λ y. Recalling Equation (<ref>), we get α_2 y_n+1+α_1 y_n+α_0 y_n-1=k_n(1-θ)λ y_n+k_nθλ(β_2 y_n+1+β_1 y_n+β_0 y_n-1).We obtain the characteristic polynomialsρ(η) =α_2 η^2+α_1 η+α_0, σ(η) =(1-θ)η+θ(β_2 η^2+β_1 η+β_0)=θβ_2η^2+(1-θ+θβ_1)η+θβ_0The linear multistep method is 0-stable if and only if all roots z_i of the associated polynomialρ(η)=0,satisfy |z_i|≤ 1.It gives two roots z_1=1,z_2=τν/1+τ. Hence, for 0-stability, we require -1≤τν/1+τ <1, which implies -1+τ/τ≤ν < 1+τ/τ.To prove that the linear multistep method is absolutely stable, we need |y_n+1|≤ |y_n| for those values z=kλ. This corresponds to the values for which all roots of (<ref>) satisfy |η|≤ 1.ρ(η)=kλσ(η). The A-stability of a general two-step method is characterized in terms of its coefficients. We apply Lemma 1 from <cit.>. The two-step method (<ref>) is A-stable if-α_1 =1+τ+τν/1+τ-ν≥ 0, 1-2(1-θ+θβ_1) = 1-2(1-θ-θν+τν/1+τ-ν)≥ 0, 2(θβ_2-θβ_0)+α_1 =2(θ1+τ/1+τ-ν-θτν/1+τ-ν)-1+τ+τν/1+τ-ν≥ 0.The first condition holds if and only if -1+τ/τ≤ν<1+τ.The second condition holds if θ1+τ+τν/1+τ-ν≥1/2which holds if and only if ν≥1-2θ+τ-2θτ/1+2θτ=(1-2θ)(1+τ)/1+2θτ.As ν<1+τ, the third condition holds ifν≤-1+2θ-τ+2θτ/τ+2θτ=(2θ-1)(1+τ)/(1+2θ)τ. From (<ref>) and (<ref>), we need the following result (1-2θ)(1+τ)/1+2θτ≤(2θ-1)(1+τ)/(1+2θ)τ.The inequality is not true for 0≤θ < 1/2 and τ>0 since the left-hand side is positive and the right-hand side is negative. The inequality achieves equality when θ =1/2, and in this case we must have ν=0. Requiring θ > 1/2 and τ>0, we see 2θ-1 > 0 and τ +1 >0, which allows division in (<ref>) to find-1/1+2θτ≤1/(1+2θ)τ,which is clearly true since 1+2θτ >0 and (1+2θ)τ >0. Thus we impose the requirement θ≥1/2.Combining the results of the three conditions, max{-1+τ/τ, 1-2θ+τ-2θτ/1+2θτ}≤ν≤min{1+τ,-1+2θ-τ+2θτ/τ+2θτ} A simple calculation reveals that- 1+τ/τ < (1+τ)(1-2θ)/1+2θτ for θ, τ > 0. Thus the condition (<ref>) becomes1-2θ+τ-2θτ/1+2θτ≤ν≤min{1+τ,-1+2θ-τ+2θτ/τ+2θτ}. This restriction on ν is illustrated in figure (<ref>) for several values of θ. The region becomes larger as θ increases.Another quick calculation confirms that the A-stability restriction, (<ref>), on ν implies the 0-stability restriction -τ+1/τ< ν < τ+1/τ. For a decreasing or constant timestep (i.e., 0<τ≤1) the method is second order and A-stable with the choice θ≥1/2 and ν = τ (1+τ)(2θ-1)/2θτ+1. But for an increasing timestep (i.e., τ>1) the method cannot be both A-stable and 2nd order since τ (1+τ)(2θ-1)/2θτ+1 > (2θ-1)(1+τ)/(2θ+1)τ in this case. § NUMERICAL TESTS §.§ The Lorenz systemConsider the Lorenz system <cit.>dX/dt = 10(Y-X),dY/dt = -XZ + 28X-Y, dZ/dt = XY-8/3Z.The initial conditions are (X_0,Y_0,Z_0)= (0,1,0). The system is solved over the time interval [0,5]. The reference solution is obtained by self-adaptive RK45.
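For concreteness, the constant-step version of the method (Step 1 followed by the 3-point time filter with τ=1, so that y_n+1 = y^*_n+1 - (ν/2)(y^*_n+1 - 2y_n + y_n-1)) can be sketched in Python as below. This is only an illustrative sketch: the step size k=0.005, the choice θ=1 with the corresponding second-order value ν=2/3, and the use of a generic nonlinear solver for the implicit stage are assumptions made here and are not necessarily the settings used to produce the figures.

```python
import numpy as np
from scipy.optimize import fsolve
from scipy.integrate import solve_ivp


def lorenz(t, u):
    X, Y, Z = u
    return np.array([10.0 * (Y - X), -X * Z + 28.0 * X - Y, X * Y - (8.0 / 3.0) * Z])


def theta_step(f, t0, t1, y0, k, theta):
    # Step 1: implicit theta step, solved here with a generic nonlinear solver.
    g = lambda y: y - y0 - k * ((1.0 - theta) * f(t0, y0) + theta * f(t1, y))
    return fsolve(g, y0)


def theta_filter(f, y0, t_end, k, theta, nu):
    t = np.linspace(0.0, t_end, round(t_end / k) + 1)
    y = np.zeros((len(t), len(y0)))
    y[0] = y0
    y[1] = theta_step(f, t[0], t[1], y[0], k, theta)   # unfiltered step for the second starting value
    for n in range(1, len(t) - 1):
        ystar = theta_step(f, t[n], t[n + 1], y[n], k, theta)
        # Step 2: 3-point time filter, constant-step form (tau = 1).
        y[n + 1] = ystar - 0.5 * nu * (ystar - 2.0 * y[n] + y[n - 1])
    return t, y


theta = 1.0                                      # backward Euler as the underlying method
nu = (4.0 * theta - 2.0) / (2.0 * theta + 1.0)   # = 2/3, the second-order choice
t, y = theta_filter(lorenz, np.array([0.0, 1.0, 0.0]), 5.0, 0.005, theta, nu)

ref = solve_ivp(lorenz, (0.0, 5.0), [0.0, 1.0, 0.0], method="RK45",
                rtol=1e-10, atol=1e-12, t_eval=t)
print("max deviation from the RK45 reference:", np.max(np.abs(y.T - ref.y)))
```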
The results are shown in Figures <ref> and <ref>.§.§ Periodic and quasi-periodic oscillationsConsider the pendulum test problem <cit.> given by dθ/dt = v/L,dv/dt=-g sinθ,where θ, v, L and g denote the angular displacement, the velocity along the arc, the length of the pendulum and the acceleration due to gravity, respectively. Set θ(0)=0.9π, v(0) = 0, g=9.8, time step k =0.1 and L=49. The results are shown in Figures <ref> and <ref>.§.§ Test problem with exact solutionConsider the test problemy' = λ(y-sin t) +cos t,y(0)=1,0≤ t<1,whose exact solution is y(t) = e^λ t +sin t. Consider using time step k =0.01 for λ =10,1,0,-1,-10,-500.In Proposition <ref>, we showed that second-order accuracy is attained when ν =(4θ-2)/(2θ+1). * θ = 0, (4θ-2)/(2θ+1)= -2,* θ = 1/2, (4θ-2)/(2θ+1)= 0,* θ = 1, (4θ-2)/(2θ+1)= 2/3.§ CONVERGENCE RATEConsider the test problemy' = λ(y-sin t) +cos t,y(0)=1,0≤ t<1,whose exact solution is y(t) = e^λ t +sin t. Consider using time steps k =0.00125, 0.0025, 0.005, 0.01, 0.02 for λ =10,1,0,-1,-10 to calculate the convergence rate. The computed rates were the same for all cases of λ; hence we report only the case λ=-10 in the following tables.In Proposition <ref>, we get second-order accuracy when ν =(4θ-2)/(2θ+1). * θ = 0, (4θ-2)/(2θ+1)= -2,* θ = 1/2, (4θ-2)/(2θ+1)= 0,* θ = 1, (4θ-2)/(2θ+1)= 2/3.§ CONCLUSIONSThough the result for Backward Euler with time filter is known, we explored all possible choices of θ in this paper. We have shown that different choices of ν give different stability regions. We have shown that when θ>1/2 the method can always be made A-stable by a suitable choice of ν, and when θ<1/2 we have A_0-stability or A_π/4-stability.§ ACKNOWLEDGEMENTWe want to thank our advisor, Professor William J. Layton, for his insightful ideas and guidance throughout the research. We thank the NSF for funding the project.
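As a supplementary sketch, the convergence rates in the tables above can be reproduced along the following lines. Two assumptions are made purely for illustration: the implicit stage is solved in closed form using the linearity of the test problem, and the second starting value is taken from the exact solution so that the starting procedure does not limit the observed order.

```python
import numpy as np

lam, theta = -10.0, 1.0
nu = (4.0 * theta - 2.0) / (2.0 * theta + 1.0)       # = 2/3, the second-order choice
f = lambda t, y: lam * (y - np.sin(t)) + np.cos(t)
exact = lambda t: np.exp(lam * t) + np.sin(t)


def max_error(k):
    t = np.linspace(0.0, 1.0, round(1.0 / k) + 1)
    y = np.zeros_like(t)
    y[0], y[1] = exact(t[0]), exact(t[1])            # second starting value from the exact solution
    for n in range(1, len(t) - 1):
        # Step 1: the implicit theta step is linear in y*, so it can be solved directly.
        ystar = (y[n] + k * (1.0 - theta) * f(t[n], y[n])
                 + k * theta * (np.cos(t[n + 1]) - lam * np.sin(t[n + 1]))) / (1.0 - k * theta * lam)
        # Step 2: 3-point time filter with constant steps (tau = 1).
        y[n + 1] = ystar - 0.5 * nu * (ystar - 2.0 * y[n] + y[n - 1])
    return np.max(np.abs(y - exact(t)))


ks = [0.02, 0.01, 0.005, 0.0025, 0.00125]
errs = [max_error(k) for k in ks]
for k, e_coarse, e_fine in zip(ks[1:], errs[:-1], errs[1:]):
    print(f"k = {k:8.5f}   error = {e_fine:.3e}   observed rate = {np.log2(e_coarse / e_fine):.2f}")
```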
http://arxiv.org/abs/2310.17771v1
{ "authors": [ "Nicholas Hurl", "Farjana Siddiqua", "Shuxian Xu" ], "categories": [ "math.NA", "cs.NA" ], "primary_category": "math.NA", "published": "20231026202939", "title": "Stability and Accuracy analysis of the $θ$ Method and 3-Point Time filter" }
[email protected] Institute of Applied Physics of the Russian Academy of Sciences, 603950 Nizhny Novgorod, Russia Earlier, it was a standard assumption that the entire core of neutron stars is superconducting.However, the matter contents in the inner core has been unknown even qualitatively, because the density of matter in that region is expected to be higher than the nuclear saturation density 0.16 fm^-3. As a consequence, no reliable model exists that would describe the neutron star matter in the inner core of neutron stars.Thus, a possibility of presence of normal, nonsuperconducting, plasma in the inner core cannot be excluded as of today.This point is supported by the numerical calculations performed in <cit.>.The calculations are based on the equation of state and the proton Cooper pairing gap energy derived from the chiral effective field theory.The numerical results show that the superconducting gap goes to zero beyond the depth about 1 km below the crust-core boundary.Given that the stellar radius is of the order of 12 km, therefore the superconducting proton matter is expected to exist only in a thin layer at the tip of the outer core.Recently it has been realized that the symmetry of superconductor is anisotropic in the lasagna region of the pasta phases located at the bottom of the crust.However the question of whether this symmetry is continuous or discreet was unsolved.The numerical calculations performed in <cit.> have shown that the tunneling rate between the adjacent slabs in the entire range of the corresponding densities is negligibly small.Thus, a discreet model is necessary for the description of the lasagna region.Uncertainties and future directions of the research are discussed. Location and symmetry of superconductivity in neutron stars Dmitry Kobyakov 2023-10-25 ===========================================================The location of superconducting protons and symmetry properties of the order parameter are crucial for the spectrum of hydromagnetic waves in neutron stars. The hydromagnetic waves transfer energy from the inner part of the star to its outer layers in the observable processes such as the magnetar giant flares and the following quasiperiodic oscillations, the glitches of the spin frequency. Thus, understanding of the spectrum of the hydromagnetic waves is a crucial part of astrophysical models designed for generating novel knowledge about the structure of the neutron star matter.The existence of superconductivity in neutron star has been for the first time considered in 1969 in <cit.> and it has been shown that the strong proton-proton interaction must lead to the S-wave Cooper pairing and to superconductivity in the core (where the nuclear matter is expected to be a uniform quantum liquid). 
Since then, the standard physical picture has been that the neutron star core is completely filled by the proton superconductor with the isotropic order parameter.However, recent calculations <cit.> based on the energy-dependent proton Cooper pairing gap energy derived from the chiral effective field theory of baryons <cit.>, have shown that the superconductor fills only a thin layer at the tip of the core with thickness of the order of 1 kilometer.To the best of my knowledge this result has not been discussed before and therefore is novel.The conclusion that the superconductor fills only a thin layer of the core does not depend neither on the equation of state (EoS) in the crust, nor on the polytropic exponentials in the inner core (where the matter density is higher than the nuclear saturation density). This result is shown in figure 11 of <cit.> and has been obtained from the solution of the equations of the force balance between the gravitational force and the pressure gradient supplemented by the EoS consisting of three parts: The first is the EoS of the solid crust, the second is the EoS of the outer core and the third is the extrapolation of the EoS into the inner core. The EoS of the crust has been taken from the literature with the data obtained from the Barcelona-Catania-Paris-Madrid EoS (see the references in <cit.>). However, for the self-consistency it is desirable to calculate the pressure from the same EoS as used in the outer core. In the outer core, I used the EoS derived from the chiral effective field theory. The energy per baryon is given by the following expression:ε(n,x)=ε_0[3/5[x^5/3 + (1-x)^5/3](2n/n_0)^2/3-[α_1(x-x^2) + α_2]n/n_0+[η_1(x-x^2) + η_2](n/n_0)^γ].where ε_0=36.84 MeV, α_1=2α-4α_L, α_2=α_L, η_1=2η-4η_L, η_2=η_L. The function ε(n,x) is shown in Fig. 1 by light-gray.For comparison, in Fig. 1 the energy per baryon from the work by Baym, Bethe and Pethick (1971) (see the references in <cit.>) by middle-gray; for convenience, the dark-gray shows the zero surface. The mass density includes the rest mass of protons and neutrons, their interaction energy and the relativistic mass of the electrons:ρ=m_pxn+m_n(1-x)n+nε/c^2+(9π)^2/3/4ħ/c(xn)^4/3,where x=n_p/n. From the empirical properties of nuclear matter we have the (minus) binding energy per baryon ω_0=16MeV and the pressure of atomic nucleus P_ nuc(n=n_0,x=1/2)=0. From these properties I obtainα=4/5 + 2γ/γ-1(1/5 + ω_0/ε_0), η=2/γ-1(1/5 + ω_0/ε_0).The incompressibility, symmetry energy and its slope are given byK=9ε_0[-2/15 + γ(1/5 + ω_0/ε_0)], S_0=ε_0( 3/52^2/3 + ω_0/ε_0 - α_L + η_L ), L=3ε_0( 2/5 - α_L + γη_L ).In the numerical calculations in <cit.>, I used two sets of the parameters: (γ,α_L,η_L)=(4/3,1.385,0.875) and (γ,α_L,η_L)=(1.45,1.59,1.11), which lead, correspondingly, to(K,S_0,L)=(236MeV,32.3MeV,20.1MeV), (K,S_0,L)=(261MeV,33.4MeV,46.4MeV).In the inner core, I used the generalized polytropic EoS, where the pressure was defined in three consequent regions of the matter density as following:P[ρ(r)]∝ρ^Γ,where r is the radial coordinate and the values of Γ are given in the caption to figure 2 in <cit.>.From the other hand, at densities higher than the nuclear saturation density the contents of the neutron star matter is even qualitatively unknown and a reliable description of the structure of the inner core does not exist. 
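The outer-core parameter sets quoted above are easy to check numerically. The following short script (a sketch that simply evaluates the expressions for K, S_0 and L given above with ε_0=36.84 MeV and ω_0=16 MeV) reproduces the quoted values (236 MeV, 32.3 MeV, 20.1 MeV) and (261 MeV, 33.4 MeV, 46.4 MeV).

```python
eps0, omega0 = 36.84, 16.0                      # MeV, as used in the text
r = omega0 / eps0


def nuclear_matter_properties(gamma, alphaL, etaL):
    alpha = 4.0 / 5.0 + 2.0 * gamma / (gamma - 1.0) * (0.2 + r)   # from the binding-energy condition
    eta = 2.0 / (gamma - 1.0) * (0.2 + r)                          # from the zero-pressure condition
    K = 9.0 * eps0 * (-2.0 / 15.0 + gamma * (0.2 + r))             # incompressibility
    S0 = eps0 * (0.6 * 2.0 ** (2.0 / 3.0) + r - alphaL + etaL)     # symmetry energy
    L = 3.0 * eps0 * (0.4 - alphaL + gamma * etaL)                 # slope of the symmetry energy
    return alpha, eta, K, S0, L


for gamma, alphaL, etaL in [(4.0 / 3.0, 1.385, 0.875), (1.45, 1.59, 1.11)]:
    alpha, eta, K, S0, L = nuclear_matter_properties(gamma, alphaL, etaL)
    print(f"gamma = {gamma:.3f}:  K = {K:5.0f} MeV,  S0 = {S0:4.1f} MeV,  L = {L:4.1f} MeV")
```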
Therefore, given that the structure of the inner core is not reliably known, at present it is impossible to exclude the existence of normal plasma (without superconductivity) in the inner core of neutron stars.In <cit.>, I have considered the structure of the superconductor at the crust-core boundary. In 2018, it was first noticed <cit.> that the ground state of neutron star matter at the crust-core boundary corresponds to a structure in which the proton liquid is distributed over thin slabs, thus making the symmetry of the order parameter anisotropic. Thus, a question arises whether the anisotropic symmetry is continuous or discrete. In the case of a continuous symmetry, the superconducting density may be described by a tensor. On the contrary, if the superconductor is distributed over slabs which are not connected by Josephson tunneling, the system should be described by a discrete model. In order to specify the symmetry type, in <cit.> I have calculated probabilities of quantum mechanical tunneling between the adjacent layers (in figures 6-9) and found that, in most cases, the tunneling that defines the Josephson coupling is negligibly small. It then follows that the ground state of the superconductor should be described by the discrete model.The results obtained in <cit.> specify the basic assumptions of future models for the calculation of the magnetic torque exerted on the structure by the stellar magnetic induction. Such calculations are necessary for investigating the role of thermal fluctuations in the neutron star matter, in order to find out whether the pasta region is ordered or disordered at typical realistic conditions.In future work, it is necessary to investigate the following issues. (i) Specify the radial position and independently confirm the existence of the lasagna region of the pasta structure within other EoS. (ii) Investigate whether the ground state in the realistic neutron star matter is ordered or disordered. (iii) Specify the thickness and separation of the slabs. (iv) Investigate the uncertainty related to the mutual orientation of the slabs and the magnetic field. (v) Study the hydromagnetic waves at the boundary between the superconducting and the normal plasmas.Acknowledgements. This work was supported by the Center of Excellence “Center of Photonics” funded by The Ministry of Science and Higher Education of the Russian Federation, contract No. 075-15-2022-316.Translated by the author.Kobyakov2023b D. Kobyakov, 2023, arXiv:2308.09116 [nucl-th]. Unified Description of Superconductivity in Neutron Stars. https://doi.org/10.48550/arXiv.2308.09116 BPP1969a G. Baym, C. Pethick and D. Pines, Nature (London), 224, 673 (1969). Superfluidity in Neutron Stars. https://doi.org/10.1038/224673a0 LimHolt2021 Y. Lim and J. W. Holt, Phys. Rev. C 103, 025807 (2021). Proton Pairing in Neutron Stars from Chiral Effective Field Theory. https://doi.org/10.1103/physrevc.103.025807 Kobyakov2018 D. N. Kobyakov, Phys. Rev. C 98, 045803 (2018). Application of Superconducting-Superfluid Magnetohydrodynamics to Nuclear “Pasta” in Neutron Stars. https://doi.org/10.1103/physrevc.98.045803 KobyakovPethick2018 D. N. Kobyakov and C. J. Pethick, Sov. Phys. JETP 127, 851 (2018). Superfluid Liquid Crystals: Pasta Phases in Neutron Star Crusts. https://doi.org/10.1134/s1063776118110067
http://arxiv.org/abs/2310.18013v1
{ "authors": [ "Dmitry Kobyakov" ], "categories": [ "nucl-th", "astro-ph.HE" ], "primary_category": "nucl-th", "published": "20231027094010", "title": "Location and symmetry of superconductivity in neutron stars" }
Biometric Technologies and the Law: Developing a Taxonomy for Guiding Policymakers Luis Felipe M. Ramos ================================================================================================ Despite the increasing adoption of biometric technologies, their regulation has not kept pace, particularly with regard to safeguarding individuals' privacy and personal data. Policymakers may struggle to comprehend the technology behind biometric systems and their potential impact on fundamental rights, resulting in insufficient or inadequate legal regulation. This study seeks to bridge this gap by proposing a taxonomy of biometric technologies that can aid in their effective deployment and supervision. Through a literature review, the technical characteristics of biometric systems were identified and categorised. The resulting taxonomy can enhance the understanding of biometric technologies and facilitate the development of regulation that prioritises privacy and personal data protection.§ INTRODUCTIONOver the past few decades, there has been significant growth in the development and adoption of biometric technologies, which are utilised in various fields such as access control, border management, device security, and digital identification.However, the growing concern over safeguarding individual privacy has become more prominent, particularly with advanced technologies like facial and emotion recognition. Since biometric data is inherently linked to an individual, any breach or exposure can lead to a violation of privacy and potentially cause long-lasting problems, as biometric data is irreplaceable.Due to increased protection for personal data and the demand for comparable guarantees for international data transfers, many countries have started to legislate on the issue, providing their citizens with a minimum level of protection. This trend began after the adoption of the European Union's General Data Protection Regulation in 2018 (EU GDPR).Most of these countries view privacy as a fundamental human right and have established a robust and all-encompassing legal framework that regulates this right based on their constitutional traditions. The Council of Europe's Convention No. 108 has played a crucial role in shaping this approach. Despite this, such extensive legislation on data privacy often falls short of achieving its intended goals Reidenberg_2015.Nevertheless, most data protection legislation enacted so far makes little or no reference to more specific topics, such as biometric recognition. For example, the EU GDPR employs the word ‘biometric’ only six times throughout the entire regulation, all of which refer to ‘biometric data’. It is important to note that the EU GDPR, in its Article 9, does not provide a dedicated set of rules for biometric technologies but instead applies the same rules as those enacted for personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and genetic data, data concerning health or data concerning a natural person’s sex life or sexual orientation, all under the umbrella term of ‘special categories of personal data’.The Brazilian general data protection law (Lei n. 13.709, de 14 de Agosto de 2018 - Lei Geral de Proteção de Dados) makes only one reference to ‘biometric data’ in Article 5º, II, when defining the concept of ‘sensitive personal data’. The California Consumer Privacy Act of 2018 (CCPA) (Cal. Civ.
§1798.100), in its § 1798.140, presents five references to ‘biometric information’, all of them when defining that concept and the concept of ‘personal information’. Just like the EU GDPR, the CCPA includes ‘biometric information’ under the umbrella term of ‘sensitive personal information’, applying to it the same rules as the ones enacted for consumer’s social security, driver’s license, or passport number, their precise geolocation, their racial or ethnic origin, religious or philosophical beliefs, or union membership, genetic data, or their health, sex life or sexual orientation.A significant aspect of the structure and language adopted in these legislations is that they induce biometric systems to be regulated by the same set of rules as the processing of personal data originating from other applications, ignoring the differences and complexities existing among these applications and the several types of biometric systems.Another relevant aspect of this broad, almost generic, regulation is the consequential enactment of legislation banning or imposing moratoria or strict requirements on the use of some of these systems, as we have seen in some cities in the USA, such as San Francisco and Boston Conger_Fausset_Kovaleski_2019, and the EU Heikkilä_2021. This approach can also be considered problematic since it focuses on some issues of the technology (usually bias in the data collection process) and ignores its potential benefits if adequately regulated. And this regulation shall not only take into consideration social and ethical questions about its use, as different systems present different implications, requiring different legal responses, but also must be written in terms that every person impacted by the technology can understand it Baecker_2022.Two non-exclusive aspects may cause the lack of adequate regulation of the matter: (i) the use of different expressions to designate the same things or the same expressions for different things; and (ii) difficulty by policymakers to understand the technology involved in the biometric systems and its potential consequences for the citizens' fundamental rights.The first aspect has recently been addressed by an ISO and IEC Joint Technical Committee, which completed the harmonisation of the vocabulary of the subject field of biometrics through the revision and publication of the ISO/IEC 2382-37:2022 standard. This document intends to clarify the terms related to biometrics, providing a systematic description of the concepts used in that subject field.The second aspect can be addressed by providing a new tool for understanding biometric systems by classifying some of their components. Approaching biometric systems through different categories can provide lawmakers, legal scholars, and the general public with a broader understanding of its impacts and contribute to drafting legislation that adequately focuses on some common minimum standards and potential issues each category may present. This can enhance specific rules for developing, deploying, using, and supervising biometric systems according to their impact on privacy and personal data.This work aims the development of a taxonomy of biometric systems that can clarify and facilitate their understanding and be incorporated into new legislation to regulate better biometric systems' development, deployment, assessment, and accountability. 
It intends to be a dynamic and technologically neutral tool, capable of avoiding the risks of under- and overinclusion that may result from rapid technological advances.While the scientific literature on the technical aspects of biometric technologies is extensive and dates back several years, research on the legal aspects of these technologies is comparatively recent and limited. However, it has gained considerable momentum, primarily due to the impact these technologies have on individuals' privacy and personal data protection Jain_Kumar_2012. Thus, this study contributes to the scientific literature on the legal aspects of biometric technologies.The following section briefly describes biometric systems and some reasons to classify them into smaller groups. In Section <ref>, the proposed taxonomy is presented and discussed. Section <ref> analyses some specific legislations adopted to regulate biometric technologies and how embracing the proposed taxonomy can improve the legal framework. Moreover, in Section <ref>, the conclusions from this paper are presented.§ BACKGROUNDBiometric technologies have long been used in cases where personal identity plays an essential role, and for that reason, numerous applications can benefit from the use of biometric data. At the same time, many legal questions remain to be addressed concerning their deployment.Many authors have presented some definitions for the term ‘biometric systems’, considering slight differences among them. From some definitions presented in the literature, it is possible to derive a more detailed one: biometric systems are machine-based systems that process biometric data and compare it to a database to (i) identify or (ii) verify the identity or a claim of persons. This comparison can be performed through a recognition process (i) fully automated or (ii) assisted by a human being, based on their distinguishing and repeatable biological (physical and physiological) and behavioural characteristics[According to the ISO/IEC 2382-37:2022, “Behavioural and biological characteristics cannot be completely separated which is why the definition uses ‘and’ instead of ‘and/or’.” ].In general terms, a biometric system uses a capture device (which can be any piece of hardware and supporting software and firmware and may comprise several components) to collect digital representations (also known as biometric samples) of biometric characteristics, to which are then applied an algorithm to convert them into a biometric template. These biometric templates are stored in a database that is accessed when on the following occasions, a biometric sample is presented to the system for comparison. After converting the second biometric sample into a template, a comparison can be executed van_der_Ploeg_1999.In the case of biometric systems, a taxonomy can provide a deeper understanding of its components, allowing the enactment of legislation focusing on specific aspects according to different interests at the moment (e.g., focusing on specific applications, impact on individual privacy or data protection), instead of exceedingly broad regulations.The use of taxonomies can contribute to making law's complexities more manageable as it reflects the legal culture of a given legal system, and it can evolve to accommodate doctrinal, legal, and social changes within itself while allowing transferring knowledge from one area to another Mattei_1997. 
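To make the recognition pipeline just described concrete for non-specialist readers, the toy sketch below illustrates how stored biometric templates could be compared against a probe. Everything in it is an illustrative assumption rather than a description of any real system: the 'templates' are short feature vectors, the comparison score is a simple cosine similarity, and the decision threshold is arbitrary. The two functions also anticipate the distinction between verification (one-to-one) and identification (one-to-many) discussed in the taxonomy below.

```python
import numpy as np

# Toy example only: in a real system, templates would be extracted from captured
# biometric samples (e.g. fingerprint minutiae or face embeddings), not typed in.
reference_db = {}        # biometric reference database: identity -> template


def enrol(identity, template):
    reference_db[identity] = np.asarray(template, dtype=float)


def score(a, b):
    # Cosine similarity used as a stand-in comparison score.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def verify(claimed_identity, probe, threshold=0.95):
    """One-to-one comparison: accept or reject a claim of identity."""
    reference = reference_db.get(claimed_identity)
    return reference is not None and score(reference, np.asarray(probe, dtype=float)) >= threshold


def identify(probe, threshold=0.95):
    """One-to-many comparison: return the (possibly empty) candidate list."""
    probe = np.asarray(probe, dtype=float)
    return [identity for identity, reference in reference_db.items()
            if score(reference, probe) >= threshold]


enrol("subject-001", [0.9, 0.1, 0.3])
enrol("subject-002", [0.2, 0.8, 0.5])
print(verify("subject-001", [0.88, 0.12, 0.31]))   # True: the claimed identity is confirmed
print(identify([0.21, 0.79, 0.52]))                # ['subject-002']
```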
As stated by Birks, it is impossible to achieve legal certainty if and so long as taxonomy is neglected because this neglection leads to errors and confusion Birks_1996.In the EU, taxonomies have been used, for example, to create a common classification system for sustainable economic activities (i.e., the 'EU Taxonomy') as part of the Union's sustainable finance framework.The proposed taxonomy is based on extant scientific literature and previous biometric systems classifications. Extant biometric classifications usually are referred to in works focusing on other aspects (generally technical ones) or specific applications of biometric systems and do not delve into the details of the classifications (e.g., Singh_Singh_2013,Shyam_Singh_2014,Ferrag_Maglaras_Derhab_Korba_2018).It also fills the gap identified on the scientific literature and provides a systematic classification of biometric systems according to their several components, applications, and configurations. One aspect of this problem, related to the absence of systematisation in the nomenclature used, has long been discussed in the literature Jain_Ross_2008.The decision to develop a taxonomy of biometric systems is based on the fact that taxonomies play an essential role in research and management because the classification of objects helps researchers and practitioners understand and analyse complex domains. It can also order otherwise disorderly concepts and allow researchers to postulate the relationships among the concepts Nickerson_Varshney_Muntermann_2013.Considering the existence of various options for visualising a taxonomy, e.g., morphological box, hierarchy, or mathematical set Szopinski_Schoormann_Kundisch_2020, the proposed taxonomy is presented using a hierarchical visualisation. since it is well suited for the purpose of structuring and organising knowledge and increasing understanding in the discussion, pedagogy, and research Glass_Vessey_1995.For practitioners, policymakers, and other stakeholders, the taxonomy gives an overview of different aspects of biometric systems and contributes to the development of legal regulations that can focus on specific aspects, improving the protection of the privacy and personal data of individuals.§ PROPOSED TAXONOMYThe proposed taxonomy aims to classify biometric systems according to their several components, applications, and configurations. It does not intend to exhaust the topic, nor be a static tool, but rather be a dynamic instrument to contribute to developing better regulations of biometric systems. Other categories shall be included in the future.The structure of the proposed taxonomy is presented in Figure <ref>. §.§ Nature of CharacteristicsThis category classifies the individual characteristics a biometric system uses to extract repeatable biometric features and recognise a person according to their prevailing nature.It is often common to describe biometric systems as using 'biological' Dargan_Kumar_2020, 'physical' Woodward_1997, 'physiological' Lumini_Nanni_2017,Shaheed_Mao_Qureshi_Kumar_Abbas_Ullah_Zhang_2021, 'behavioural' Jain_Kumar_2012,Ferrag_Maglaras_Derhab_Korba_2018, 'psychological' Schatten_Baca_Rabuzin_2008 and 'psychophysiological' Ross_Banerjee_Chen_Chowdhury_Mirjalili_Sharma_Swearingen_Yadav_2019 characteristics.Although some of these terms may be treated as synonyms at first glance, to develop a taxonomy of biometric systems, it is essential to identify the differences among them. 
Physical characteristics relate to the body itself, while physiological refer to its functions. Albeit the discussion concerning the separation between physical and biological categories is old Haldane_Thompson_Mitchell_Hobhouse_1917, we recognise the complexities involved and will follow the vocabulary proposed by the ISO/IEC 2382-37:2022, where ‘biological’ encompasses both ‘physical’ and ‘physiological’ characteristics.Regarding the differences between ‘behavioural’ and ‘psychological’ characteristics, notice that behaviours can be actions (or inactions) or mannerisms, innate or learned, concerning the surrounding environment, meaning a response that could be from internal or external stimuli which bring out the voluntary or involuntary behaviour. On the other hand, the term ‘psychological’ refers to any individual or organism's cognitive and emotional aspects, consisting of how the organism thinks and feels. It ultimately affects the behavioural response. As it is possible to recognise behaviour patterns, and, in essence, a biometric system is a system developed to recognise patterns van_den_Broek_2010, the term ‘behavioural’ is the more suitable to define the nature of a biometric characteristic.It is also possible to describe another class of characteristics, defined in the literature as ‘soft’ Dantcheva_Elia_Ross_2016,Lumini_Nanni_2017 or ‘light’ Ailisto_Lindholm_Makela_Vildjiounaite_2004 biometrics. This class encompasses, e.g., age, gender, hair colour, and weight. Although these characteristics alone do not allow the unique identification of a person, they are relevant for the discussion because, according to Bisztray_Gruschka_Bourlai_Fritsch_2021, soft biometrics are not covered by the protection granted by Article 9 of the EU GDPR. §.§ FunctionalityThe biometric recognition process encompasses two main functions: biometric verification and biometric identification Prabhakar_Pankanti_Jain_2003. Biometric verification is the process of confirming a claim of identity made by a user of a biometric system by comparing a biometric sample provided by that user with a reference already stored in the system. This mode of operation is also referred to as a ‘one-to-one comparison’ (1:1 comparison), as it compares the acquired information only with those templates corresponding to the claimed identity Kindt_2012, returning a positive result if both information belongs to the same person. For that reason, this function is also called positive. According to Dargan_Kumar_2020, this mode of operation is less expensive and more robust in terms of computation, searching and complexity than identification. It aims to prevent multiple people from using the same identity Prabhakar_Pankanti_Jain_2003.Biometric identification is the process of comparing a biometric sample submitted to the system with the complete database of biometric references pertaining to all individuals already recorded to retrieve the biometric reference attributable to a single individual. This mode of operation is also referred to as a ‘one-to-many comparison’ (1:N comparison) Kindt_2012. This process is intricated, as it operates by searching for one single user from a database with multiple identities Dargan_Kumar_2020.According to Kindt_2012, the identification functionality allows checking if someone is enrolled in a particular list or database, which can be a so-called ‘watch list’ or a ‘block list’. 
However, it does not necessarily provide identity information, only the confirmation that the person is or is not on the list. The output of this process is a list of candidates that can present one or more individuals if the stored characteristic matches with the one presented to the system. It aims to prevent a single person from using multiple identities Prabhakar_Pankanti_Jain_2003.The biometric verification function presents a lower risk to the privacy of individuals, as it does not require the maintenance of a database of biometric characteristics but only the storage of one specific set of characteristics, which can be done centrally or locally. §.§ Quantity of ModesThis category classifies the biometric systems according to the number of modes used during the recognition process. According to the ISO/IEC 2382-37:2022, a mode is a combination of a biometric characteristic type, a sensor type, and a processing method. In this context, a biometric system can be unimodal or multimodal.As the name indicates, a unimodal biometric system encompasses only one biometric characteristic type, sensor type, and processing method Roy_Marcel_2010. This configuration can present several limitations in terms of accuracy, universality, distinctiveness, and acceptability Lumini_Nanni_2017.On the other hand, a multimodal biometric system is one that combines several biometric modes (a process also known as biometric fusion), which can be a combination of different biometric characteristics types, the use of different sensors for capturing the same biometric characteristic, or the combination of different processing methods. This category of biometric systems can overcome some of the limitations presented by unimodal systems Delac_Grgic_2004.Considering the protection of personal rights, combining several biometric characteristics or several instances of the same characteristic contributes to higher levels of privacy Merkle_Kevenaar_Korte_2012,Anand_Donida_Labati_Genovese_Munoz_Piuri_Scotti_Sforza_2016. However, it is also argued that multimodal biometric systems present higher security risks as they deal with multiple traits of the same subject Lumini_Nanni_2017.There are references in the literature on ‘cross-modal’ biometrics, which can be defined as ‘the association of data pertaining to one biometric modality with that of another modality’, aiming to improve the performance of biometric systems Nagrani_Albanie_Zisserman_2018,Ross_Banerjee_Chen_Chowdhury_Mirjalili_Sharma_Swearingen_Yadav_2019.According to Roy_Marcel_2010, in the case of multimodal biometric systems, the modes are considered separately, while in cross-modal systems, there is the exploitation of information which might be embedded in both modes used by the system. In that sense, it can be understood that a ‘cross-modal’ biometric system is a species of a multimodal biometric system. Simply put, all cross-modal biometric systems are multimodal, but not all multimodal biometric systems are cross-modal. §.§ Form of InteractionAn individual's interaction with a biometric system to have their biometric characteristics captured by the system is called a ‘biometric presentation’. This process can be performed with or without their awareness.According to the ISO/IEC 2382-37:2022 Information technology - Vocabulary - Part 37: Biometrics, the presentation made with the subject's awareness is called a ‘cognizant presentation’, while the presentation made without their awareness is called a ‘non-cognizant presentation’. 
The ‘non-cognizant presentation’ is often referred to in the literature as being performed ‘without the knowledge of the user’, Yampolskiy_Govindaraju_2008, 'at a distance' Choi_2022, or 'remote biometric' Donohue_2012.As stated by Article 9 of the EU GDPR, the processing of special categories of personal data, which include biometric data, shall be prohibited, except if performed according to some very strict possibilities, the first of them is the data subject has given explicit consent to the processing of their personal data.This explicit consent can only be given if the data subject is aware that a biometric system is processing their personal data.The permissions for processing given by Article 9(2)(e) and (g), related to ‘personal data which are manifestly made public by the data subject’ and when ‘necessary for reasons of substantial public interest’ are usually invoked to justify the use of remote biometric recognition systems; however, these possibilities are still the focus of intense debate Koptelov_2021, Vilanova_Jou_2021. §.§ Form of CooperationBesides the cognisance of the subject discussed in Section <ref>, the 'biometric presentation' can be carried out with or without the cooperation of the individual.When the individual is motivated to achieve a successful completion of the acquisition process, which encompasses a series of actions (e.g., to obtain an International Civil Aviation Organisation (ICAO) compliant passport photograph, the individual will have to undertake several steps, e.g. remove glasses, look directly at the camera and not smiling, etc., and then the collected information shall be processed to produce a biometric sample) undertaken to effect a biometric capture, they are called a ‘cooperative subject’.The cooperative subject may be classified as subversive or non-subversive according to their willing attempt to subvert the correct and intended biometric system policy and avoid being matched to their own biometric reference.On the other hand, when the individual is motivated not to achieve a successful completion of the acquisition process, they are called ‘uncooperative subjects’. To be an uncooperative subject, they must first be aware that their biometric data is being collected and not provide explicit consent.In the literature, it is possible to find another classification, known as 'stand-off' biometrics Wheeler_Perera_Abramovich_Yu_Tu_2008,Gorodnichy_2009. This category would include systems capable of operating at a greater‐than‐normal distance between subject and sensor and with less‐constrained subject behaviour International_Biometrics_Group_2011, therefore collecting biometric data with minimal or no direct engagement of the subject and, in many cases, even without their knowledge of the capture process, which can be considered a ‘clandestine use’ of the biometric system Woodward_Orlans_Higgins_2003. This situation would involve a ‘non-cognizant presentation’ but cannot be classified according to the form of cooperation, as to be cooperative or uncooperative, the subject must be aware of the collection process. 
§.§ Sample AcquisitionThe process of obtaining and recording, in a retrievable form, signals of biometric characteristics directly from individuals or from representations of those biometric characteristics and later performing additional processing to attempt to produce a suitable biometric sample is denominated as a ‘biometric acquisition process’.This biometric acquisition process needs a device to collect the signal from the biometric characteristic and convert it to a captured biometric sample. This device can be any piece of hardware (and supporting software and firmware) and may comprise components such as an illumination source, one or more sensors, etc.When an individual must interact directly with the device to perform the biometric acquisition process, the biometric system is defined as being ‘invasive’ Jain_Kumar_2012 or 'obtrusive' Ailisto_Lindholm_Makela_Vildjiounaite_2004,Yampolskiy_Govindaraju_2008. On the other hand, when this process can be performed without the direct interaction of the individual with the device, it is named ‘non-invasive’ or ‘unobtrusive’.The obtrusiveness of the biometric acquisition process can interfere with the system’s accuracy by potentially influencing the behaviour of the individuals interacting with the system. Unobtrusive biometric systems have been used to mitigate this issue, especially to assess emotional responses from individuals Gonzalez_Viejo_Fuentes_Howell_Torrico_Dunshea_2019,Fuentes_Wong_Gonzalez_Viejo_2020. §.§ StorageThe biometric recognition process can be performed according to two main functions, as discussed in Section <ref>. Typically, this process involves the comparison of incoming biometric samples with records stored as biometric references in a database.However, biometric systems can depend on several different databases according to the stage of the processing being performed (a general description of these stages can be found in Wayman_1997). For example, a biometric application database stores biometric data (which need not be attributable to a specific subject) and associated metadata developed from and supporting the operation of a biometric application. A biometric enrolment database stores biometric enrolment data records that are attributed to a specific subject and contains non-biometric data associated with biometric reference identifiers. This database can optionally include the biometric reference database, including indexed data records containing biometric references. The merging or unbundling of these databases can be defined by security, privacy, legislation, architecture, performance, or other reasons. Depending on the purpose of the biometric system, the templates used for performing the comparisons can be stored in a central database or recorded locally, e.g., on a smart card issued to the individual Prabhakar_Pankanti_Jain_2003. The decision to store the biometric templates in a centralised or localised database will imply different consequences to biometric systems and the protection of collected personal data. §.§ Level of RiskIn April 2021, the European Commission published a proposal for a regulation laying down harmonised rules on Artificial Intelligence, known as the Artificial Intelligence Act (AI Act). 
Although not focusing specifically on biometric technologies, this proposal presents several references to biometric systems and a three-tier classification of artificial intelligence practices based on the level of risk to fundamental rights that we considered as a possible categorisation of biometric systems in our taxonomy.As proposed in the AI Act, the three levels of risk are: (i) unacceptable, (ii) high, and (iii) low or minimal.AI systems whose use can create unacceptable risks of violating fundamental rights are prohibited from being developed, placed on the market, or used in the EU. An example of a biometric system that fits in this category is the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement. Their use is prohibited unless certain limited exceptions apply, as stated in Article 5(1)(d), (2), and (3) of the AI Act.The AI Act will authorise the placing on the EU market of AI systems considered to be of high risk only if they comply with certain mandatory requirements, as described in Chapter 2 of Title III of the proposal. In this category can be included systems used for biometric identification and categorisation of natural persons, like AI systems intended to be used for the ‘real-time’ and ‘post’ remote biometric identification of natural persons.Biometric systems whose use is not considered to present an unacceptable or high risk are allowed to be freely used and would be subject to minimum transparency obligations.§ EXAMPLES OF APPLICATIONSTo demonstrate the applicability of the proposed taxonomy, we present some situations where it would be possible to develop regulations focusing on specific aspects of biometric systems, thus providing different legal frameworks according to the system's potential impact on individuals' privacy.As previously stated, the enactment of broad and generic legislation aiming to protect personal data and individual privacy can result in the banishment or imposition of moratoria or strict requirements on the use of some specific biometric technologies.To address the issue of government surveillance in the USA, in 2019, the San Francisco Board of Supervisors approved an ordinance banning the ‘acquisition, retention and use of surveillance technology [and] allowing the acquisition and retention of face recognition technology under certain conditions’, making San Francisco the first major city in the United States to ban government use of facial recognition surveillance systems.Following this example, in 2020, the Boston City Council approved an ordinance banning the use of facial recognition technology by Boston police and other city departments amid evidence that the existing systems misidentify people of colour at an exorbitantly high rate.More recently, the Baltimore City Council has enacted the City of Baltimore Ordinance 21-038,which prohibits the ‘Baltimore City government from purchasing or obtaining certain face surveillance technology; […] contracting or subcontracting with another for the purpose of face surveillance technology; prohibiting any person in Baltimore City from obtaining, retaining, accessing, or using certain face surveillance technology or any information obtained from certain face surveillance technology’.Currently, at least 17 communities across the USA have adopted some kind of local ban on the use of facial recognition systems Sheard_Schwartz_2022. 
Also in the USA, a coalition of civil society organisations is proposing a federal ban, instead of any form of regulation, on the use of facial recognition technologies by US law enforcement agencies. It resulted, in 2020, in the introduction of a proposal entitled ‘Stop Biometric Surveillance by Law Enforcement Act’ (H.R. 7235), aiming to prohibit the ‘use of facial recognition technology on any image acquired by body-worn cameras of law enforcement officers’.In Italy, the Data Protection Agency prohibited the use of facial recognition technologies by government agencies until a specific law regulating the issue is adopted, unless the processing is carried out for investigations by the judiciary or the prevention and repression of crimes Garante_2022.While the use of surveillance technology, particularly facial recognition surveillance systems, can violate individual privacy and inhibit freedom of expression, enacting legislation prohibiting the acquisition and use of a specific biometric technology does not solve the problem, as even more intrusive methods of surveillance are constantly being developed Thomas_2019.The strictest regulation of facial recognition technologies may cause a switch to one or several of the other forms of remote surveillance technologies currently being developed (e.g., gait analysis or heartbeat signature), which can be even more invasive and harmful to the privacy and protection of personal data and would not be cover by the current prohibitions. On the other hand, attempting to regulate biometric surveillance technologies one by one is likely to be worthless, as the legislative process cannot follow the pace of development of these technologies.This requires drafting more detailed legislation focusing on the peculiarities of biometric systems to provide a legal framework that adequately protects personal data and privacy without prejudicing the development of new technologies.The AI Act adopts definitions capable of regulating several biometric systems simultaneously, according to their peculiarities. When prohibiting the use of ‘real time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, the AI Act does not focus specifically on facial recognition systems but forbids the development, placing on the market, or use in the EU of any biometric system presenting a ‘non-cognizant presentation’, as discussed in Section <ref>.Systems developed for biometric identification, such as surveillance systems, pose a greater risk to individual privacy as they require the maintenance of extensive biometric characteristic databases of enrolled individuals. Therefore, they must be subject to more detailed regulation than systems developed only for biometric verification, as discussed in Section <ref>. Additionally, the biometric identification process does not confirm a specific identity but instead provides a list of candidates whose biometric characteristics resemble the one presented to the system, increasing the risk of false identification, which can be particularly severe in certain circumstances.Depending on the location of the biometric templates database, it is possible to develop specific rules as well. Systems that maintain smaller, decentralised databases could benefit from softer rules than systems that depend on a bigger, more complex, and centralised database of biometric templates. 
As the identification process requires the existence of a large, centralised database, perhaps it should observe some additional requirements, such as the need to demonstrate the observation of best cybersecurity practices and the maintenance of a user login ledger. Also, it can be imposed that databases containing biometric images must be encrypted, thus inhibiting their compromise in bulk.§ CONCLUSIONDespite the increasing adoption of biometric technologies, the related regulation has progressed at a different pace, particularly in safeguarding individuals' privacy and personal data. The widespread deployment of biometric systems has led to various privacy concerns, including unintended functional scope, unintended application scope, and covert surveillance. Nevertheless, these concerns can be mitigated by enacting more suitable legislation.The implementation of regulations such as the EU GDPR has emphasised the importance of 'designing privacy-preserving methods in the context of biometric systems' Ross_Banerjee_Chen_Chowdhury_Mirjalili_Sharma_Swearingen_Yadav_2019. However, it is equally crucial to establish more comprehensive legislation that specifically targets the unique aspects of these systems, providing a legal framework that effectively safeguards personal data and privacy while still fostering technological advancements.To address this issue, the proposed taxonomy aims to bridge the gap and aid in the enactment of new regulations to govern the development, deployment, assessment, and accountability of biometric systems. The taxonomy categorises biometric systems based on their various components, applications, and configurations, with the intention of being a dynamic tool that can evolve and expand to serve its purpose continually. Although not exhaustive, the taxonomy will continue to incorporate additional categories as it is evaluated for its effectiveness in promoting better regulation of biometric systems Mattei_1997.Some of the proposed categories may present sub-divisions, such as the classification presented by Yampolskiy_Govindaraju_2008, where behavioural biometrics are classified into five categories. However, at the moment, we consider further studies necessary to decide on a more elaborated taxonomy.
http://arxiv.org/abs/2312.00013v1
{ "authors": [ "Luis Felipe M. Ramos" ], "categories": [ "cs.CY", "cs.CR", "J.1; J.4; K.4.1; K.5.2" ], "primary_category": "cs.CY", "published": "20231027102346", "title": "Biometric Technologies and the Law: Developing a Taxonomy for Guiding Policymakers" }
Efficient Fully Bayesian Approach to Brain Activity Mapping with Complex-Valued fMRI Data Zhengxin Wang Clemson University Daniel B. Rowe Marquette University Xinyi Li Clemson University D. Andrew Brown Address for correspondence:D. Andrew Brown, School of Mathematical and Statistical Sciences, Clemson University, Clemson, SC, USA. Email: [email protected] Clemson University===========================================================================================================================================================================================================================================================================================================Functional magnetic resonance imaging (fMRI) enables indirect detection of brain activity changes via the blood-oxygen-level-dependent (BOLD) signal. Conventional analysis methods mainly rely on the real-valued magnitude of these signals. In contrast, research suggests that analyzing both real and imaginary components of the complex-valued fMRI (cv-fMRI) signal provides a more holistic approach that can increase power to detect neuronal activation. We propose a fully Bayesian model for brain activity mapping with cv-fMRI data. Our model accommodates temporal and spatial dynamics. Additionally, we propose a computationally efficient sampling algorithm, which enhances processing speed through image partitioning. Our approach is shown to be computationally efficient via image partitioning and parallel computation while being competitive with state-of-the-art methods. We support these claims with both simulated numerical studies and an application to real cv-fMRI data obtained from a finger-tapping experiment.Key words and phrases: Gibbs sampling, parallel computation, spike and slab prior, variable selection§ INTRODUCTION Functional magnetic resonance imaging (fMRI) is a non-invasive brain imaging technique that records signals generated by changes in blood oxygenation levels associated with neuronal activity. This so-called blood-oxygenation-level-dependent (BOLD) signal thus facilitates indirect monitoring of brain activity over time <cit.>. During task-based fMRI experiments, subjects experience intermittent stimuli, such as viewing images or finger tapping. As the brain responds to a particular stimulus, neuronal activity in certain regions intensifies, leading to increased oxygen consumption. This metabolic change subsequently increases the BOLD response in that region. These BOLD fluctuations impact local magnetic susceptibility, thereby affecting the resulting fMRI signal <cit.>. Empirical studies have demonstrated that the expected BOLD response in an activated brain region, in reaction to binary “boxcar” stimuli (repeated identical on-off periods), can be accurately modeled by convolving the boxcar 0-1 stimulus variable with a gamma or double-gamma hemodynamic response function (HRF) <cit.>.Signals generated by magnetic resonance imaging machines are complex-valued with both real and imaginary components due to forward and inverse Fourier transformations that occur in the presence of phase imperfections <cit.>. However, most fMRI studies for brain activity mapping only analyze the magnitudes of the MR signals, as the phase components are typically discarded as part of preprocessing. To identify active voxels in response to a stimulus, a linear model is commonly used <cit.>. 
Specifically, any voxel (volumetric pixel) whose BOLD signal magnitude significantly changes over time in response to the stimulus will be considered an active voxel. The magnitude-only approach carries several limitations. For one, the magnitude-only models typically operate on the assumption of normally distributed errors. However, even when the original real and imaginary components of the data possess such Gaussian errors, the magnitude follows a Ricean distribution that is approximately normal only for large signal-to-noise ratios (SNRs) <cit.>. Large SNRs are not always present, making the Gaussian assumption less tenable, thereby losing power. Moreover, by discarding phase information, we ignore half of the available data that may contain information about the underlying neurophysiological processes. On the other hand, using complex-valued fMRI (cv-fMRI) data for analysis has shown promising results. By fully incorporating both real and imaginary components, cv-fMRI studies allow for more comprehensive and accurate models with greater power to detect task-related neuronal activity. Such models often handle SNR more appropriately and make full use of the data at hand, thereby yielding potentially more informative insights into brain activity <cit.>.To determine task-related brain activation maps from fMRI signals, fully Bayesian approaches stand out due to their ability to flexibly model spatial and temporal correlations. In this paper, we propose a fully Bayesian model for brain activity mapping using single-subject cv-fMRI time series. Specifically, we aim to determine which voxels' fMRI signal magnitudes (assuming constant phase) change significantly in response to a particular task, as well as the amount of the change. An effective Bayesian approach for fMRI data analysis should fully utilize both the real and imaginary parts of the fMRI data, capture spatiotemporal correlations, provide high prediction accuracy, and be computationally efficient. Although previous studies have made progress in some of these areas <cit.>, no single model has yet achieved all of these goals. Our proposed approach uses autoregressive models for the temporal correlations and Gaussian Markov random fields <cit.> to capture spatial associations in the cv-fMRI data. Moreover, we employ image partitioning and parallel computation to facilitate computationally efficient Markov chain Monte Carlo <cit.> algorithms.The remainder of the paper is organized as follows. Section <ref> details our proposed model, outlines the priors and posteriors, and explains our strategy for brain partitioning. We demonstrate estimation and inference in Section <ref>, where we use simulated datasets to test the performance of our model in terms of the determination of brain activity maps. Section <ref> shows the results of implementing our proposed approach on cv-fMRI data obtained from real finger-tapping experiment. Lastly, Section <ref> summarizes our findings, highlights our contributions, and outlines potential work for future research in this domain.§ MODEL In this section, we present our model for brain activity mapping with cv-fMRI data, including an equivalent real-valued representation. We also describe the brain parcellation strategy for parallel computation.We derive the posterior distribution of the parameters of interest, as well as an MCMC algorithm for accessing it. §.§ Model Formulation FMRI, both real- and complex-valued, are known to exhibit temporal correlations. 
This can be captured by autoregressive (AR) error structure. Thus, our complex-valued model is based on that proposed by <cit.>, with some modifications. For the v^th voxel, v=1, ..., V, the measured signal is modeled as^v=β^v+^vρ^v+^v,where all terms are complex-valued except . The term ^v ∈ℂ^T is the vector of signals at voxel v collected at evenly-spaced time points, where T is the total observed time points, and ∈ℝ^T is the vector of the expected BOLD response associated with a particular stimulus, with β^v ∈ℂ the associated regression coefficient. We assume that low-frequency trends in ^v have been removed by preprocessing, and that both ^v andare centered. The term ^v ∈ℂ^T is the vector of lag-1 prediction errors for the assumed AR(1) model, with ρ^v ∈ℂ the scalar autoregression coefficient. The AR(1) model has been shown to often be sufficient for capturing temporal dynamics in fMRI data <cit.>. We suppose that the error term ^v follows the standard complex normal distribution, that is, ^v∼_T(^v=, ^v=2σ_v^2, ^v=), where _T denotes a complex normal distribution of dimension T with mean ^v, complex-valued, Hermitian and non-negative definite covariance matrix ^v, and complex-valued symmetric relation matrix ^v. In the appendix, we provide details similar to those presented by <cit.> that demonstrate the equivalence between the model of <cit.> and the cv-fMRI model proposed by <cit.> with constant phase. <cit.> and <cit.> provide an equivalent real-valued representation of model (<ref>) as[ _Re^v; _Im^v ]__r^v = [; ]__r[ β_Re^v; β_Im^v ]__r^v + [_Re^v -_Im^v;_Im^v_Re^v ]__r^v[ ρ_Re^v; ρ_Im^v ]__r^v + [ _Re^v; _Im^v ]__r^v,where all terms are real-valued. Using the symbols in the underbraces, this is more concisely written as_r^v=_r_r^v+_r^v_r^v+_r^v,_r^v∼_2T(, ^v),where^v= [ _Re, Re^v _Re, Im^v; _Im, Re^v _Im, Im^v ],and_Re, Re^v=1/2Re(^v+^v)=σ_v^2_T,_Re, Im^v=1/2Im(-^v+^v)=_T, _Im, Re^v=1/2Im(^v+^v)=_T,_Im, Im^v=1/2Re(^v-^v)=σ_v^2_T.Observe that our assumption on the covariance structure here simply means that ^v=σ_v^2_2T. We assign the voxel- specific variances σ_v^2 and autoregression coefficient _r^v Jeffreys prior and uniform prior, respectively. That is, p(σ_v^2) = 1/σ_v^2 and p(_r^v) = 1, for v= 1, …, V.§.§ Brain Parcellation and Spatial PriorsIn addition to temporal dependence, fMRI signals also exhibit spatial associations. These spatial dependencies can originate from several sources, including the inherent noise of the data <cit.>, unmodeled neuronal activation <cit.>, and preprocessing steps such as spatial normalization <cit.>, image reconstruction <cit.>, and spatial smoothing <cit.>. Hence voxels, as artificial partitions of the human brain, often exhibit behavior similar to that of their neighbors. These spatial dependencies can be modeled by imposing spatial structure in the prior on β^v or the hyperparameters in such priors. Brain parcellation <cit.> propose a brain parcellation technique that seeks to identify active voxels within each parcel/partition, and subsequently combines these results to generate a comprehensive whole-brain activity map. The authors partition their brain images into initial parcels of size approximately 500 voxels each. If a parcel is found to be too large or too small, it is broken down into voxels and these voxels are merged into adjacent parcels while ensuring the merged parcels contain less than 1000 voxels each. 
Alternatively, the partitioning strategy could be based on anatomical atlases such as Brodmann areas <cit.>, or based on equal geometric size in the image rather than equal numbers of contained voxels. <cit.> remark that this method of partitioning induces negligible edge effects, that is, the classification of voxels on the borders of parcels is not strongly affected. In our study, we partition the two- or three-dimensional fMRI image into G parcels of approximately equal geometric size. We then process each parcel independently using the same model and method, facilitating parallel computation and hence computational efficiency. We find that our parcellation strategy incurs minimal edge effects, echoing the observations of <cit.>. We discuss the optimal number of parcels and corresponding number of voxels in each parcel in Section <ref>.Prior distribution of β^v For parcel g, g=1,,G, containing V_g voxels, a voxel v (v=1,,V_g) is classified as an active voxel under the stimulus if its regression coefficient of slope β^v=β_Re^v+iβ_Im^v≠0, where i is the imaginary unit. As this is a variable selection problem, we use a spike-and-slab prior <cit.>:β^v|γ_v∼γ_v_1(0, 2τ^2_g, 0)+(1-γ_v)_0,where _0 denotes the point mass at 0. The binary indicator γ_v∈{0, 1} reflects the status of a voxel. Specifically, γ_v=1 indicates that voxel v is responding to the task, while γ_v=0 otherwise. We take τ^2_g ∈ℝ to be constant across all voxels within each parcel. <cit.> shows that a real-valued representation of (<ref>) is given by:_r^v= [ β_Re^v; β_Im^v ]|γ_v ∼_2(,  γ_vτ_g^2).The parcel specific variances τ_g^2 are assigned a Jeffreys prior, p(τ_g^2) = 1/τ_g^2,  g= 1, …, G. Spatial prior on γ_v To further reduce computational effort and to capture pertinent spatial structure with a low-dimensional representation, we employ the sparse spatial generalized linear mixed model (sSGLMM) prior, as developed by <cit.> and <cit.>, which is in turn an extension of the the prior proposed by <cit.>. Such priors use GMRFs and reduce the dimension by examining the spectra of the associated Markov graphs. For voxel v (v=1, ..., V_g) within parcel g (g=1, ..., G), we suppose thatγ_v|η_v iid∼ern{(ψ+η_v)},η_v|_g ∼_1(_v'_g, 1),_g|κ_g ∼_q{, (κ_g_g'_g_g)^-1},κ_g ∼amma(a_κ, b_κ),where (·) denotes the CDF of standard normal distribution and ψ∈ℝ is a fixed tuning parameter. The terms _v', _g, and _g are derived from the adjacency matrix _g of parcel g. The adjacency matrix _g ∈{0, 1}^V_g × V_g is such that _g,uv=1 if voxels u and v are neighbors in the image, and 0 otherwise, where “neighbor” is defined by the user. Typically, voxels that share an edge or a corner are taken to be neighbors. The matrix _g ∈ℝ^V_g × q contains the first q principal eigenvectors of _g, typically with q≪ V_g. The term _v' is a 1×q row vector of “synthetic spatial predictors” <cit.> corresponding to the v^th row of _g. The matrix _g=diag(_g_V_g)-_g is the graph Laplacian. The term _g is a q×1 vector of spatial random effects, and κ_g is the spatial smoothing parameter.The design of the prior distribution for binary indicator γ_v aims to capture both spatial dependencies and the sparsity of active voxels. This reflects the hypothesis that a voxel is more likely to be active/inactive if their neighboring voxels are also active/inactive <cit.>. Furthermore, in the context of simple tasks, only a small percentage of voxels across the entire brain are expected to be active <cit.>. 
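To make these spatial ingredients concrete, the following minimal sketch (illustrative only, not the implementation used here) constructs the adjacency matrix for a small hypothetical 10 × 10 parcel under the edge-or-corner neighbourhood rule, the corresponding graph Laplacian, and the top-q eigenvector basis whose rows play the role of the synthetic spatial predictors; the parcel size and neighbourhood choice are assumptions for demonstration.

```python
import numpy as np

# Small hypothetical parcel: a 10 x 10 grid of voxels (V_g = 100).
nrow, ncol = 10, 10
coords = [(i, j) for i in range(nrow) for j in range(ncol)]
V = len(coords)

# Adjacency matrix A: voxels sharing an edge or a corner are neighbours.
A = np.zeros((V, V))
for u, (iu, ju) in enumerate(coords):
    for v, (iv, jv) in enumerate(coords):
        if u != v and abs(iu - iv) <= 1 and abs(ju - jv) <= 1:
            A[u, v] = 1.0

# Graph Laplacian Q = diag(A 1) - A.
Q = np.diag(A.sum(axis=1)) - A

# M holds the first q principal eigenvectors of A (largest eigenvalues), with q << V_g.
q = 5
eigval, eigvec = np.linalg.eigh(A)            # eigenvalues returned in ascending order
M = eigvec[:, np.argsort(eigval)[::-1][:q]]   # keep the top-q columns

# The v-th row of M acts as the vector of "synthetic spatial predictors" for voxel v.
print(A.shape, Q.shape, M.shape)              # (100, 100) (100, 100) (100, 5)
```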
Thus the sSGLMM prior is well-suited to the work and compatible with the parcellation approach. <cit.> remark that _g is capable of capturing smooth patterns of spatial variation at various scales. The parameters ψ, q, a_κ, and b_κ are fixed a priori and determined based on several factors. In our simulation studies, we examine various values of ψ to identify the one providing the highest prediction accuracy. For real human datasets, the initial value of ψ is set to ^-1(0.02)=-2.05 for all voxels, following the suggestion of <cit.>. This value can be further adjusted based on the proportion of active voxels detected in previous experiments. We set q=5 (when V_g is approximately 200) per <cit.>, indicating that such a reduction is often feasible. We find there is no detectable difference using larger q. The shape and scale parameters of the gamma distribution, a_κ=1/2 and b_κ=2000 respectively, are selected to yield a large mean for κ_g (a_κb_κ=1000). This choice serves to reduce the chances of creating misleading spatial structures in the posterior distribution, mitigating the risk of identifying spurious brain activity patterns that could be attributed to noise or other confounding factors. §.§ MCMC algorithm and posterior distributions We use Gibbs sampling to obtain the joint and marginal posterior distributions of parameters of interest. The necessary full conditional distributions and derivations are outlined in the appendix.The fixed-width approach proposed by <cit.> is used to diagnose convergence. Specifically, we consider the algorithm to have converged if the Monte Carlo standard error (MCSE) of any γ_v is less than 0.05. In our numerical studies that follow, we run 10^3 iterations. We take the means of the sampled parameters (after discarding burn-in iterations) as the point estimates. Active voxels are determined by γ_v>0.8722 <cit.>, and β^v_Re and β^v_Im are used to construct the estimated magnitude maps, computed as √((β_Re^v)^2+(β_Im^v)^2).§ SIMULATION STUDIESIn this section, we simulate three types of two-dimensional complex-valued time series of fMRI signals: data with iid noise, data with noise following AR(1) temporal dependence, and a more realistic simulated iid dataset imitating the human brain. We evaluate three models based on their performance in both classification and estimation fidelity. The models under consideration include: * The model of <cit.>, which uses a sSGLMM prior for magnitude-only data and incorporates brain parcellation (denoted as MO-sSGLMM).* The model of <cit.> for cv-fMRI, which does not incorporate a spatial prior or brain parcellation (denoted as CV-nonSpatial). In this model, the prior for γ_v in model (<ref>) is taken to be γ_v|η_viid∼ern(η_v), η_v∼eta(1, 1).* Our proposed model, which uses an sSGLMM prior for complex-valued data and incorporates brain parcellation (denoted as CV-sSGLMM).All three models are fully Bayesian, suitable for autoregressive noise, and leverage Gibbs sampling to approximate their respective posterior distributions. Both MO-sSGLMM and CV-sSGLMM use the best combination of parcel number G and tuning parameter ψ in terms of the prediction accuracy (G=9 and ψ=Φ^-1(0.47) for both), and determine the active voxels by thresholding at γ_v>0.8722. The CV-nonSpatial model uses a threshold of 0.5, as suggested by <cit.>.Following the model comparisons, we concentrate on our proposed CV-sSGLMM model to examine the impacts of the tuning parameter ψ, the number of parcels G, and the length of time series T. 
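As a brief illustration of how sampler output is turned into the maps compared below, the sketch that follows thresholds posterior inclusion probabilities at 0.8722 and combines the posterior means of the real and imaginary coefficients into a magnitude map; the random arrays merely stand in for actual MCMC draws, and the 50 × 50 image size is assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
V, n_draws = 2500, 800          # a 50 x 50 image and the number of retained draws (assumed)

# Random stand-ins for post-burn-in MCMC output of gamma_v, beta_Re^v and beta_Im^v.
gamma_draws   = rng.integers(0, 2, size=(n_draws, V))
beta_re_draws = rng.normal(0.0, 0.02, size=(n_draws, V))
beta_im_draws = rng.normal(0.0, 0.02, size=(n_draws, V))

# Posterior means serve as point estimates.
p_active = gamma_draws.mean(axis=0)       # posterior inclusion probability per voxel
beta_re  = beta_re_draws.mean(axis=0)
beta_im  = beta_im_draws.mean(axis=0)

# Activation map: threshold the inclusion probabilities at 0.8722.
active = p_active > 0.8722

# Magnitude map: sqrt(beta_Re^2 + beta_Im^2), set to zero for inactive voxels.
magnitude = np.sqrt(beta_re**2 + beta_im**2) * active

activation_map = active.reshape(50, 50)
magnitude_map  = magnitude.reshape(50, 50)
```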
Additional results for marginal posterior distributions, time series, and phase are provided in the appendix.All of the results are generated by running the code on a custom-built desktop computer with an Intel Core i9-9980XE CPU (3.00GHz, 3001 Mhz, 18 cores, 36 logical processors), NVIDIA GeForce RTX 2080 Ti GPU, 64 GB RAM, and operating on Windows 10 Pro. §.§ Simulated datasets with IID noise and AR(1) noiseWe discuss how we generate the true maps and simulate fMRI signals here, followed by the results.Designed stimulus, expected BOLD response, and true activation/magnitude map We use the same pattern of stimulus as simulated by <cit.>. The designed stimulus is a binary signalconsisting of five epochs, each with a duration of 40 time points, resulting in a total of T=200 time points. Within each epoch, the stimulus is turned on and off for an equal duration of 20 time points. The expected BOLD response, denoted as , is generated by convolving the stimulus signal with a double-gamma HRF. Both the designed stimulus and expected BOLD response, depicted in Figures <ref>a and <ref>b, are shared for all simulated datasets.To simulate 100 replicates on a 50× 50 panel, we use thefunction in thelibrary <cit.> in<cit.>. Each map features three non-overlapping active regions with varying characteristics such as centers, shapes, radii, and decay rates as shown in Table <ref>. The central voxel of an active region has a magnitude of one, while the magnitudes of the surrounding active voxels decrease based on their distance to the center and the decay rate ϱ. These magnitudes are further scaled by a multiplier of 0.04909 (which determines to the contrast-to-noise ratio via Eq. (<ref>)), yielding a range of 0 to 0.04909. Examples of the true activation map and true magnitude map are shown in Figures <ref>c and <ref>d. Simulating fMRI signals with non-AR noise and AR(1) noise We simulate 100 datasets with iid noise using the expected BOLD response and each true magnitude map for CV-nonSpatial and CV-sSGLMM. We then extract the moduli to use with MO-sSGLMM. The cv-fMRI signal of voxel v at time t is simulated by:y_t, Re^v =(β_0+β^v_1x_t) cos(θ)+ε_t, Re^v, ε_t, Re^v∼(0, σ^2), y_t, Im^v =(β_0+β^v_1x_t) sin(θ)+ε_t, Im^v, ε_t, Im^v∼(0, σ^2),where x_t represents the expected BOLD response from Figure <ref>b at time t, and β^v_1 refers to the true magnitude of voxel v taken from Figure <ref>d. The phase, θ, is set to be the constant π/4, and σ is set to the constant 0.04909. As a result, the maximum contrast-to-noise ratio (CNR) is maxβ_1^v/σ=1. We determine the intercept β_0 based on the signal-to-noise ratio (SNR) such that SNR=β_0/σ=10, leading to β_0=0.4909.Next, we generate 100 datasets with AR(1) noise in a similar manner as Eq. (<ref>).The difference lies in the simulation of error terms, which is done so that[ ε_t, Re^v; ε_t, Im^v ] = [0.2 -0.9;0.90.2 ][ ε_t-1, Re^v; ε_t-1, Im^v ] + [ ξ_Re^v; ξ_Im^v ] ,[ ξ_Re^v; ξ_Im^v ]∼_2(, σ^2).This is a real-valued equivalent of the complex AR(1) error model,ε_t^v = (0.2+0.9i)ε_t-1^v+ξ_v, ξ_v∼_1(0, 2σ^2, 0).Results Results from our simulations are displayed in Figure <ref>, which depicts the estimated maps for a single dataset. The yellow grid lines correspond to the partitions in cases of brain parcellation. The performance across the three models reveals a consistent trend. 
All models perform well for the iid case, while MO-sSGLMM fails to detect any activity in the presence of the AR(1) noise.This is because the complex-valued AR structure in equation (<ref>) cannot be recovered after extracting the moduli of the data. Further quantitative results, such as the receiver operating characteristic area under curve (ROC-AUC), true vs estimated magnitude regression slope, the concordance correlation coefficient (CCC), and true vs estimate pairwise mean square error (X-Y pairwise MSE), are illustrated in Figure <ref>. These offer a comprehensive performance evaluation in terms of classification and estimation. Figure <ref> shows similar comparative performance as can be gleaned from Figure <ref>. All procedures do well in the presence of iid noise, whereas both complex-valued models considerably outperform the magnitude-only model when the errors are correlated. In each case, we can observe slightly better MSE, CCC, and estimation fidelity (Figure <ref>(b), (c), (d), (f), (g), (h)), but these are small when compared to the outperformance of the complex-valued models versus magnitude only. Table <ref> summarizes the average metrics across 100 iid noise and 100 AR(1) noise replicated datasets. In the iid case, the F1-score, slope, CCC, and X-Y MSE clearly favor MO-sSGLMM, followed by our CV-sSGLMM, and CV-nonSpatial ranks last. This demonstrates the proficiency of MO-sSGLMM on datasets where the necessity to capture complex-valued noise dependence is not crucial. The ROC-AUC score of MO-sSGLMM is comparable to that of CV-nonSpatial, and slightly surpasses that of our proposed CV-sSGLMM. In the analysis of AR(1) datasets, our proposed CV-sSGLMM shows a clear advantage over the two competitors. Due to MO-sSGLMM's limitations already shown, we focus our comparison here between CV-nonSpatial and CV-sSGLMM. The CV-sSGLMM outperforms CV-nonSpatial across multiple metrics, such as F1-score, slope, CCC, and X-Y MSE. The superior performance of the CV-sSGLMM in terms of both classification and estimation can be attributed to the inclusion of the sSGLMM prior. In addition to our results, the value of using spatial priors to enhance the model's performance on correlated datasets has been demonstrated by <cit.>. Perhaps the most notable and favorable performance of our proposed model is in the vastly computational efficiency due to the brain parcellation and parallel computation, 5.39 seconds with CV-sSGLMM versus 42.2 seconds for the CV-nonSpatial. In other words, we obtain results as good or better than current state-of-the-art, but are able to do so 87% faster. Effects of experimental and parameter settings on CV-sSGLMM The performance of our CV-sSGLMM is determined in part by three choices: the tuning parameter ψ, the parcel number G, and the time length T. Here we assess their influence using the AR(1) data exclusively. For a single dataset, estimated activation maps generated from varying these settings are depicted in Figure <ref>, with their corresponding estimated magnitude maps displayed in Figure <ref>. A summary of average metrics over 100 replicated datasets is shown in Table <ref>.Figure <ref>(a)-(c) illustrates the results using ψ values of Φ^-1(0.02), Φ^-1(0.20), Φ^-1(0.35), respectively, which govern the a priori likelihood of a voxel being determined active. 
Along with Figure <ref>(f) using ψ=Φ^-1(0.47), we can observe a trade-off in selecting ψ: larger values lead to an increase in active voxels and false positives, whereas smaller values result in fewer active voxels and increased false negatives, all of which are as expected. In a simulated scenario, the optimal ψ can be determined by maximizing metrics like prediction accuracy or F1-score. In practical applications, ψ can be tuned to achieve a target percentage of active voxels based on prior experiments, cross-validation, WAIC <cit.>, etc. The effects of varying G=1, 4, 16 are exhibited in Figure <ref>(d)-(f), respectively. Along with Figure <ref>(f) using G=9, we observe negligible edge effects, that is, voxel classifications at parcel borders remain unaffected. Some metrics, such as F1-score, slope, CCC, and X-Y MSE, even exhibit slight improvements through G=1, 4, 9. Moreover, the computation time drops significantly as G increases, as expected. These results coincide with the findings of <cit.>. However, with G=16, performance starts decreasing compared to that of using G=9 due to insufficient numbers voxels within each parcel. The choice of G and corresponding parcel size V_g can be guided by prior experience or domain-specific knowledge of, e.g., anatomical regions.Figure <ref>(g)-(i) depicts the impact of varying the time length T=80, 500, 1000, respectively. The length of each epoch remains the same as 40 time points so that the number of epochs will change correspondingly. Along with Figure <ref>(f) using T=200, we observe improvements in both classification and estimation as T increases. in this case, an accuracy of 100% is achieved when T=1000, and its estimated magnitude map almost perfectly reproduces the truth. It is worth noting that we adopt a relatively low ψ=Φ^-1(0.02) for T=1000, suggesting a stringent selection of active voxels. Thus, when an ample number of repeated epochs are available for the stimulus, the signal is strong enough to let us select most of the positive voxels while avoiding false positives. This suggests that choosing a low ψ can enhance discriminative capability.§.§ Realistic simulationHere we simulate a dataset similar that that done by <cit.> in which we mimic the environmental conditions of a human brain. The data contain iid noise. The dataset comprises seven slices, each of size 96× 96 voxels, with signals generated across T=490 time points. The brain's active regions are two 5× 5× 5 cubes formed by two 5× 5 squares within each of slice 2-6. In contrast to the data produced by Eq. (<ref>), which exhibits a constant phase, this dataset has a dynamic phase. The cv-fMRI signal for voxel v at time t is thus simulated asy_t, Re^v =(β_0+β^v_1x_t) cos(θ_0+θ^v_1x_t)+ε_t, Re^v, ε_t, Re^v∼(0, σ^2),y_t, Im^v =(β_0+β^v_1x_t) sin(θ_0+θ^v_1x_t)+ε_t, Im^v, ε_t, Im^v∼(0, σ^2).The slice with the greatest maximum magnitude and phase CNR is slice 4 (Eq. (<ref>)):CNR_Mag =(maxβ_1^v)/σ=0.5/1,CNR_Ph =(maxθ_1^v)/SNR_Mag=(π/120)/25.Activation then decreases from slice 4 to slices 3 and 5 and is weakest in slices 2 and 6. Slices 1 and 7 exhibit no activation. It's important to note that, with dynamic phase, the model from <cit.> is not equivalent to that from <cit.> as indicated in <cit.>. This discrepancy suggests the proposed model is under model misspecification in this scenario. 
However, as both β_Re^v and β_Im^v in model (<ref>) include magnitude and phase information, and given that prior studies <cit.> have used the <cit.>-based model to process this dataset, we deem it worthwhile to test our model on these data. We set G=49 and a threshold of 0.8722 for both MO-sSGLMM and CV-sSGLMM, with ψ set to Φ^-1(0.50) and Φ^-1(0.11), respectively. For CV-nonSpatial, the threshold is set to 0.5, again following the advice of <cit.>. Activation maps are presented in Figure <ref>. We indeed observe that our model tends to overestimate the magnitude.Since the magnitudes are overestimated, we scale the estimated magnitude to the range of true magnitude in the corresponding slice. True and (scaled) estimated magnitude maps are displayed in Figure <ref>. Further numerical results, displayed in Table <ref>, show a pattern of the CV-sSGLMM model outperforming both the MO-sSGLMM and CV-nonSpatial models across different slices in terms of detecting true positives (TP). It should be noted, however, that the MO-sSGLMM model achieves a 100% precision (no false positives, FP) for most slices, albeit at the cost of a low recall rate (high false negatives, FN), indicating that the model is more conservative in identifying activated voxels. For the CV-nonSpatial model, although it exhibits good precision across the slices, the recall rates remain lower, specifically in the slices with weaker activation strengths (slices 2 and 6). This performance pattern suggests that the model struggles to detect activations in areas with low CNR, highlighting a limitation when dealing with real-world fMRI datasets that often feature low CNR. In comparison, the CV-sSGLMM model consistently detects a higher number of true positives across all slices, demonstrating a stronger detection power even in slices with weak activations (slices 2 and 6). This underscores the benefit of incorporating spatial information, which enhances the model's capacity to detect weaker activations in the presence of complex noise conditions. The model also maintains a 100% precision across all slices, suggesting that the inclusion of spatial information does not lead to an increase in false positives. As anticipated, both the MO-sSGLMM and CV-sSGLMM models, which employ brain parcellation, demonstrate superior computational efficiency, even when the parallel computation is gated by a 16-core CPU. This advantage becomes even more pronounced when handling larger datasets. § ANALYSIS OF HUMAN CV-FMRI DATAIn this study, we consider the fMRI dataset that is analyzed by <cit.>, which is acquired during a unilateral finger-tapping experiment on a 3.0-T General Electric Signa LX MRI scanner. The experimental paradigm involves 16 epochs of alternating 15s on and 15s off periods, leading to T=490 time points, including a warm-up period. The data are sourced from seven slices, each of size 96× 96. For the MO-sSGLMM and CV-sSGLMM models, we set the parcel number to G=25 and again use a threshold of 0.8722 on the inclusion probabilities. The tuning parameter ψ is set to Φ^-1(0.02) and Φ^-1(0.1), respectively. For CV-nonSpatial, the threshold is set to 0.5 as before. The consequent activation and magnitude maps generated from these analyses are depicted in Figure <ref> and Figure <ref>. With computation times closely paralleling those in Section <ref> due to comparable dataset sizes, all three models show the same patterns of activation maps. 
Our CV-sSGLMM consistently demonstrates superior prediction power, particularly evident in the weakly active areas observed in slices 1 and 7, maintaining its consistent performance as discussed in Section <ref>. The active regions identified through our CV-sSGLMM method align with those reported in <cit.>, reinforcing the validity of our results and the efficacy of our proposed approach. More importantly, the active regions correspond to areas of the brain that are known to typically be engaged in finger-tapping tasks, affirming the biological relevance of our findings.§ CONCLUSIONIn this study, we propose an innovative fully Bayesian approach to brain activity mapping using complex-valued fMRI data. The proposed model, which incorporates both the real and imaginary components of the fMRI data, provides a holistic perspective on brain activity mapping, overcoming the limitations of the conventional magnitude-only analysis methods. This model showcases the potential to detect task-related activation with higher accuracy. The adoption of an autoregressive error structure, together with spatial priors, allows us to capture both temporal and spatial correlations in brain activity. Moreover, the employment of brain parcellation and parallel computation significantly enhances the model's computational efficiency. Analyses of both simulated and real fMRI data underscores the benefits of our approach, particularly when temporally-correlated, complex-valued noise is present.There are still areas for exploration. For instance, while we achieve significant results by assuming the phases are constant, we believe that future Bayesian studies based on the dynamic phase model of <cit.> should be proposed to account for potential phase variations during brain activity <cit.>. Additionally, our current proposal assumes circular data, that is, ^v= for ^v in model (<ref>), implying that β^v_Re and β^v_Im are independent. It would be prudent to develop a more generalized non-circular model where ^v≠ to account for the possibility of non-circular data. § ACKNOWLEDGEMENT Research reported in this publication was supported by the National Institute Of General Medical Sciences of the National Institutes of Health under Award Number P20GM139769 (Xinyi Li), National Science Foundation awards DMS-2210658 (Xinyi Li) and DMS-2210686 (D. Andrew Brown).The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or the National Science Foundation. plainnatAppendix§ DEMONSTRATING THE EQUIVALENCE BETWEEN MODELS USING REAL AND IMAGINARY PARTS, AND MODELS USING MAGNITUDE AND PHASEThis appendix is influenced by <cit.>, and seeks to demonstrate that, when there's only one stimulus: * <cit.>'s model is approximately equivalent to <cit.>'s dynamic phase model when the intercept in the magnitude is absent.* <cit.>'s model is fully equivalent to <cit.>'s constant phase model.For the first scenario, assuming no intercept in the magnitude, the v^th voxel's complex-valued fMRI signal can be simulated using <cit.>'s dynamic phase model as per equation:_Re^v =D_Re^vβ^v, _Im^v =D_Im^vβ^v,where _Re^v and _Im^v are simulated complex-valued fMRI vectors of length T, andis the expected BOLD response of length T with β^v as the scalar magnitude. The matrices D_Re^v and D_Im^v are T× T and diagonal with cos(θ_0+θ_1x_t) and sin(θ_0+θ_1x_t) as the t^th diagonal element, which represent the dynamic phase. 
By equating this with the means of the <cit.>'s model (without intercept), we have:β_Re^v =D_Re^vβ^v, β_Im^v =D_Im^vβ^v,where β_Re^v and β_Im^v are the scalar real and imaginary parts of the regression coefficient, and the maximum likelihood estimators of them are:β_Re^v =(')^-1'D_Re^vβ^v, β_Im^v =(')^-1'D_Im^vβ^v,then,β_Re^v,2+β_Im^v, 2 =[(')^-1'D_Re^vβ^v]^2+[(')^-1'D_Im^vβ^v]^2=β^v, 2(')^-2[('D_Re^v)^2+('D_Im^v)^2]=β^v, 2(')^-2['D_Re^v'D_Re^v+'D_Im^v'D_Im^v]=β^v, 2(')^-2['(D_Re^v'D_Re^v+D_Im^v'D_Im^v)].Notice that D_Re^v'D_Re^v and D_Im^v'D_Im^v are T× T symmetric matrices with the following terms as the (i, j)th element, respectively:x_ix_jcos(θ_0+θ_1x_i)cos(θ_0+θ_1x_j),x_ix_jsin(θ_0+θ_1x_i)sin(θ_0+θ_1x_j).Using the fact that cos(a)cos(b)+sin(a)sin(b)=cos(a-b), we have:D_Re^v'D_Re^v+D_Im^v'D_Im^v= '⊙,whereis a T× T symmetric matrix and _(i, j) =cos(θ_1(x_i-x_j)), and ⊙ denotes the point-wise product. It's important to note that in both simulated and real data,closely approximates the all-ones matrix _T× T. This is because the difference between x_i and x_j is typically small, even when considering the extreme values. After multiplying this small difference with a small θ_1 and then taking the cosine, the result tends to be very close to 1. Thus,√(β_Re^v, 2+β_Im^v, 2)≈√(β^v, 2(')^-2['(')])=β^v.In this case, <cit.>'s model can be considered as approximately equivalent to <cit.>'s dynamic phase model. For the second scenario, when the phase is constant and the intercept is included in the magnitude, using <cit.>'s constant phase model to simulate the data, we get:_Re^v =Λ_Re^v[ ][ β_0^v; β_1^v ], _Im^v =Λ_Im^v[ ][ β_0^v; β_1^v ],where Λ_Re^v=cos(θ) _T× T and Λ_Im^v=sin(θ) _T× T. Upon equating this with the means of the <cit.>'s model, we have:[ ][ β_Re, 0^v; β_Re, 1^v ] =Λ_Re^v[ ][ β_0^v; β_1^v ], [ ][ β_Im, 0^v; β_Im, 1^v ] =Λ_Im^v[ ][ β_0^v; β_1^v ].Since Λ_Re^v and Λ_Im^v don't contain , we can remove the means so that to remove the intercept in the model, which yields:_cβ_Re, 1^v =Λ_Re^v_cβ_1^v, _cβ_Im, 1^v =Λ_Im^v_cβ_1^v,where _c is the centered . This becomes similar to the previous model:β_Re, 1^v, 2+β_Im, 1^v, 2=β_1^v, 2(_c'_c)^-2 [_c'(_c_c'⊙)_c]=β_1^v, 2(_c'_c)^-2[_c'(_c_c')_c]=β_1^v, 2,asis exactly _T× T now. Consequently, <cit.>'s model is found to be equivalent to <cit.>'s constant phase model.§ FULL CONDITIONAL POSTERIOR DISTRIBUTIONS IN THE CV-SSGLMM MODEL FOR GIBBS SAMPLINGThis appendix gives full conditional posterior distributions of γ_v, _r^v, _r^v, σ_v^2, τ_g^2, η_v, _g, κ_g for Gibbs sampling. All derivations will omit the subscript of g (parcel index) from the parcel-level parameters τ_g^2, _g, and κ_g, since all parcels run the algorithm identically.§.§ Full conditional distribution of γ_v For the voxel v (v=1, ..., V):p(γ_v=1|_r^v, _r^v, _r^v, σ_v^2, τ^2, η_v) =p(γ_v=1|η_v)/p(γ_v=1|η_v)+L_0/L_1·p(γ_v=0|η_v),whereL_0 =p(_r^v, _r^v, _r^v, σ_v^2, τ^2|γ_v=0),L_1 =p(_r^v, _r^v, _r^v, σ_v^2, τ^2|γ_v=1).To determine L_0 and L_1, which are the joint distributions of _r^v, _r^v, _r^v, σ_v^2, τ^2 under the condition of γ_v=0 and γ_v=1, respectively, we recall the CV-sSGLMM model:^v=β^v+^vρ^v+^v,^v∼_T(, 2σ_v^2, ).Applying Prais-Winsten transformation (order one backward operator) on ^v and , we have:^v* =_now^v-ρ^v_lag1^v, ^v* =_now-ρ^v_lag1,where _now^v and _lag1^v are vectors containing the last and the first T-1 elements in ^v, respectively. The vectors _now and _lag1 are fromby the same rule of truncation. 
Now it becomes a model without autoregressive errors:^v*=^v*β^v+^v,^v∼_T-1(, 2σ_v^2, ),with equivalent real-valued representation:[ _Re^v*; _Im^v* ]__r^v* = [_Re^v* -_Im^v*;_Im^v*_Re^v* ]__r^v*[ β_Re^v; β_Im^v ]__r^v + [ _Re^v; _Im^v ]__r^v.Using the symbols in underbraces for a more compact form:_r^v*=_r^v*_r^v+_r^v, _r^v∼_2(T-1)(, σ_v^2).Therefore, when γ_v=1:L_1=p(_r^v, _r^v, _r^v, σ_v^2, τ^2)∝ p(_r^v|_r^v, _r^v, σ_v^2) p(_r^v|τ^2),wherep(_r^v|_r^v, _r^v, σ_v^2) =(2πσ_v^2)^-2(T-1)/2 exp{-1/2σ_v^2(_r^v*-_r^v*_r^v)'(_r^v*-_r^v*_r^v)},p(_r^v|τ^2) =(2πτ^2)^-2/2 exp{-1/2τ^2(_r^v)'(_r^v)}.Similarly, when γ_v=0:L_0=p(_r^v, _r^v=, _r^v, σ_v^2, τ^2)∝ p(_r^v|_r^v=, _r^v, σ_v^2) p(_r^v=|τ^2),wherep(_r^v|_r^v=, _r^v, σ_v^2) =(2πσ_v^2)^-2(T-1)/2 exp{-1/2σ_v^2(_r^v*)'(_r^v*)},p(_r^v=|τ^2) =1.Integrating _r^v out of L_1 yields:L_1^* = (2πσ_v^2)^-2(T-1)/2·σ_v^2/τ^2· exp{-1/2σ_v^2(_r^v*)'_r^v*}{ det[(_r^v*)'_r^v*+σ_v^2/τ^2]}^-1/2· exp{1/2σ_v^2[(_r^v*)'_r^v*]'[(_r^v*)'_r^v*+σ_v^2/τ^2]^-1[(_r^v*)'_r^v*]}.Then, the ratio is:L_0/L_1^* =τ^2/σ_v^2 { det[(_r^v*)'_r^v*+σ_v^2/τ^2]}^1/2/ exp{1/2σ_v^2[(_r^v*)'_r^v*]'[(_r^v*)'_r^v*+σ_v^2/τ^2]^-1[(_r^v*)'_r^v*]}.Using this ratio and p(γ_v=1|η_v)=(ψ+η_v), the full conditional distribution of γ_v is:π(γ_v|_r^v, _r^v, _r^v, σ_v^2, τ^2, η_v)=ern(P),whereP=p(γ_v=1|_r^v, _r^v, _r^v, σ_v^2, τ^2, η_v) =(ψ+η_v)/(ψ+η_v)+L_0/L_1^*·[1-(ψ+η_v)]. §.§ Full conditional distribution of _r^v For the voxels with γ_v=0, we assign them _r^v=. For the voxels with γ_v=1:π(_r^v|_r^v, _r^v, σ_v^2, τ^2)∝p(_r^v|_r^v, _r^v, σ_v^2)p(_r^v|τ^2)∝ exp{-1/2σ_v^2(_r^v*-_r^v*_r^v)'(_r^v*-_r^v*_r^v)} exp{-1/2τ^2(_r^v)'(_r^v)}∝ exp{-1/2[(_r^v)'(_r^v*)'_r^v*/σ_v^2_r^v-2(_r^v)'(_r^v*)'/σ_v^2_r^v*+(_r^v)'1/τ^2(_r^v)]}= exp{-1/2[(_r^v)'τ^2(_r^v*)'_r^v*+σ_v^2/σ_v^2τ^2_r^v-2(_r^v)'(_r^v*)'/σ_v^2_r^v*]},which is a kernel of multivariate normal distribution. Thus:π(_r^v|_r^v, _r^v, σ_v^2, τ^2, γ_v=1)=_2(__r^v, __r^v),where__r^v =[τ^2(_r^v*)'_r^v*+σ_v^2/σ_v^2τ^2]^-1(_r^v*)'/σ_v^2_r^v*=[(_r^v*)'_r^v*+σ_v^2/τ^2]^-1(_r^v*)'_r^v*, __r^v =[τ^2(_r^v*)'_r^v*+σ_v^2/σ_v^2τ^2]^-1=σ_v^2[(_r^v*)'_r^v*+σ_v^2/τ^2]^-1.Full conditional distribution of _r^v Since _r^v is the autoregression coefficient for AR(1) errors, let:^v=^v-β^vbe the predicted errors. Let _now^v and _lag1^v be the vectors containing the last and the first T-1 components in ^v, then:_now^v=_lag1^vρ^v+^v, ^v∼_T-1(, 2σ_v^2, ),with equivalent real-valued representation:[ _now, Re^v; _now, Im^v ]__now, r^v = [_lag1, Re^v -_lag1, Im^v;_lag1, Im^v_lag1, Re^v ]__lag1, r^v[ ρ_Re^v; ρ_Im^v ]__r^v + [ _Re^v; _Im^v ]__r^v.Using the symbols in underbraces for a more compact form:_now, r^v=_lag1, r^v_r^v+^v, ^v∼_2(T-1)(, σ_v^2).Assigning a uniform prior, p(_r^v)∝ 1, the full conditional distribution of _r^v is:π(_r^v|_r^v, ·)=_2(__r^v, __r^v),where__r^v =[(_lag1, r^v)'_lag1, r^v]^-1(_lag1, r^v)'_now, r^v, __r^v =σ_v^2[(_lag1, r^v)'_lag1, r^v]^-1. §.§ Full conditional distribution of σ_r^v The full conditional distribution of σ_r^v is also from:_now, r^v=_lag1, r^v_r^v+^v, ^v∼_2(T-1)(, σ_v^2).Assigning a Jeffreys prior, p(σ_v^2)∝ 1/σ_v^2, we have:π(σ_v^2|_r^v, ·)=(2(T-1)/2, 1/2(_now, r^v-_lag1, r^v_r^v)'(_now, r^v-_lag1, r^v_r^v)). §.§ Full conditional distribution of τ^2 The full conditional distribution of τ^2 should be related to the number of active voxels and could be imposed a Jeffreys prior, p(τ^2)∝ 1/τ^2. 
After updating =(γ_1, , γ_V)' and filtering _r=(β_Re^1,⋯,β_Re^V,β_Im^1,⋯,β_Im^V)' byto make them strictly zeros and non-zeros in each iteration, we have:π(τ^2|_r)=(2'/2, 1/2_r'_r). §.§ Full conditional distribution of η_v Without considering the condition of γ_v, we focus on π(η_v|κ) first. Let _s=' and _κ s=κ_s=κ', then:π(η_v|κ) =∫π(η_v, |κ)d=∫π(η_v|)π(|κ)d=∫(_v', 1)×(, _κ s^-1)d∝∫ exp{-η_v^2-2_v'η_v+'_v_v'/2} exp{-'_κ s/2}d= exp{-η_v^2/2}∫ exp{-1/2['(_κ s+_v_v')-2_v'η_v]}d= exp{-η_v^2/2[1-_v'(_κ s+_v_v')^-1_v]^-1}.Thus, η_v|κ follows normal distribution with mean 0 and variance:[1-_v'(_κ s+_v_v')^-1_v]^-1.By Woodbury's matrix identity:[1-_v'(_κ s+_v_v')^-1_v]^-1=1+_v'_κ s^-1_v.That is:π(η_v|κ)=(0,1+_v'_κ s^-1_v).If the condition of γ_v is considered, by <cit.>:π(η_v|γ_v, )= (_v',1,0, ∞) ifγ_v=1 (_v', 1, -∞, 0) ifγ_v=0,wheredenotes the truncated normal distribution. Thus, when γ_v=1:π(η_v|γ_v ,κ)=∫π(η_v, |γ_v, κ)d=∫π(η_v|γ_v, )π(|κ)d=∫(_v',1,0, ∞)×(, _κ s^-1)d=(0, 1+_v'_κ s^-1_v,0,∞).Similarly, when γ_v=0:π(η_v|γ_v ,κ)=(0, 1+_v'_κ s^-1_v, -∞, 0).Notice that the variance 1+_v'_κ s^-1_v=1+_v'(κ_s)^-1_v. As κ functions as a spatial smoothing parameter, it can be moved out of the parentheses to control the entire variance and play the same role. That is:π(η_v|γ_v ,κ)= (0,1/κ(1+_v'_s^-1_v)_ν_v^2,0, ∞) ifγ_v=1 (0,1/κ(1+_v'_s^-1_v)_ν_v^2,-∞,0) ifγ_v=0.Since +_s^-1' doesn't contain any parameters, it can be pre-calculated, then ν_v^2=1+_v'_s^-1_v is its v^th diagonal element. This will accelerate the computation.§.§ Full conditional distribution ofThe full conditional distribution ofis:π(|, κ)=((_κ s+')^-1', (_κ s+')^-1).Similar to how we deal with κ for η_v, this distribution becomes:π(|, κ)=(1/κ(_s+')^-1__s^-1', 1/κ(_s+')^-1__s^-1),where _s^-1=(_s+')^-1 can be pre-calculated to accelerate the computation.§.§ Full conditional distribution of κ We assume η_1, ..., η_V are conditionally independent when given κ, thus:π(|κ) = ∏_v=1^Vπ(η_v|κ)= [(1/κ)^-V/2∏_v=1^V(1+_v'_s^-1_v)^-1/2] exp{-∑_v=1^Vη_v^2/2·1/κ·(1+_v'_s^-1_v)} ∝κ^V/2· exp{-κ·1/2·∑_v=1^Vη_v^2/(1+_v'_s^-1_v)}.Therefore, the full conditional distribution of κ is:π(κ|) ∝π(|κ)π(κ)∝κ^V/2· exp{-κ·1/2·∑_v=1^Vη_v^2/(1+_v'_s^-1_v)}·κ^1/2-1· exp{-κ/2000}=κ^V+1/2-1 exp{-κ[1/2(∑_v=1^Vη_v^2/(1+_v'_s^-1_v))+1/2000]}.That is:π(κ|)= amma(a=V+1/2, b=[1/2(∑_v=1^Vη_v^2/(1+_v'_s^-1_v))+1/2000]^-1)= amma(a=V+1/2, b=[1/2(η_1^2/ν_1^2+⋯+η_V^2/ν_V^2)+1/2000]^-1),where b is the scale, and the details for ν_v^2 are in the full conditional distribution of η_v. § MORE ESTIMATIONS BY THE CV-SSGLMM MODEL The CV-sSGLMM model is applied to estimate the marginal posterior distributions from three distinct types of voxels (strongly active, moderately active, inactive) within an AR(1) dataset, as showcased in Figure <ref>. The bell-shaped distributions of β_Re and β_Im corroborate the theoretical derivation and affirm the reliable performance of the MCMC algorithm during the sampling process. The true and estimated time series from these three voxel are presented in Figure <ref>. The congruence between the generator using true parameters (in black) and that using estimated parameters (in red) is evident. Additionally, both sets of time series aptly capture the pattern of the simulated time series (in blue). This alignment serves as a further testament to the good estimation performance of our CV-sSGLMM model. The phase of voxels is also estimated by the CV-sSGLMM model, and the outcomes are displayed in Figure <ref>. 
Figure <ref>(a) presents the true phase map, simulated using a constant phase value of θ=π/4≈ 0.79 for active voxels. Figure <ref>(b) demonstrates that the CV-sSGLMM model effectively estimates this phase map via θ_v=arctan(β_Im^v/β_Re^v).
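To illustrate the phase recovery just described, here is a self-contained sketch (again illustrative, not the authors' code): a single voxel is simulated under the constant-phase model with θ = π/4, and the magnitude and phase of the task effect are recovered from least-squares fits to the real and imaginary parts; the smooth stand-in regressor replaces the convolved BOLD response purely for brevity.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 200
x = np.sin(np.linspace(0.0, 5.0 * np.pi, T)) ** 2   # smooth stand-in for the BOLD regressor

# One active voxel under the constant-phase model with theta = pi/4.
beta0, beta1, theta, sigma = 0.4909, 0.04909, np.pi / 4, 0.04909
y_re = (beta0 + beta1 * x) * np.cos(theta) + rng.normal(0.0, sigma, T)
y_im = (beta0 + beta1 * x) * np.sin(theta) + rng.normal(0.0, sigma, T)

# Least-squares slopes of the centred real and imaginary parts on the centred regressor.
xc = x - x.mean()
b_re = xc @ (y_re - y_re.mean()) / (xc @ xc)
b_im = xc @ (y_im - y_im.mean()) / (xc @ xc)

# Recover the magnitude and phase of the task effect.
magnitude = np.hypot(b_re, b_im)    # should be close to beta1
phase = np.arctan2(b_im, b_re)      # should be close to pi/4
print(round(magnitude, 4), round(phase, 3))
```

Using the two-argument arctangent avoids quadrant ambiguity when the estimated β_Re^v is negative.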
http://arxiv.org/abs/2310.18536v1
{ "authors": [ "Zhengxin Wang", "Daniel B. Rowe", "Xinyi Li", "D. Andrew Brown" ], "categories": [ "stat.ME", "stat.AP" ], "primary_category": "stat.ME", "published": "20231027232551", "title": "Efficient Fully Bayesian Approach to Brain Activity Mapping with Complex-Valued fMRI Data" }
The Experimental Cavern North 3 (ECN3) is an underground experimental cavern on the CERN Prévessin site. ECN3 currently hosts the NA62 experiment, with a physics programme devoted to rare kaon decays and searches of hidden particles approved until Long Shutdown 3 (LS3). Several options are proposed on the longer term in order to make best use of the worldwide unique potential of the high-intensity/high-energy proton beam extracted from the Super Proton Synchrotron (SPS) in ECN3. The current status of their study by the CERN Physics Beyond Colliders (PBC) Study Group is presented, including considerations on beam requirements and upgrades, detector R&D and construction, schedules and cost, as well as physics potential within the CERN and worldwide landscape.Post-LS3 Experimental Options in ECN3 C. Ahdida^1, G. Arduini^*,1, K. Balazs^1, H. Bartosik^1, J. Bernhard^1, A. Boyarsky^2, J. Brod^3, M. Brugger^1, M. Calviani^1, A. Ceccucci^1, A. Crivellin^4,5, G. D'Ambrosio^6, G. De Lellis^6,7, B. Döbrich^8, M. Fraser^1, R. Franqueira Ximenes^1, A. Golutvin^9, M. Gonzalez Alonso^10, E. Goudzovski^11, J.-L. Grenard^1, J. Heeck^12, J. Jaeckel^*,13, R. Jacobsson^1, Y. Kadi^1, F. Kahlhoefer^#,+,14, F. Kling^15, M. Koval^16, G. Lanfranchi^+,17, C. Lazzeroni^11, F. Mahmoudi^1,18, D. Marzocca^19, K. Massri^1, M. Moulson^17, S. Neshatpour^6, J. Osborne^1, M. Pospelov^+,20,21, T. Prebibaj^1, T. R. Rabemananjara^22,23, Ch. Rembser^#,1, J. Rojo^22,23, A. Rozanov^#,24, G. Ruggiero^25, G. Rumolo^1, G. Schnell^&,26, M. Schott^27, Y. Soreq^28, T. Spadaro^17, C. Vallée^*,24, T. Zickler^1, J. Zupan^3.January 14, 2024 =================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================§ AFFILIATIONS ^1CERN, Geneva, Switzerland^2Institute Lorentz, Leiden University, Niels Bohrweg 2, Leiden, NL-2333 CA, the Netherlands^3Department of Physics, University of Cincinnati, Cincinnati, Ohio 45221,USA^4Physik-Institut, Universität Zürich, Winterthurerstrasse 190, CH–8057 Zürich, Switzerland^5Paul Scherrer Institut, CH–5232 Villigen PSI, Switzerland^6INFN-Sezione di Napoli, Complesso Universitario di Monte S. 
Angelo, Via Cintia Edificio 6, 80126 Napoli, Italy^7Universita` degli Studi di Napoli Federico II, I-80126 Napoli, Italy^8Max-Planck-Institutfür Physik (Werner-Heisenberg-Institut), Föhringer Ring 6, 80805 München, Germany^9Blackett Laboratory, Imperial College London, Prince Consort Road, London, SW7 2AZ, UK^10Department de Física Teòrica IFIC, Universitat de València-CSIC, Parc Científic, Paterna 46980, Valencia, Spain^11School of Physics and Astronomy, University of Birmingham, Edgbaston, Birmingham, B15 2TT, United Kingdom^12Department of Physics, University of Virginia, Charlottesville, Virginia 22904-4714, USA^13Institut für Theoretische Physik, Universität Heidelberg, Philosophenweg 16, 69120 Heidelberg, Germany^14Institute for Theoretical Particle Physics (TTP), Karlsruhe Institute of Technology (KIT), D-76131 Karls- ruhe, Germany^15Deutsches Elektronen-Synchrotron DESY, Notkestr. 85, 22607 Hamburg, Germany^16Charles University, Prague, Czech Republic^17INFN Laboratori Nazionali di Frascati, Frascati (Rome), Italy^18Université de Lyon, Université Claude Bernard Lyon 1, CNRS/IN2P3, Institut de Physique des 2 Infinis de Lyon, UMR 5822, F-69622, Villeurbanne, France ^19INFN, Sezione di Trieste, SISSA, Via Bonomea 265, 34136, Trieste, Italy^20 William I. Fine Theoretical Physics Institute, School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455, USA^21School of Physics and Astronomy, University of Minnesota, Minneapolis, MN 55455, USA^22Nikhef Theory Group, Science Park 105, 1098 XG Amsterdam, The Netherlands^23Physics and Astronomy, Vrije Universiteit Amsterdam, NL-1081 HV Amsterdam, The Netherlands^24Aix Marseille Univ, CNRS/IN2P3, CPPM, Marseille, France^25Faculty of Science and Technology, University of Lancaster, Lancaster, United Kingdom^26Department of Physics & EHU Quantum Center, University of the Basque Country UPV/EHU, 48080 Bilbao and IKERBASQUE, 48009 Bilbao, Spain^27Johannes Gutenberg-Universität Mainz, 55128 Mainz, Germany^28Technion—Israel Institute of Technology, Haifa 32000, Israel^*Physics Beyond Collider (PBC) coordinator^#PBC Beyond the Standard Model (BSM) Working Group convenor^+PBC Feebly Interacting Particle Physics Centre (FPC) convenor^&PBC QCD Working Group convenor§ PBC WORKING GROUP CONTRIBUTORSThe studies benefited from contributions of the PBC physics working groups (https://pbc.web.cern.ch/bsmBSM Working Group, https://pbc.web.cern.ch/fpc-mandateFIPs Physics Centre andhttps://pbc.web.cern.ch/qcdQCD Working Group) and of several PBC accelerator working groups including a dedicated ECN3 Beam Delivery Task Force (ECN3-TF) <cit.>.The contributing members of the accelerator working groups are listed below with the names of the conveners underlined. 0pt Accelerator complex capabilities:H. Bartosik, T. Prebibaj, G. Rumolo.0ptBeam Dump Facility:O. Aberle, C. Ahdida, P. Arrutia, K. Balazs, M. Calviani, Y. Dutheil, L.S. Esposito, R. Franqueira Ximenes, M. Fraser, F. Galleazzi, S. Gilardoni, J.-L. Grenard, T. Griesemer, R. Jacobsson, V. Kain, L. Krzempek, D. Lafarge, S. Marsh, J.M. Martin Ruiz, G. Mazzola, R.F. Mena Andrade, Y. Muttoni, A. Navascues Cornago, P. Ninin, J. Osborne, R. Ramjiawan, F. Sanchez Galan, P. Santos Diaz, F. Velotti, H. Vincke, P. Vojtyla.0ptConventional Beams ECN3:C. Ahdida, D. Banerjee, A. Baratto Roldan, J. Bernhard, M. Brugger F. Butin, A. Ceccucci, N. Charitonidis, L.A. Dyks, L. Gatignon, J.-L. Grenard, Y. Kadi, L. Krzempek, G. Lanfranchi, C. Lazzeroni, K. Massri, M. Moulson. L.J. Nevay, E. Nowak, E.G. Parozzi, M. 
Van Dijk. 0ptECN3 Beam Delivery Task Force: M. Brugger, C. Ahdida, J. Bernhard, M. Calviani, Y. Dutheil, L.A. Dyks, L. S. Esposito, R. Folch, R. Franqueira Ximenes, M. Fraser, J.-L. Grenard, Y. Kadi, E. Nowak, R. Ramjiawan, F. Sanchez-Galan, P. Schwarz, M. van Dijk, F. Velotti, C. Vendeuvre, H. Vincke, T. Zickler.§ EXECUTIVE SUMMARY tocsectionExecutive summary The PBC study group has supported the preparation of the proposals for future experiments in the CERN SPS North Area ECN3 experimental cavern beyond the currently approved programme, including their implementation and physics potential within the worldwide landscape. §.§ Context There is strong and growing evidence from both particle physics and astrophysical observations for the existence of physics Beyond the Standard Model (BSM). Yet, so far it has evaded direct discovery in high energy colliders.This calls for novel experiments increasing the scope to search for new, low mass Feebly Interacting Particles (FIPs) as well as to indirectly probe the multi-TeV domain beyond direct LHC reach. High precision and high intensity are crucial tools in this endeavour. In this context the CERN SPS complex provides a worldwide unique combination of high energy beams up to 400 GeV, high intensity and high duty cycle.At CERN, completion of the CNGS neutrino beam program in 2012, together with injector upgrades performed for HL-LHC, leaves room for a new high-intensity facility as regards proton yield. The best opportunity for such an implementation is the ECN3 underground experimental hall in the SPS North Area (NA), which was initially designed for high-intensity beams and currently hosts the NA62 experiment. NA62 data taking interleaves K^+ beam for K^+ rare decays measurements with Beam Dump (BD) mode for FIP searches. The program is approved until the LS3 shutdown scheduled from 2026 to 2028, and foresees to collect integrated intensities of ≈10^19 PoT and ≈10^18 PoT in the two modes, respectively.Two main options are in competition in ECN3 beyond LS3. HIKE/SHADOWS combines an upgrade of NA62, HIKE, to perform higher precision measurements of rare kaon decays in two consecutive phases respectively devoted to K^+ and K^0 beams, with the possibility to take data in BD mode by closing a collimator, as is done by NA62, to look for FIPs. In the BD mode HIKE would be complemented by an off-axis detector, SHADOWS, to extend the acceptance at higher FIP masses and perform neutrino measurements. A possible longer term third phase optimized for the ultra-rare decay K^0→π^0νν̅ is not part of the current HIKE proposal and has not been considered in this study. Alternatively, BDF/SHiP is the implementation in ECN3 of the SHiP detector and the associated Beam Dump Facility (BDF). The latter was initially proposed as a new underground complex, and can be realized in ECN3 with a significant cost reduction. BDF/SHiP is designed as a state-of-the-art Beam Dump experiment with a dual spectrometer for searches of FIPs and neutrino measurements. It has been slightly downsized as compared to the former proposal to fit the ECN3 experimental hall, and brought closer to the proton beam dump to preserve the initial acceptance. §.§ Beam and infrastructure upgrades HIKE/SHADOWS (resp. BDF/SHiP) request ≥ 4.5 (resp. ≥ 1)-long proton spills with integrated intensities of up to 1.2 (resp. 4.0) × 10^19 PoT/year.New SPS operation modes have been designed to fulfill these needs in ECN3. They are compatible with thedelivery of more than 0.4 × 10^19 (resp. 
0.6 × 10^19) PoT/year to the other SPS experimental areas for the HIKE/SHADOWS (resp. BDF/SHiP) scenario, which is comparable with the PoT delivered in recent years. The optimal operation mode for such high-intensity was found to consist in dedicated ECN3 spills which are directly transferred from the SPS slow extraction area to the target serving ECN3, and are characterized by significantly lower transfer losses as compared to the present operation mode.The required proton beam line upgrades are the same for the two experimental options. They benefit from the already funded NA consolidation program, to which they add an extra material cost estimated to 14 MCHF with an uncertainty from 30 to 50 %. The target serving ECN3 has to be fully rebuilt for both BDF/SHiP and HIKE/SHADOWS in order to stand the higher intensity and harsher radiation environment. The total target- and infrastructure-related costs are estimated to be in the range of 50 MCHF and similar for BDF/SHiP and HIKE/SHADOWS, though the design of the HIKE Phase 2 beamline is still ongoing including radiation protection and integration studies. The overall uncertainty for the cost estimate ranges from 30 to 50%. §.§ Experimental detectors The three considered detectors have similar global layouts consisting in a very low-pressure decay vessel followed by a spectrometer, with subdetector technologies adapted to the different operational constraints of the kaon and BD modes. In addition, SHiP and SHADOWS plan to host a small fine-grained dense detector with emulsions for FIP indirect detection and neutrino measurements. The HIKE detector will keep the NA62 structure and components with upgrades for each HIKE phase. Phase 1 primarily aims at a better timing resolution to stand the higher data taking rates, and at a better radiation hardness. This can benefit from HL-LHC-oriented R&D (e.g. for silicon trackers) to match the stringent requirements. Phase 2 will adapt to the K^0 decay modes and associated background by removing some subdetectors and re-arranging others. The SHiP and SHADOWS detectors use well-established technologies with, however, harsher irradiation conditions for SHADOWS. Critical components of all projects are the magnet systems, especially those aimed to sweep the muon background out in BD mode. Final magnet designs will have to compromise between cost, electricity consumption, construction schedule and ability to achieve the very low background required by the experiments. The total material costs of the detectors are estimated to 27 MCHF (HIKE phase 1&2 upgrades), 12 M€ (SHADOWS) and 51 MCHF (SHiP), with uncertainties ranging from 10 to 30%.§.§ Construction and operation schedules The preliminary beam upgrade schedule would allow the modifications required upstream of the target serving ECN3 to be implemented before the end of LS3 provided a timely decision. Because of resource competition with HL-LHC accelerator and detector upgrades, the ECN3-specific upgrades will extend beyond LS3 by at least one year.This decoupling would allow other NA users to restart operation after LS3 while ECN3 components are installed and commissioned during Run 4. The three detector construction schedules are feasible but tight, especially for components still under R&D, and will require timely decisions on subdetector options and funding to match the beam upgrades schedule.Indicative operation schedules of BDF/SHiP and HIKE/SHADOWS options have been sketched. 
They span over more than 15 years of nominal data taking extending to the second half of the 2040s and they assume operation of the North Area will follow a pattern similar to the present one after the HL-LHC shutdown. HIKE/SHADOWS operation foresees 9 years shared between K^+ and BD mode and 6 years devoted to K^0 mode (BD operation in this mode is still under evaluation). The corresponding integrated intensities used as references to quantify the physics reach are: 6 × 10^20 PoT for BDF/SHiP; 5 × 10^19 PoT for HIKE/SHADOWS BD mode; 3.6 × 10^19 and 7.2 × 10^19 PoT for HIKE Phase 1 and Phase 2, respectively. §.§ Physics reach in worldwide landscape The main physics goals of the proposed projects include precision kaon physics, which is specific to HIKE, new neutrino measurements by SHiP and SHADOWS, and searches for FIPs by all three experiments. All planned measurements are based on rare processes and therefore highly sensitive to background. The dominant backgrounds which may affect the signals are random coincidences and DIS interactions of muons and neutrinos issued from the target area, as well as, for rare K decays, contamination from the dominant K-decay channels. They were estimated for all projects with state-of-the-art detailed simulation tools and taking into account detector resolution. In addition, HIKE Phase 1 and SHADOWS benefit from extrapolations of NA62 real data in K^+ and BD modes, and SHiP has performed dedicated beam tests of muon production in a BDF target replica. The present results indicate that the most dangerous backgrounds will be kept under control for the targeted reference integrated intensities. The background estimations however require consolidation, especially for the K^0 beamline which is still under design. In case unexpected backgrounds show up in first real data, the long lifetime of the experiments should allow for detectors to be upgraded in order to mitigate them and ensure that backgrounds will in-fine not be the limiting factor of the measurements.Kaon precision physics would extend the approved NA62 program with a K^+ integrated intensity higher by a factor ≈ 4 and a novel study of rare K^0 decays. This gives access to hypothetical BSM high-mass states beyond the range directly accessible at the LHC, and to insights into the CKM matrix unitarity and Lepton Flavour Universality (LFU). Quantification of the agreement to the SM within BSM effective theories confirms the complementarity with B physics. K^+ precision physics at CERN is unique worldwide, and HIKE Phase 2 addresses K^0channels that are complementary to the K^0→π^0νν̅ mode addressed in priority by KOTO at JPARC.The SPS 400 GeV proton beam gives a worldwide unique possibility to efficiently search for FIPs in the MeV–GeV range up to the b quark mass. HIKE has sensitivity to low-mass FIPs in the forward direction in BD mode, and (uniquely) to very-low mass FIPs from rare decays in Kaon mode. The addition of SHADOWS off-axis in BD mode extends the sensitivity to high-mass states, so that the HIKE/SHADOWS combination would significantly extend the exploration of FIPs within the worldwide landscape. The BDF/SHiP configuration is fully optimized for FIP searches in BD mode by providing sensitivity to low-mass FIPs produced forward, high-mass FIPs decaying at large angle, and scattering of invisible FIPs. 
It would provide ultimate sensitivity in the full mass range reachable at the SPS energy, in most cases beyond what would be achievable at CERN by the proposed LHC Forward Physics Facility (FPF) and large-angle FIP projects, as well as at FNAL by the DarkQuest Collaboration on the 120 GeV Main Injector. The highlight of the neutrino studies planned by SHiP and SHADOWS would be the first quantitative measurement of τ neutrino and anti-neutrino interactions. SHADOWS may, however, be limited in ν_τ statistics due to the lower neutrino flux in the off-axis position of its detector and its lower integrated intensity. SHiP on the other hand plans to measure several thousand ν_τ and ν̅_τ interactions, a sample which may be limited by systematic uncertainties rather than statistics. More studies are needed to quantify the projects' fundamental reach with neutrinos. Similar measurements are planned at the FPF, though with somewhat lower statistics than SHiP and in a different, complementary energy range. All in all, a future high-intensity facility in ECN3 will have a unique impact on the worldwide landscape of the next decades. The physics criteria to select the experimental program will depend on the relative weights given to improvements in precision kaon physics, ultimate exploration of the FIP territory in the SPS energy range, and novel neutrino measurements.
§ INTRODUCTION
The PBC Study Group was initially mandated by the CERN Management to prepare input to the European Particle Physics Strategy Update (EPPSU) for CERN projects other than high-energy frontier colliders. Following the EPPSU process, the PBC Study Group was confirmed on a permanent basis with an updated mandate <cit.> taking into account the strategy recommendations. The Study Group is now in charge of supporting the proponents of new ideas to address the technical issues and physics motivation of the projects ahead of their external review by the CERN Scientific Committees and decision by the Management. The European Particle Physics Strategy Update has highlighted that the quest for dark matter and the exploration of flavour and fundamental symmetries are crucial components of the search for new physics, and it has reaffirmed the importance of a diverse programme that is complementary to the energy frontier <cit.>. The SPS North Experimental Area (NA) is one of the major experimental facilities available at CERN and is at the very heart of many present and proposed explorations of Beyond the Standard Model (BSM) physics. The area is presently undergoing an extensive consolidation campaign, with major activities planned during the forthcoming LS3 (currently scheduled from 2026 to 2028) and the following LS4 under the NA Consolidation (NA-CONS) Project. ECN3 is an underground cavern in the North Area suited for experiments requiring high intensity. ECN3 currently hosts the NA62 experiment <cit.> with an approved programme until LS3. The following experimental proposals to be hosted in TCC8/ECN3 [TCC8 is the Target Chamber Cavern upstream of ECN3.] have been studied within PBC: * HIKE (High Intensity Kaon Experiment), proposing an extension of the current NA62 programme with charged kaons at higher intensity in a first phase and neutral kaons in a second phase. HIKE proposes phases 1 (K^+) and 2 (K^0) for approval in 2023.
This programme will be complemented by the search for visible decays of Feebly-Interacting Particles (FIP) in Beam Dump (BD) mode on-axis <cit.>;* SHADOWS (Search for Hidden And Dark Objects With the SPS), to search for visible decays of FIPs and perform neutrino measurements by operating off-axis in parallel to HIKE BD mode <cit.>;* BDF (Beam Dump Facility) and the associated SHiP (Search for Hidden Particles) experiment, to search generically for Hidden Sector particles <cit.> through both scattering and decay signatures. The detector system for scattering signatures is also suited for neutrino interaction physics, in particular exploring the tau neutrino. Decisions should be taken well ahead of LS3 for a timely implementation of the chosen options and to profit from the potential synergies with the NA-CONS Project. The present document is intended as input to the recommendations of the SPS and PS Experiments Committee (SPSC) and to the decision by the CERN Management. After a short reminder of the current ECN3 hall set-up and beam characteristics (Section <ref>), the main technical aspects and physics motivations of the future options are presented (Section <ref>). The technical issues of beam production, operation mode and detector integration are summarized in Sections <ref> and <ref>. Technically-driven schedules and first cost estimates are given in Section <ref>. Finally, Section <ref> presents the physics reach of the various options within the CERN and worldwide physics landscape.
§ CURRENT STATUS
§.§ The North Experimental Area
The NA (Figure <ref>) is located on the CERN Prévessin site. The three beryllium targets T2, T4 and T6 in the TCC2 Target Hall (see Figure <ref>) are served by slow-extracted beams from the SPS via a dedicated transfer line (TT20). The NA comprises two surface halls <cit.>, EHN1 and EHN2, and an underground cavern, ECN3. EHN1 is the biggest surface hall at CERN and houses the H2, H4, H6, and H8 beamlines. The T2 target feeds the H2 and H4 beamlines, which are normally operated as versatile secondary or tertiary beams but may occasionally be configured as attenuated primary beams. The H4 beam is a particularly clean electron beam but can also serve its users with high-quality hadron and muon beams. The H6 and H8 beamlines are fed by secondary particles produced in the T4 target. These are versatile hadron and electron beams that can also provide low- or medium-intensity muon beams. The EHN1 beamlines are used for test-beam activities and currently host two physics experiments: the NA61 experiment <cit.> on the H2 beamline has a rich and varied physics programme with hadron and ion beams, and the NA64 experiment <cit.> on H4 performs a competitive dark photon search with high-purity electron beams <cit.>. A future heavy-ion experiment, NA60+ <cit.>, is also in discussion for implementation on H8 <cit.>. EHN2 is served by the M2 beamline <cit.> from the T6 target. M2 provides a high-energy, high-intensity muon beam that is unique worldwide, and can also be operated as a high-intensity hadron beam. An option to operate it as a tertiary electron beam exists, but the rates are very low. EHN2 currently hosts the NA66/AMBER experiment <cit.> (successor of COMPASS), proposed to operate as a long-term QCD facility, and NA64μ <cit.>, with objectives similar to those of NA64 in H4, but employing muon beams.
M2 may also host MUonE <cit.> and other projects in the future.ECN3 is served by the K12 beamline derived from the T10 target: the primary protons not interacting in T4 are transported by the P42 beamline over almost 900 metres to the T10 beryllium target located in the Target Hall TCC8. T10 initiates the K12 beamline which delivers a high-intensity mixed secondary hadron beam at 75 GeV/c with a ≈ 6 % kaon component to the NA62 experiment <cit.> in ECN3.§.§ TCC8, ECN3 Experimental Cavern and the NA62 ExperimentThe current overall layout of the TCC8/ECN3 underground complex is shown in Figure <ref> together with the K12 beam and the main detector components of NA62. TCC8 is split in two parts by an over-pressure double "Blue Wall" aimed to separate the air volumes of the target and detector/beamline areas during operation. It is followed by the ECN3 experimental hall.The K12 mixed beam is produced by interaction of the primary protons with the 400 long beryllium T10 target and focused onto a pair of dump collimators (TAX for "Target Attenuator eXperimental areas") made of massive copper and steel blocks. The beamline optics, and in particular a set of four strong dipoles surrounding the TAX ("first achromat"), ensure a selection of secondary particles at a momentum of 75/c with a 1.1 % RMS momentum resolution. Off-momentum and neutral particles are directly dumped into the TAX and positrons are filtered out with the help of a thin tungsten converter. The ≈ 6 % kaon component of the selected mixed hadron beam is tagged by the KTAG Cherenkov detector. After collimation and cleaning stages, the beam passes a second set of dipoles ("second achromat") that has been equipped with the fast Silicon strip detectors of the NA62 750 GigaTracker (GTK), which measure the momentum, position and direction of each beam particle. A key component of the K12 beamline is the active muon sweeping system, to reduce the muon rate from hadron decays in the NA62 detector, consisting of several iron-filled dipole magnets and a toroid.The NA62 experiment can also be operated in beam-dump mode. In this case the T10 target is moved out of the beam remotely and the full beam is dumped on the TAX collimators including primary and secondary particles. The muon sweeping system is left activated but with a modified configuration.The NA62 experiment <cit.> is currently mainly focusing on the core of its baseline programme devoted to the K^+→π^+νν̅ ultra-rare decay. 20 candidate events have been observed before LS2, in agreement with the Standard Model (SM) expectation of 10 physics + 7 background events. The main goal of the approved programme until LS3 isto perform a 𝒪(15-20%) measurement of the K^+→π^+νν̅ branching ratio. Other rare K decays are being investigated in parallel.To perform the approved programme, NA62 has recently implemented detector upgrades allowing to operate at the nominal beam intensity of 3×10^12 protons per 4.8 long spill, with the goal to accumulate 10^19 Protons on Target (PoT) until LS3. 
The approved programme also includes several months of data taking in beam dump mode to search for hidden particles up to an integrated intensity of 10^18 PoT.A sample of about 1.4× 10^17 PoT has already been collected in dump mode, and confirms that the expected combinatorial background is under control.§.§ Current operation mode and limitationsThe number of protons that can be delivered to NA is primarily driven by the present performance of the SPS, which can accelerate more than 4×10^13 particles (protons) per pulse (ppp) at an energy of 400. In the present NA shared operation mode, more than 3.5×10^13 ppp can be routinely extracted from the SPS extraction Long Straight Section (LSS) 2, transported via the TT20 transfer line and distributed to the three NA targets in TCC2 by means of two consecutive magnetic beam splitters located in the TDC2 area (Figure <ref>) according to the user needs. The global transfer efficiency from SPS to the targets of 76 % corresponds to a total of ≈2.7×10^13 ppp impinging on the NA targets. Its measurement suffers from large uncertainties related to the calibration of the intensity monitors at the target stations and the above value should be considered as pessimistic.An SPS cycle includes a 400 GeV flat-top (FT), during which the slow extraction over 4.8 takes place, preceded by the injection plateau and acceleration ramp and followed by the magnet ramp down, for a total cycle duration of 10.8. The minimum repetition period is 14.4, limited by the maximum average power dissipation in the SPS main magnets (≈ 41) <cit.>.The SPS cycle that serves the NA is part of a global "supercycle" with cycles serving other CERN users. LHC injection cycles are present only a few hours per day in average, so that the supercycle duration is primarily defined by non-LHC user needs. The typical NA duty cycle (NA spill length over supercycle length) is ≈20 %. A typical number of ≈ 3000 spill/day can be assumed, taking into account an SPS availability for physics of approximately 80 %. For a typical 200 days of SPS operation within a year, the 2.7×10^13 ppp deliverable to the NA targets therefore corresponds to a maximum of 1.6×10^19 PoT/year. This estimate is based on the assumption that maximum intensity is reached from the start of the run and does not take into account running time with ions. Typically, both the accelerator and detectors require some time for the intensity ramp-up at the beginning of each yearly run.NA beam operation poses several radiation protection (RP) constraints that are already nowadays a challenge for operation and maintenance of accelerator components. Beside residual and prompt dose rate constraints, also activation of air, water and soil and radioactive waste production have to be considered.Due to the nature of the slow extraction process and the need to serve multiple target stations simultaneously, significant activation of components occurs in LSS2, the TDC2 splitter area and the TCC2 area hosting the target stations. These areas were designed and built in the 1970s when RP regulations were less restrictive in comparison to today. Interventions in these areas are challenging due to the very high dose rates of some of the components and the lack of extensive remote handling and manipulation features. As a consequence, significantly long cool-down times might be needed before interventions. 
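For reference, the shared-mode delivery figures quoted above follow from simple arithmetic; a minimal sketch is given below, where the ≈24 s supercycle length is inferred from the quoted ≈20 % duty cycle and is therefore an assumption rather than an operational parameter.

```python
# Back-of-the-envelope cross-check of the shared-mode delivery figures quoted above.
# The supercycle length is inferred from the ~20% duty cycle (an assumption);
# the other inputs are taken directly from the text.
spill_length_s = 4.8      # NA slow-extraction flat-top duration
duty_cycle     = 0.20     # typical NA duty cycle (spill length over supercycle length)
availability   = 0.80     # assumed SPS availability for physics
ppp_on_targets = 2.7e13   # protons per pulse impinging on the NA targets
operating_days = 200      # typical days of SPS proton operation per year

supercycle_s   = spill_length_s / duty_cycle              # ~24 s (inferred)
spills_per_day = 86400 / supercycle_s * availability      # close to the ~3000 spills/day quoted
pot_per_year   = ppp_on_targets * spills_per_day * operating_days

print(f"{spills_per_day:.0f} spills/day, {pot_per_year:.1e} PoT/year")
# -> about 2880 spills/day and ~1.6e19 PoT/year, consistent with the figures above
```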
In addition to these constraints, radiation damage to cables, considering the typical frequency of recabling campaigns, limits the annual integrated intensity delivered to the NA in shared mode to ≈ 1×10^19 PoT <cit.>, unless beam loss reduction measures are put in place. The T2, T4, T6 and T10 NA targets have been partially renovated during LS1, and are designed to withstand the slow extraction (typical spill length of 4.8 s) of a maximum proton intensity of at least 1.5×10^13 ppp with a repetition period of 14.4 s <cit.>. Operational experience with the T6 TAX indicates that similar conditions are acceptable for the downstream TAXs (deformation of the TAX holes or risk of local melting could occur for higher peak or average power deposition). The current nominal intensity of the P42 line is 5 times lower and amounts to 3.3×10^12 ppp. The corresponding intensity of the selected 75 GeV/c K12 mixed beam is 2×10^9 ppp, for a spill duration of 4.8 s, to match the specifications of the NA62 GTK. The operation of the K12 TAX in beam dump mode at the present intensity already puts the materials of the TAX blocks close to or beyond their operational limits <cit.>. Operation at significantly higher intensities would therefore require a redesign of the overall target systems located in TCC8, including the T10 target and the K12 TAX <cit.>. In addition to high residual dose rates, prompt beam losses may also cause elevated dose rates in the areas of the NA that are accessible during beam operation. Recent studies <cit.> have identified two critical locations above the P42 line where the current ECN3 beam operation provokes elevated prompt radiation levels close to or even exceeding the classification limit of the given area: * a ramp on the Salève side of EHN1 where only ≈ 1.2 m of soil is present between the P42 line and the ramp, in the following referred to as the EHN1 ramp;* a bridge over a watercourse flowing above a section of the P42 line where only ≈1.2 m of soil is present, in the following referred to as the ECN3 bridge. Tiny losses in the beamline elements below the EHN1 ramp can produce the observed prompt radiation fields <cit.> for the present-day beam parameters, and therefore require a series of mitigation measures that have already been, or are currently being, implemented <cit.>. P42 has an uninterrupted vacuum sector that spans from the T4 XTAX to the T10 target. Historically, the vacuum in P42 was achieved by means of turbomolecular pumps; however, as part of the preparation of the NA62 experiment and for financial reasons, these were moved to K12 and replaced by rotary pumps. The resulting pressure is now limited to 10^-3 mbar and is deemed adequate for proton transport today, but it contributes to distributed losses and prompt radiation as the vacuum level degrades due to ageing of the vacuum equipment. Access to ECN3 is not possible during beam operation, but can proceed immediately after beam stop downstream of the Blue Wall between the TCC8 and ECN3 caverns. In the TCC8 target area upstream of the Blue Wall, a cool-down period of 30 minutes followed by an air flush of 90 minutes is needed after beam stop and before access.
§ POST-LS3 EXPERIMENTAL PROPOSALS
§.§ Overview of possible operational scenarios
From the existing experimental proposals, two possible operational scenarios can be envisaged:* An extension of kaon physics and hidden-sector exploration at higher intensity combining the HIKE <cit.> and SHADOWS <cit.> projects. The continuation of high-intensity kaon experiments at CERN with HIKE provides a flavour probe into BSM physics.
HIKE phase 1 would include an upgrade of the K^+ beam intensity, ultimately by a factor 4 (requiring up to 1.2×10^13 ppp on the T10 target over ≥ 4.5), together with corresponding improvements of detector performances. During HIKE phase 2, a high intensity K^0 beam would be produced by up to 2×10^13 ppp on the T10 target and the detector configuration changed for K^0 decays, still keeping tracking devices, with the main goal of observing for the first time the ultra-rare decay K^0_L→π^0 l^+ l^- and performing a wide-range exploration of K^0_L decays. Operation in BD mode at 2×10^13 ppp on the T10 TAX would alternate with kaon beam runs during HIKE phase-1. In order to maximize the reach of this extended BD operation, the SHADOWS decay spectrometer is proposed to be built off-axis downstream of the T10 TAX and to be operated in parallel to HIKE during BD runs.* Hidden Sector exploration with the implementation in ECN3 of the proposed SHiP detector and the associated Beam Dump Facility <cit.>, formerly proposed on a new dedicated site in Prévessin <cit.>. Following the EPPSU recommendations, the BDF proposal has been further optimized and other possible locations have been considered and compared, identifying ECN3 as the most suitable and cost-effective option <cit.>. Fitting the SHiP detector within ECN3 requires a resizing of the detector components, and a shortening of the distance to the beam dump in order to preserve the signal acceptance. SHiP is proposed as a state-of-the art dual spectrometer, able to measure hypothetical hidden particles, both through their scattering in an instrumented high-density interaction target, and through their decays in a large acceptance decay spectrometer. The BDF implementation in ECN3 would correspond to a further increase of the proton beam intensity to 4×10^13 ppp over ≥ 1.0.The experimental requirements are summarized in Table <ref> and the corresponding SPS/NA operation modes and proton sharing are discussed in Section <ref>.An indicative schedule based on the presently available long-term CERN Accelerator Complex schedule up to the end of High Luminosity-LHC (HL-LHC) <cit.>, consistent with the above described operational scenarios and compatible with the requirements summarized in Table <ref>, is shown in Figure <ref>. The above schedule assumes: * ECN3 nominal operation starting in 2031;* equal sharing of the operation time between K^+ and BD mode during the first 8 years of nominal operation for the HIKE/SHADOWS scenario;* same calendar time span for both scenarios. For both operational scenarios the experimental programme extends to the second half of the 2040s, well beyond the HL-LHC operation. It is assumed that the operation of the North Area will follow a pattern similar to the present one also after the end of HL-LHC. The distribution and duration of LSs might change the experiments calendar-year duration. In the following, the experimental sensitivities of the projects are quantitatively estimated for the total numbers of PoT given in Table <ref> in compliance with the indicative schedule ofFigure <ref>. 
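To put the requested intensities and the reference integrated intensities quoted earlier into perspective, the sketch below derives the implied average annual delivery; it assumes uniform delivery over the nominal data-taking years, which is an idealisation of the indicative schedule rather than a statement of the actual year-by-year profile.

```python
# Illustrative only: implied average annual PoT and dedicated-spill counts for two of
# the reference integrated intensities quoted earlier, assuming uniform delivery over
# the nominal data-taking years (an assumption) and that each dedicated ECN3 spill
# delivers its full ppp to the T10 target.
scenarios = {
    # name: (reference total PoT, nominal years, requested protons per dedicated spill)
    "BDF/SHiP":          (6.0e20, 15, 4e13),
    "HIKE Phase 2 (K0)": (7.2e19,  6, 2e13),
}
for name, (total_pot, years, ppp) in scenarios.items():
    pot_per_year    = total_pot / years
    spills_per_year = pot_per_year / ppp
    print(f"{name}: {pot_per_year:.1e} PoT/year, ~{spills_per_year:.1e} dedicated spills/year")
# -> BDF/SHiP: 4.0e19 PoT/year (~1e6 dedicated spills/year)
#    HIKE Phase 2 (K0): 1.2e19 PoT/year (~6e5 dedicated spills/year)
```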
§.§ HIKE
§.§.§ Physics case
The continuation of high-intensity kaon experiments at CERN with HIKE provides a unique probe into BSM physics that can reach mass scales of 𝒪(100) TeV and gives access to a different, and in some cases higher, sensitivity to new physics than the B and D meson sectors (see Section <ref>). The primary goal of HIKE is to improve the accuracy of the kaon rare decay measurements, in order to match and possibly challenge the theory precision, to study and measure for the first time channels not yet observed, and to search with unprecedented sensitivity for kaon decays forbidden by the SM. HIKE can also address BD physics in a complementary mass range and phase space to other existing and planned experiments (see Section <ref>). A summary of the HIKE sensitivity reach in the flavour sector is reported in Table <ref>. Sensitivity projections in BD mode are produced assuming 5× 10^19 PoT. Operation at 2× 10^13 ppp for 4.8 s spills is assumed (although HIKE could accept a somewhat higher beam intensity when running in BD mode). Parallel operation of SHADOWS with HIKE in BD mode increases acceptance at large angle and improves searches for large-mass hidden particles such as Heavy Neutral Leptons (HNLs), light Dark Scalars and Axion-Like Particles (ALPs) with respect to HIKE alone.
§.§.§ Experiment description
The HIKE programme <cit.> uses shared detectors and infrastructure to address flavour physics with both charged and neutral kaon beams: a charged kaon phase and a neutral kaon phase with tracking are put forward for SPSC review in 2023. The setup and detectors in the charged kaon phase (Phase 1) will be optimised for the measurement to 𝒪(5%) precision of the branching ratio of K^+ →π^+ νν̅. While the conceptual layout is based on the successful one of NA62, new detectors will replace those of NA62 with the goal of improving the performance and sustaining higher rates; prime examples are the beam tracker and the tracking spectrometer. The detector configuration for Phase 1 is illustrated in Figure <ref>. Thanks to the relatively compact detector, the neutral beam plus tracking phase (Phase 2) allows for a 90 m long fiducial decay volume to be accommodated in the present ECN3 experimental hall, with no major civil engineering work. This phase will use an experimental setup with minimal modifications with respect to the charged kaon phase, but important modifications will have to be implemented in the K12 beamline. The beam tracker and the kaon- and pion-identification detectors will be removed; the main tracking spectrometer will be shortened, and the central holes of the chambers will be realigned on the neutral beam axis; the Large Angle Veto detectors will be moved and possibly reduced in number, and the small angle calorimeters will be moved. The detector configuration for Phase 2 is illustrated in Figure <ref>. Many of the same requirements arise in the design of the electromagnetic calorimeter (ECAL) for the K^+ and K_L phases. A design for a fast calorimeter with excellent photon detection efficiency and energy resolution to be used in all phases of the HIKE programme is therefore preferable and is chosen as the baseline. However, the LKr calorimeter remains a valuable option for the early commissioning and data taking phases. The foreseen evolution of the detector configuration is summarized in Table <ref>.
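As an illustration of the decay kinematics underlying the layouts above, the sketch below estimates the fraction of 75 GeV/c charged kaons decaying within a fiducial region. The kaon mass and proper decay length are standard values; the 60 m fiducial length is an assumed, purely illustrative number (the 90 m figure quoted above refers to the Phase 2 neutral-kaon volume).

```python
import math

# Illustrative estimate of the charged-kaon decay fraction in a fiducial region.
# The 75 GeV/c beam momentum is taken from the text; the kaon mass and proper decay
# length are standard values; the 60 m fiducial length is an assumption.
p_beam_GeV   = 75.0      # K+ momentum selected by the K12 beamline
m_kaon_GeV   = 0.4937    # charged-kaon mass
ctau_m       = 3.71      # charged-kaon proper decay length (c * tau)
L_fiducial_m = 60.0      # assumed fiducial length, for illustration only

decay_length_m = (p_beam_GeV / m_kaon_GeV) * ctau_m            # ~560 m in the lab frame
frac_decaying  = 1.0 - math.exp(-L_fiducial_m / decay_length_m)

print(f"lab-frame decay length ~{decay_length_m:.0f} m, "
      f"fraction decaying in {L_fiducial_m:.0f} m: {frac_decaying:.1%}")
# -> ~560 m and ~10%: only of order one beam kaon in ten decays inside the fiducial
#    region, which is one reason such high beam rates are required.
```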
The efficiency of the trigger and data acquisition system of NA62 is affected by increasing intensity; besides, a hardware-triggered approach as currently used by NA62 is intrinsically prone to limitations. For these reasons, a trigger-less approach is foreseen for HIKE, where data filtering is performed mostly at the high-level-trigger level. HIKE BD operation will build upon the experience accumulated in NA62 with BD data taking, where the proton beam is made to interact in the T10 TAX. A similar procedure will be possible in HIKE, which will be able to switch between kaon and dump mode during an 8-hour SPS Machine Development (MD) time slot. The reach for the various channels assumes 2× 10^13 kaon decays in the decay volume per year for the K^+ beam and 3.8× 10^13 kaon decays in the decay volume per year for the K^0_L beam plus tracking. A uniformly distributed intensity over the ≥4.5 s spill is essential in all phases, to optimally collect high statistics while effectively managing detector rates and spurious intensity effects.
§.§.§ Present status, required R&D
Details of the specific technologies envisaged for detectors and readout systems can be found in <cit.>. In brief, the state-of-the-art technologies considered to push the time resolution, granularity and rate performance are: * Beam tracker, based on the TimeSpot sensor and Application Specific Integrated Circuit (ASIC) technology, or new monolithic silicon sensors. Sensors with the desired performances exist already. Related ASICs are being developed, in synergy with other high-energy experiments happening on a similar timescale.* Main tracker based on ultra-thin straws. A prototype is already being developed.* Electromagnetic (EM) calorimeter, a fine-sampling shashlyk based on the PANDA (antiProton ANnihilation at DArmstadt) forward EM calorimeter.* Small-angle EM calorimeter based on a compact Cherenkov calorimeter with oriented high-Z crystals. Test beam results already indicate feasibility.* Photon detectors for the kaon and pion identification detectors, based on Micro-Channel Plate Photo-Multipliers (MCP-PMTs). These devices already satisfy the requirements but are susceptible to aging. Aging tests with modified Atomic Layer Deposition (ALD) prototypes are ongoing.* Large-angle photon vetoes, based on lead/scintillator tiles with Wavelength Shifting (WLS) readout by Silicon Photo-Multipliers (SiPMs). The technology is well established.* Hadron calorimeter, based on a high-granularity sampling calorimeter.* Timing planes and charged particle vetoes, based on scintillating tiles read out by SiPMs.* Veto counter based on Scintillating Fibre (SciFi) technology, similar to that used in LHCb. In summary, all the mentioned technologies are established, at least as proof of principle, and several are synergetic with detector developments for High-Luminosity LHC (HL-LHC) experiments.
If no signal is found, SHADOWS will push the limits on their couplings with SM particles by up to two orders of magnitude, depending on the model and scenario, opening new directions in model building. SHADOWS is meant to expand HIKE's capability to search for FIPs from kaon decays and in BD mode, by enhancing the sensitivity to FIPs coming from charm and beauty hadron decays. The combined SHADOWS+HIKE system can span the still uncovered parameter space of many well-motivated FIP models, below and above the kaon mass, with a competitive sensitivity in the international landscape. Since theoretically there is no uniquely preferred mass range for FIPs, the capability of spanning below and above the kaon mass, ranging from a few MeV up to the b mass, is paramount. The off-axis position allows SHADOWS to be less impacted by backgrounds (especially neutrinos) than an on-axis setup, and to be placed close to the FIP production point. With the NaNu subdetector, SHADOWS also aims to study neutrino physics (in particular τ neutrinos) in a phase space complementary to the one explored by the SND and FASER experiments, currently running at the LHC. The capability of the NaNu subdetector to search for light DM is currently being studied.
§.§.§ Experiment description
The SHADOWS detector <cit.> requirements are defined by the characteristics of FIPs produced in the interactions of the 400 GeV/c proton beam with a dump. At these energies, FIPs with masses above the kaon mass are mostly produced in the decays of charmed and beauty hadrons and in proton bremsstrahlung and/or the Primakoff effect occurring in the dump. At the SPS centre-of-mass energy (√(s)≈ 28 GeV) the heavy hadrons are produced with a relatively small boost, so that FIPs emerging from their decays have a large polar angle and can be detected by an off-axis detector. The distance of the detector with respect to the impinging point of the proton beam onto the dump is a compromise between the maximisation of the FIP flux in acceptance (which requires short distances) and the maximisation of the probability that the FIP decays before reaching the detector (which requires long distances). The optimal distance varies as a function of the FIP model and benchmark. The current compromise, also taking into account beam background and irradiation, corresponds to an off-axis distance of the decay vessel lateral wall to the beamline of 1.45 m, and to a longitudinal distance of the decay vessel upstream window from the upstream face of the TAX dump of 15 m. The background lateral veto wall remains to be integrated into the layout. The SHADOWS detector must be able to reconstruct and identify most of the visible final states of FIP decays while simultaneously reducing the background to a level of less than 1 event in the whole data set. To this aim, a standard spectrometer with excellent tracking and timing performance, an efficient veto system and some particle identification capability are required. The spectrometer will be made of: * A magnetic muon sweeping system based on magnetised iron blocks (MIB) in front of and alongside the decay volume, to sweep the muons emerging from the dump away from the detector acceptance.* An efficient veto system able to tag the residual muon flux surviving the MIB system before it enters the decay volume. This system is made of two components, an upstream veto and a lateral veto, made of 2 active layers instrumented with micro-megas, to veto muons entering from the front face and from the side of the decay vessel close to the beam line.
The exact configuration of the lateral veto is still being optimized. The sensitivity simulations presented later assume a baseline lateral veto instrumentation along the full length of the decay volume.* A Tracking System able to reconstruct with high accuracy the mass, the decay vertex and the impact parameter with respect to the impact point of the beam on the dump for FIP decays with at least two charged tracks in the final state. The requirements are: i) a vertex resolution of 𝒪(1) cm in the transverse plane over a volume length of ≈ 20 m; ii) an impact parameter resolution of 𝒪(cm) for FIP decays into two charged tracks when the total momentum is extrapolated backward to the impact point of the beam on the dump. Two technologies for the tracking stations are currently under scrutiny: the scintillating fibre option and the NA62-like straw tube option, which is currently the baseline. * The dipole magnet: Two designs of the dipole magnet providing a bending power of about 0.9 Tm are being considered: i) a normal-conducting (NC) option, designed in order to have a power consumption of 287 kW, i.e. 10 times lower than that of the NA62 dipole magnet for the same bending power; ii) a superconducting (SC) option. The NC option is the current baseline. * A Timing Detector with ≈ 100 ps time resolution in order to reduce any combinatorial background (and in particular the muon one, see Sec. <ref>) by requiring the tracks to be coincident in time. The tracks of combinatorial background events are indeed intrinsically out-of-time with respect to each other, as they have origin times spread over the 4.8 s duration of a typical P42 proton spill (an illustrative estimate of the resulting rejection power is sketched at the end of this Section). The timing layer will be made of scintillating bars of 1 cm thickness with SiPM readout. * An Electromagnetic Calorimeter able to reconstruct the energy with a mild resolution of σ(E)/E ≈ 10-15%/√(E(GeV)), a time resolution of a few ns and some pointing capability, in order to reconstruct the mass of fully neutral decays such as ALP →γγ. Two options are currently under study: the SplitCal option and a StripCal option, based on scintillating strips. The StripCal option looks very compelling and represents the baseline to date.* A Muon Detector to positively identify muons, with timing capabilities to reinforce the rejection of the combinatorial muon background in combination with the timing detector. The muon detector will be based on scintillating tiles with direct SiPM readout. This technology allows a compact, efficient and cost-effective detector to be built. The measured time resolution per station is 𝒪(250) ps. The baseline solution to reduce the background of inelastic interactions of neutrinos with the air of the decay volume (Section <ref>) will be to put the decay volume in a mild (≈ 1 mbar) vacuum. A compelling alternative is a decay volume made of a balloon filled with helium, to be studied for the Technical Design Report (TDR). The baseline layout of the spectrometer with the in-vacuum decay vessel is shown in Figure <ref>. The spectrometer integrated in the experimental area close to the dump is shown in Figure <ref>. Directly downstream of the main SHADOWS detector, a specific neutrino sub-detector system called NaNu (North Area NeUtrino Experiment, <cit.>) will be positioned approximately 50 m downstream from the beam dump and 0.6 m off-axis. The baseline concept involves two main detector components: the "active detector" and the "emulsion detector", both having dimensions of 45 × 45 × 100 cm^3.
These detectors will be partially located inside an existing dipole magnet at CERN with gap dimensions of 50 × 100 × 100 cm^3 and a magnetic field strength of 1.4 T generated by a current of 2500 A. The transverse plane of the NaNu subdetector, facing the interaction point, has a total size of 45 × 90 cm^2. The active detector component, positioned close to the beamline, is a calorimeter system that utilizes passive tungsten plates and plastic scintillators, with tracking capabilities provided by Micromegas chambers. Its purpose is to study muon neutrino interactions. The emulsion detector consists of tungsten plates interleaved with emulsion films and is designed to study interactions involving tau and electron neutrinos. The passive material in both systems amounts to a total mass of approximately 2.4 tons in each detector component. Following the detectors, there is a spectrometer for measuring the momentum of muons, utilizing a 1.5 T magnetic field over a length of 1 m. Depending on funding availability and the feasibility of reducing the muon background, it is possible to replace the active detector component with a second emulsion-based detector design, which could increase the expected number of tau neutrino interactions by up to a factor of five. The schematic layout of the baseline NaNu version is shown in Figure <ref>. The SHADOWS detector description with its sub-detector options is fully documented in the SHADOWS Proposal <cit.>, including still open issues.
§.§.§ Present status, required R&D
To a large extent, prototypes or even full-size detectors based on the technologies proposed for SHADOWS have already been built or operated. Hence, in most cases the R&D is meant to further optimize the design of an already well-established and known technology, rather than to prove that a given technology is suitable for the task. * Upstream and Lateral Vetoes: The measured performances of Micromegas prototypes are: i) a few mm spatial resolution; ii) MHz/cm^2 rate capability; iii) 10-20 ns time resolution; iv) >95 % single-layer efficiency. A dedicated prototype for SHADOWS is currently being built.* Tracking system: Detectors in operation (the SciFi Tracker in LHCb and the Straw tracker in NA62) guarantee the reliability of the two technologies under consideration. A thorough R&D is expected to happen in the coming years.* Timing layer: The scintillating material will be chosen from what is commercially available. The scintillating bars will then be read out at both ends with commercially available SiPMs, with the SiPMs mounted on front-end (FE) electronics Printed Circuit Boards (PCBs) derived from those developed and produced for other projects such as the ATLAS Phase-II ITk Strip upgrade.* ECAL: A SplitCal prototype has already been built and successfully operated in the context of the R&D for the SHiP detector <cit.>. The StripCal option is currently being studied. A dedicated R&D is foreseen to happen in the coming years. * Muon Detector: A thorough R&D has already been performed in the past 2 years within the AIDA-Innova European Grant. Two SHADOWS full-size prototypes have been built and used in June 2023 to measure the off-axis muon flux in the ECN3 cavern.* TDAQ: The TDAQ system will be in common with HIKE as much as possible. This will allow the design of a high-performance and cost-effective system and the sharing of expertise and person-power with HIKE.
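As a rough illustration of the rejection power of the ≈100 ps timing requirement discussed above, the sketch below estimates the probability that two uncorrelated tracks, with arrival times uniformly distributed over the 4.8 s spill, fall within a narrow coincidence window. The ±3σ window and the assumption of fully uncorrelated, uniformly distributed arrival times are illustrative simplifications, not figures from the proposal.

```python
import math

# Illustrative estimate of the combinatorial-background suppression from the timing
# coincidence requirement. Assumptions (not from the proposal): the two tracks are
# uncorrelated, their arrival times are uniform over the spill, and a +-3 sigma
# window on the measured time difference is applied.
sigma_t_s    = 100e-12                    # single-track time resolution (~100 ps)
sigma_diff_s = math.sqrt(2) * sigma_t_s   # resolution on the time difference of two tracks
window_s     = 2 * 3 * sigma_diff_s       # full +-3 sigma coincidence window (~0.85 ns)
spill_s      = 4.8                        # duration of a typical P42 spill

p_accidental = window_s / spill_s         # chance that two random tracks coincide in time
print(f"coincidence window ~{window_s * 1e9:.2f} ns, "
      f"accidental-pair probability ~{p_accidental:.1e}")
# -> ~0.85 ns window and ~1.8e-10 per random track pair, i.e. roughly ten orders of
#    magnitude of suppression before any other selection is applied.
```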
§.§ BDF/SHiP §.§.§ Physics caseBDF/SHiP is a state-of-the-art experimental setup designed to perform a generic search for FIPs with maximal sensitivity in a region of mass and coupling that is only accessible with a dedicated beam-dump configuration. The physics programme includes searches through both decay and scattering signatures. The beam parameters listed in Table <ref> for BDF/SHiP give SHiP access, annually, to ≈ 2×10^17 charmed hadrons, ≈1.4×10^13 beauty hadrons, ≈ 2× 10^15 tau leptons, and 𝒪(10^20) photons above 100 within the acceptance of the detectors. The overall detector concept provides sensitivity to as many final states as possible <cit.>, including both fully and partially reconstructed final states, to ensure model-independent searches. Sensitivity to decay modes with neutrinos enable SHiP to explore for instance HNLs with enhanced U_τ – coupling and neutralinos.The BDF/SHiP physics programme was explored in detail in a dedicated physics book in 2015 <cit.> prepared by a large collaboration of theorists. It has been further elaborated over the years and was part of the comprehensive coverage of the field in the EPPSU 2020 Physics Briefing Book <cit.>. Beyond the exploration of FIPs, BDF/SHiP is also particularly suitable for a rich program of tau-neutrino physics and measurements of neutrino-induced charm production. More details on both these aspects can be found in section <ref>. It has also been shown that the BDF/SHiP target system can give unique access to a high-intensity neutron spectrum <cit.> that is not easily accessible at spallation facilities. This makes it possible to implement a user platform <cit.> for studying neutron-induced reactions on short-lived isotopes that is relevant for nuclear and astrophysics <cit.>, as well as for material testing <cit.>, and radiation-to-electronics (R2E) studies.The BDF/SHiP physics performance is anchored in an optimised acceptance to all FIP production mechanisms <cit.> accessible with the 400 protons, combined with a highly efficient background suppression. The background suppression relies on a set of critical components: * target of high density material with short interaction length to suppress weak decays of pions and kaons to muons and neutrinos,* iron hadron stopper to absorb hadrons and electromagnetic radiation produced in the dump,* magnetic muon shield starting with magnetisation of the hadron stopper and followed by free-standing magnets to deflect the muons produced in the dump (≈10^11 per spill), away from the detector acceptance,and in particular for the search for FIP decays: * background taggers fully surrounding the decay volume, both upstream and on all sides, to protect against residual muons leaking through the shield, and against hadrons from muon and neutrino deep-inelastic scattering (DIS) interactions, as well as cosmics,* vacuum in the decay volume to suppress in particular neutrino DIS. The detector systems provide further suppression by the reconstructed quantities in terms of fiducial volume, track quality, vertex quality, impact parameter at the dump target, timing, and particle identification. The designed redundancy in the background suppression allows for a common, very simple and robust event selection with these quantities, and to measure background components by relaxing criteria. The selection has been demonstrated through full simulation to be entirely inclusive with respect to different types of long-lived particle decays. 
This ensures maximum sensitivity in the FIP searches, while remaining generic to new models that may be proposed in the future. In addition to improving present constraints on many models by several orders of magnitude, the SHiP decay spectrometer allows distinguishing between different models and, in a large part of the parameter space, measuring parameters that are relevant for model building and cosmology. At the limit of sensitivity of other experiments, BDF/SHiP expects 𝒪(100–1000) events throughout the mass range. These features make BDF/SHiP a unique direct-discovery tool for FIPs. Moreover, together with the direct search for Light Dark Matter (LDM) and the neutrino physics programme, BDF/SHiP represents a wide-scope, general-purpose beam-dump experiment.
§.§.§ Experiment description
A detailed description of the detector, its design and its performance from measurements with prototypes in test beam has been reported in Refs. <cit.> (complete list of dedicated reports in <cit.>) and updated in <cit.>. Below is a summary of the most relevant features of the SHiP detector. The SHiP experiment is composed of a muon shield and a dual system of complementary apparatuses, shown in Figure <ref>. The upstream system, the Scattering and Neutrino Detector (SND), is designed for the search for LDM scattering and for neutrino physics. The downstream system, the Hidden Sector Decay Search (HSDS) detector, is designed to reconstruct the decay vertices of FIPs, measuring the invariant mass and providing particle identification of the decay products. The revision of SHiP from the original Comprehensive Design Study (CDS) <cit.> in ECN4 to the smaller ECN3 experimental hall has required reducing the lateral dimensions of the HSDS spectrometer. The aperture of the spectrometer has been reduced from 5 m width and 10 m height to 4×6 m^2, consequently also leading to a reduction of the decay volume and the particle identification systems in height and width. The lengths of the decay volume and the HSDS detector systems remain unchanged. This work has been accompanied by an effort to shorten the muon shield. The aim of bringing the experiment closer to the proton beam dump is to preserve the signal acceptance for all physics modes, production and scattering/decay kinematics convolved together, with a detector that is also decreased in cost. The first studies of the experimental layout for ECN3, as described in the LoI <cit.>, continued focusing on a muon shield entirely based on NC magnets. The studies led to a fully developed NC alternative with a ≈ 5 m shorter muon shield, a 3 m shorter configuration of the SND, and acceptable background rates and sensitivity. First explorations with SC technologies were done during the CDS phase and have continued in the context of ECN3 <cit.> with the help of external expertise. These studies have converged on an optimised hybrid muon shield in which the first section is based on SC technology, and the second, alternate-field section is based on NC technology. This has made it possible to further reduce the overall length of the muon shield by ≈ 5 m, and to implement the SND with a length of about 6 m.
Given that the investigations of the SC magnets are promising, and that they lead to an overall reduction in size of the NC section of the muon shield, the experimental layout, physics performance, and cost have been evaluated with the hybrid muon shield <cit.>, shown in Figures <ref> and <ref>. The SND detector consists of an LDM/neutrino target with vertexing capability, incorporated in the form of tungsten plates alternated with emulsion films and fast electronic detector planes. The SND target system is followed by a muon spectrometer designed to identify muons produced in the ν_τ interactions and to determine their charge and momentum with high efficiency. The electronic detector planes in the SND target region are based on scintillating fibres. The configuration allows reconstructing the shower produced by the recoil electron in LDM scattering to determine the initial particle angle and energy. In addition, the micro-metric accuracy of the nuclear emulsion provides topological discrimination of LDM interactions against neutrino-induced background events. For the neutrino physics programme, the emulsion technique is crucial to detect tau leptons and charmed hadrons by disentangling their production and decay vertices with the help of the sub-micrometric position and milliradian angular resolution. With respect to the CDS design, the magnet around the LDM/neutrino target has been removed, leading to a loss of the charge determination in the hadronic modes of the ν_τ interactions. Instead, the magnetised muon spectrometer distinguishes between ν_τ and ν̅_τ in the golden mode τ→μν̅_μν_τ. Without the magnet around the target, the momentum of charged pions and kaons is measured through the detection of their multiple Coulomb scattering in the target <cit.>. Neutral pions are also detected in the emulsion films and their energy measured. The detector is designed to observe all three neutrino flavours and to perform searches for new particles through their scattering off the electrons and nucleons of the SND target. The LDM/neutrino target and vertex detector are implemented as walls of emulsion cloud chamber (ECC) technology. Each wall consists of alternating layers of nuclear emulsion films, acting as the micrometric precision tracking stations, interleaved with tungsten layers, acting as the high-density passive layers of the target. The role of the target tracker between the walls is to provide the time stamp of the interactions located in the ECCs and to connect muon tracks between the target and the muon spectrometer. A conceptual layout of the SND detector is shown in Figure <ref>. With the shorter distance to the proton target, the same yield of tau neutrinos as in the CDS design may be achieved with a ≈3 m long, 0.4×0.4 m^2 LDM/neutrino target (8 m in <cit.>), thus reducing the required surface of emulsion films to 145 m^2. Immediately downstream of the SND, the HSDS detector measures both fully reconstructable decays of FIPs and partially reconstructable decays with neutrinos in the final state, in a 50 m long decay volume of pyramidal frustum shape that is delineated by the deflected beam-induced muon flux. The HSDS decay volume is followed by a spectrometer. The main element of the spectrometer is the spectrometer tracker, designed to accurately reconstruct the decay vertex, the mass, and the impact parameter of the reconstructed FIP trajectory at the proton target. The initial design of the magnet was based on an NC coil <cit.>.
In order to significantly reduce the power consumption, the CDS phase included a study of a new type of superconductor-based design <cit.>. An R&D programme with the goal of developing a demonstrator is currently underway at CERN with the involvement of a SHiP institute. A particle identification system, including an electromagnetic and a hadronic calorimeter, provide particle identification, which is essential in discriminating between the very wide range of models with FIPs, but also in providing information for background rejection. The electromagnetic calorimeter is a scintillator/lead sampling calorimeter, consisting of two parts of 3 and 17 radiation lengths (X_0), respectively, which are mechanically separated in the longitudinal direction. Each part is equipped with a high spatial resolution layer in order to precisely measure the shower axes and allow reconstructing the vertex of ALP→γγ decays and the invariant mass. Measurements of shower profiles with a prototype in test beam show that, with a few mm transverse shower-position resolution in the high-precision layers, an angular resolution of the order of a few mrad is achievable. The longitudinal segmentation of the calorimeter also improves the electron/hadron separation.Background from neutrinos interacting within the decay volume is eliminated by maintaining the decay volume at a pressure of ≈ 1. The decay volume wall is instrumented upstream and on all sides by a system of high-efficiency background taggers in order to provide regional and temporal veto against muon and neutrino interactions in the vessel walls and against particles entering the volume from outside, including cosmics. The taggers covering the surrounding walls (SBT) are based on a liquid scintillator system segmented in cells, resulting in an efficiency of >99%, and ≈ ns time resolution. The tagger on the upstream vessel wall (UBT) is based on three six-layer Multigap Resistive Plate Chambers (MRPC), each with ≈50 ps resolution, 98% efficiency and spatial resolution of a few millimetres. A dedicated timing detector is located between the last spectrometer tracker plane and the calorimeters to provide a measure of time coincidence in order to reject combinatorial backgrounds. It is based on scintillating bars and has a time resolution of about 85 ps. Due to the criticality of the veto systems and the timing detector, they have been through several test-beam campaigns, including measurements with large-scale prototypes. The SHiP physics performance has been evaluated with 15 years of nominal operation, i.e. 6× 10^20 PoT. It has been verified that this is compatible with the zero-background strategy and the constraints from technical/radiation point-of-view in the current accelerator complex, as well as in the implementation of BDF (see Section <ref>).§.§.§ Present status, required R&D The work packages for the BDF and the SHiP TDR studies, including the associated resource requirements, were discussed in the CDS reports <cit.>. The work packages are built on the understanding of the designs developed in the extensive joint studies performed during the six years of the Technical Proposal and CDS phases, which concentrated a large part of the effort on tuning the design of the components to maximise the signal acceptance and minimise the background.All critical components of the facility have been studied, analysed and in some cases prototyped. 
The target system as one of the most challenging components has been through a first validation in a beam test in which the operating conditions of the real target were reproduced <cit.>. All the SHiP sub-detectors have undergone at least a first level of prototyping and measurements with the prototypes in test beam <cit.>. In particular, the MRPC technology for the UBT, the liquid scintillator technology for the SBT, and the scintillating bar technology for the timing detector have had larger-scale prototypes in test beam. The beam tests have revealed the main technological challenges to be addressed during the TDR phase. With this information at hand, all major subsystems of the SHiP detector have been through conceptual design reviews, with the focus on outlining the work up to the TDR. The SND@LHC experiment <cit.>, currently installed and operating in TI18 of the LHC, is a successful demonstration of the detector concept first developed for the OPERA experiment <cit.>, and then improved within SHiP for an environment with a significantly higher rate of background. Collaboration with SND@LHC is established to pursue the development of the SND detector for BDF/SHiP, and most importantly, the studies towards an upgraded SND@LHC can make significant contributions to the LDM/neutrino programme at BDF/SHiP.The principal technological challenges for the experiment lie in the further development of the muon shield, the decay volume and the spectrometer magnet, and involve mechanics and the full-size production. It is of high interest to develop the SC options for the muon shield with the potential to enhance the physics reach, and for the main spectrometer magnet with the aim to reduce the power consumption and the operational costs. The integration of the SBT and the HSDS spectrometer tracker is associated with important design challenges that must be addressed early in the TDR phase. § OPERATION AND PROTON SHARING The compatibility and possible proton sharing scenarios between the proposed future experiments in ECN3 and other NA experiments have been studied, also considering the parallel operation of the LHC, AWAKE, HiRadMat and MD sessions <cit.>. §.§ Operation mode For the future proton sharing scenarios, operational periods with and without dedicated ion physics have been considered. Scenarios with dedicated SPS cycles for ECN3 users (dedicated ECN3 spills) as well as scenarios with a concurrent beam delivery to the TCC2 and TCC8 targets (shared spills) have been studied. Different flat top lengths have been analysed taking into account realistic supercycle compositions while respecting the SPS limits on power dissipation in the magnets. The intensities considered are based on operationally achieved values during the past operation of the SPS while an operational efficiency of 80% (consistent with the expectations after the ongoing NA-CONS) has been assumed. Presently, the TCC2 targets are servedsimultaneously with shared spills by splitting the extracted beam, transported via the TT20 transfer line, by means of the two splitter magnets in TDC2 (see Section <ref> and Figures <ref> and <ref>–top). The corresponding transmission efficiencies (used to determine the amount of PoT on the TCC2 targets) are listed in the first column (shared spills/TCC2) of Table <ref>.In this mode of operation, the T10 target in TCC8, serving ECN3, receives the non-interacting fraction of the beam delivered to T4, which then is transferred to TCC8 via the P4/P42 lines. 
The remaining fraction of the beam interacting on T4 serves the H6 and H8 secondary lines. The transmission efficiencies to TCC8 are listed in the second column (shared spills/TCC8–T4 in beam) of Table <ref>. A new mode of operation with dedicated ECN3 spills can be conceived where beam is transported through TT20 and TCC2 and delivered exclusively to TCC8. This scenario assumes that the primary beam can be cleanly transported without splitting in TT20 to the T4 target station bypassing the target with a trajectory bump (see Figure <ref>–bottom). No other NA experiment will receive beam when a dedicated ECN3 spill is delivered. The corresponding transmission efficiencies are listed in the third column of Table <ref>.Operation with dedicated ECN3 spills is characterized by significantly lower beam losses at the splitters and at the T4 target as compared to the operation with shared spills (see Table <ref>), and therefore implies lower prompt and induced radiation as well as a reduction of the overall muon background in the NA. Moreover, as dedicated cycles would be played outside of the shared cycles during which the other NA experiments are taking data, no adverse effect of ECN3 High-Intensity (HI) operation on the backgrounds for the EHN1 and EHN2 experiments is expected. The only exception might be emulsion experiments, which currently are not planned.The RP studies carried out to-date and the various mitigation measures identified conclude that HI operation of ECN3 with dedicated ECN3 cycles is expected to be compliant with the CERN RP code <cit.>. In addition, operation with super-cycles delivering shared spills for EHN1 and EHN2 and dedicated high-intensity cycles for ECN3 remains compatible with the present T4 target and TCC2 TAX design. Therefore, upgrading them is not required, provided that the appropriate machine protection measures are put in place. Recent studies have confirmed the assumed transmission efficiency through the T4 target station and TAX (see Table <ref> - fourth row — T4/TAX) for the dedicated ECN3 spill (third column) while indicating lower values for the shared spills with T4 in beam (second column) <cit.>. For the above reasons the delivery of the required ECN3 intensity with dedicated ECN3 spills is preferred. §.§ Proton sharing The SPS operation has been studied for ECN3 high-intensity <cit.> by optimising SPS supercycles delivering both shared spills for EHN1 and EHN2 and dedicated ECN3 spills considering the operational scenarios presented in Section <ref>. Figure <ref> shows that the experimental requirements for HIKE/SHADOWS (BDF/SHiP) can be met with a dedicated beam delivery while providing ≈1×10^19 PoT/year (≈1.2×10^19 PoT/year) to the other NA experiments, provided no ion run takes place. Similarly, ≈0.6×10^19 PoT/year (≈0.8×10^19 PoT/year) can be delivered in case an ion run (1 month) is included. The integrated intensity to the other NA experiments is maximised by assuming the acceleration of 4.2×10^13 ppp on the shared spill cycles with a 4.8 FT. For some existing NA users this might be problematic due to rate limitations. A carefulscheduling of rate-limited NA experiments exploiting longer cycles with a FT of 9.6 would help to optimise beam delivery and to alleviate this problem. The study demonstrates that ≈0.7×10^19 PoT/year (≈0.8×10^19 PoT/year) can be delivered to other NA users with 9.6-long shared spills interleaved with dedicated ECN3 spills for HIKE/SHADOWS (BDF/SHiP), provided no ion run takes place. 
In case an ion run is included in the operational year, ≈0.4×10^19 PoT/year (≈0.6×10^19 PoT/year) can be delivered to other NA users. An additional optimization would consist of increasing the intensity of dedicated ECN3 spills beyond 2.1×10^13 ppp and correspondingly increasing the FT duration at constant extracted current for the HIKE/SHADOWS mode of operation. Finally, it should be stressed that the PoT numbers would be reduced in case of more frequent LHC fillings, as compared to today's operation, during the HL-LHC era.

The energy consumption of the SPS main magnets and the NA magnets depends on the super-cycle composition. These elements are among the main contributors to the overall SPS and NA energy consumption during beam operation, representing more than 40% and almost 15% of the total SPS+NA consumption, respectively. Supplying beam to a HI facility in ECN3 will not change the power consumption significantly with respect to recent years. For 2022 the total energy consumption of the SPS main magnets was ≈ 170, and the estimated difference for all the ECN3 beam delivery scenarios considered (with 1.2 to 9.6 FT) is small and not larger than ≈ 10%, see <cit.>.

§ REQUIRED MODIFICATIONS AND INTEGRATION

ECN3 HI operation requires modifications of existing facilities. The extent of these modifications and the integration of the new experiments and the associated facilities are analysed in this Section (for more details see <cit.>). The compatibility and synergy with the activities ongoing or planned within the NA-CONS Project are addressed. The NA-CONS project consists of two phases:
* Phase 1: 2022–2028 (up to end LS3), prioritising the primary beam areas TT20, TDC2, TCC2 and the initial section of the NA Transfer Tunnels.
* Phase 2: 2026–2034 (up to end LS4), completing the consolidation of the secondary beam areas.
The areas affected by NA-CONS are schematically shown in Figure <ref>. NA-CONS is expected to guarantee reliable operation in the North Area up to the end of the 2040s, provided regular maintenance is performed, such as the regular replacement of irradiated cables when needed. In that respect, the dedicated mode of operation and the upgrades described in this Section will reduce the radiological impact of HI ECN3 operation to a level comparable to or lower than the present mode of operation for which NA-CONS has been conceived.

§.§ SPS extraction

The consolidation of the electrostatic septa is already planned and funded as part of the Accelerator Consolidation (ACC-CONS) Project during LS3 and ready for Run 4, with a far longer-term R&D objective to replace the septa with systems employing crystal technology. At least a factor 4 reduction of the beam losses is needed to implement the proposed ECN3 HI upgrade without impacting the present-day radiological situation in LSS2. R&D on the LS3 timeline is focused on beam loss reduction techniques that significantly improve the efficiency of the present electrostatic slow extraction system <cit.>. In particular, the development of a low-density version of the septa tanks, of an anode with improved straightness and of thin crystals to `shadow' the septum blade is ongoing with PBC support. The required extraction beam loss reduction factor can be achieved with the crystal shadowing technique developed at CERN <cit.>. Up to a factor of 2 has already been demonstrated at the SPS with beam tests of prototype local and non-local shadowing systems installed in LSS2 and LSS4.
The phase-space folding technique <cit.> can be combined with the crystal shadowing technique to boost the loss reduction to close to a factor 4, although it cannot be combined effectively in the shared mode of operation because the larger emittance of the folded beam will increase beam losses at the TT20 splitters <cit.>.

§.§ TT20, P4 and P42 Transfer Lines

The modifications required for the primary (TT20) and secondary (P4, P42) transfer lines for HI ECN3 operation with dedicated ECN3 cycles are independent of the experiment that could be installed in ECN3. Recent studies of the current TT20 optics have revealed deviations of the measured optics from the model <cit.>. Studies continue in 2023 to solve this issue. As discussed in Section <ref>, a new optics in TT20, rematched to provide a dedicated beam to ECN3 by transmitting it unsplit through the two TT20 splitters <cit.>, will have to be used, and a vertical bump at the T4 target station will have to be implemented for the dedicated ECN3 cycles (see Figure <ref>–bottom). The largest T4 TAX collimator opening of 40×20 will be used to accommodate the large beam divergence at the T4 target. With this configuration the unsplit beam should be transported through TT20/TDC2/TCC2 to TCC8 without losses for the ECN3 dedicated spills.

The front-end of the T4 target is composed of multiple 2 thick beryllium plates of different lengths (between 40 and 500) arranged one on top of another with a vertical separation of 40. This geometry provides the opportunity to bump the beam vertically between the target plates. With the installation of one additional vertical dipole magnet upstream of the T4 target, a closed solution for a trajectory bump can be found in combination with two other magnets already existing in the beamline for trajectory correction. A prototype system with a non-laminated magnet and spare power converter has been installed during the Year-End Technical Stop (YETS) 2022–2023 <cit.>, and it has allowed initial tests with beam and the proof-of-principle of this mode of operation. In the operational configuration, the prototype magnet will need to be replaced by a magnet with a laminated yoke and a new power converter to allow Pulse-to-Pulse Mode (PPM) operation. As a back-up solution to the magnetic bypass option, actuating the T4 target's head between cycles is being investigated.

The MTN magnets in the wobbling system of T4 (that allow for a momentum selection of the secondary beam produced in the T4 target for H6/H8) cannot be operated in PPM. They can be kept powered at constant current and the beam transported into P42 to TCC8 on dedicated ECN3 spill cycles, whilst still providing beam to H6/H8 on shared spill cycles. The fraction of beam that does not interact with the T4 target during shared spills will still enter P42, as it does today for NA62. During Run 4 it might not be possible to optimize the transport of this beam, as for the dedicated ECN3 spills, because some of the power converters of the P42 line will not be operable in PPM. To ease the situation, the beam entering P42 during shared spills can be reduced in intensity by lowering the primary beam intensity and increasing the T4 target length (up to 500, according to the H6/H8 experimental programme). A new absorber located in P42 could then be used in case of unacceptable beam losses in P42 or experimental background in ECN3.
A new laminated vertical dipole magnet, to be installed at the end of the P4 line, will direct the beam onto the absorber <cit.> during shared spills.

§.§.§ Magnets and Power Converters

Two new laminated vertical bumper magnets (bumper MDXVL.24119 and the absorber magnet of MDXV or MDLV type in Figure <ref>) with the corresponding power converters, DC cables and Warm magnets Interlock Controller (WIC) will be required, and the non-laminated magnet MDXV.043048 and eight other MDX corrector magnets will have to be replaced by a laminated version. During Run 4 not all power converters downstream of the T4 target will be capable of operating in PPM, as shown in Figure <ref>–top; only those undergoing consolidation in LS3 will be. After LS4, when NA-CONS Phase 2 for power converters is completed, all magnets and power converters in P42 will be PPM-compatible and dedicated ECN3 spills and shared spills could be optimised independently (Figure <ref>–bottom). The operation with 1.2-long dedicated ECN3 spills might have an impact on the specifications of the electrical infrastructure, while the correspondingly larger number of cycles might entail more frequent maintenance of the power converters. The impact on the cost and maintenance budget of the above two items is being estimated <cit.>, but it is expected to be small.

§.§.§ Beam instrumentation

Efficient operation of the NA primary and secondary beam lines will require detailed optics models and accurate measurements of the beam characteristics to benchmark them, as well as precise beam position and intensity measurements to minimize losses. Consolidation of the beam instrumentation is already part of the NA-CONS scope. High-intensity operation for ECN3 will require additional upgrades <cit.>:
* 4 beam profile Secondary Emission Monitors (SEM) (Beam SEM Grids — BSGs) planned as part of NA-CONS Phase 1 have been installed in P42 during YETS 2022–2023 <cit.> to conduct optics studies during the 2023 run. In order to operate, these monitors require vacuum pressures of at least ≈10^-4–10^-5 mbar, which are not presently reached in the P42 line (see Section <ref>). Due to the tight installation schedule, which prevented upgrading the P42 vacuum system to achieve the above vacuum levels, four small sectors around each of the BSGs have been isolated under the required vacuum conditions and separated by thin (100) Aluminium windows from the rest of the vacuum line. Additional BSGs will have to be installed in TT20 in view of the high-intensity operation;
* 13 new Beam Loss Monitors (BLM) have been installed <cit.> as part of NA-CONS to instrument critical locations, including the EHN1 ramp and ECN3 bridge, and to permit optimisation of prompt beam losses. Additional BLMs will be installed in TDC2 and TCC2 upstream of the targets as part of NA-CONS Phase 1;
* a passive optical fibre dosimeter covered by NA-CONS Phase 1 will be installed at selected locations to measure integrated beam losses outside the coverage of the BLM system;
* following the experience gained during 2021–2022 operation, the Target Beam Instrumentation (TBI) will be upgraded and it will include beam profile monitors based on BSGs <cit.>;
* the installation of consolidated SEMs for beam intensity measurements (BSI) is included in the present NA-CONS Phase 1 scope;
* high-bandwidth spill monitoring to guide the optimization of the spill uniformity and reduce event pile-up is also included in NA-CONS.
§.§.§ Vacuum System

The consolidation of pumping units, main gate valves connecting the pumps to the vacuum system, vacuum gauges and the corresponding cabling and electrical sockets in the primary and secondary transfer lines is already planned as part of NA-CONS Phase 1. The same applies to all vacuum chambers, bellows and windows exhibiting any signs of damage or deterioration (in particular in TCC2). The present vacuum level in TT20 is not expected to limit performance. The scope of the NA-CONS Phase 1 consolidation for the P42 beamline will have to be extended to achieve a vacuum level of at least 10^-4 mbar all along the line without windows. Studies are underway to compute the effect of vacuum pressure on radiation levels, and preliminary results indicate that the above target average vacuum pressure is sufficient.

§.§.§ Beam intercepting devices

The majority of the beam intercepting devices in the transfer lines are already being considered in Phase 1 of NA-CONS to increase reliability and address a series of operational issues encountered during operation in 2021–2022. These include <cit.>:
* The TT20 Target External Dump (TED), which is moved along the beam trajectory when required to prevent beam transport to the downstream part of TT20. The new TT20 TED design will be compatible with increased intensity per cycle (> 4×10^13 ppp) and an appropriate duty cycle consistent with the operational scenarios described in Section <ref>. Cooling of the assembly will be optimised with sustainability in mind, while the core, the shielding and the translation system will be designed considering best practices and adaptation to the foreseen dumped intensities.
* The TT20 Target Beam Stopper Extraction (TBSE) stopper, providing a redundant safety element in case of access to the TDC2/TCC2 area. It will undergo a consolidation of its translation system, while keeping the same absorber.
* The TT20 Target Collimator Splitter Copper (TCSC), protecting each of the two TT20 splitter magnets, will intercept the beam during shared spills only. Therefore, no significant increase in losses and activation is expected as a result of the high-intensity operation in ECN3. However, the TCSCs will remain among the most radioactive components in TT20, and following the operational experience in 2021–2022, design improvements including a low-activation tank with improved handling, new support tables (to allow more accurate alignment while allowing easier remote exchange of the assembly) and an improved water cooling system with quick connections to permit the possibility of installing marble shielding <cit.> have been recommended to take place as part of NA-CONS Phase 1. These upgrades will reduce the dose to personnel intervening in the area, but they will not reduce the splitting inefficiency at the origin of the beam losses, estimated to be ∼ 3% per splitter <cit.>.
Crystal set-ups installed upstream of the collimators and aligned in volume-reflection or channeling mode <cit.> could offer a reduction of the splitting inefficiency by a factor between 2 and 5, and should be further studied and implemented as part of a general campaign of loss reduction for the operation of EHN1 and EHN2.
* The T4 Target, and the corresponding Target Beam Instrumentation Upstream (TBIU) and Downstream (TBID) of it, will not require any modification for the mode of operation considered in Section <ref>, though a re-design of all the TCC2 target stations to guarantee an isostatic positioning of the TBIU, the TBID and the target box itself has been requested to be implemented as part of NA-CONS Phase 1. The beryllium plates will not require any specific upgrade <cit.>, provided that adequate beam interlocks are put in place preventing the impact of high-intensity beams, which would permanently damage them.
* The TAX collimators are suffering from repeated reliability issues linked to their support tables reaching their end of life. The supporting tables of 7 devices (including one spare) are included in NA-CONS Phase 1 to address the reliability issues. The T6 TAX on P62 can be postponed to NA-CONS Phase 2 because the line is presently not in use. The T10 TAX could also be removed from NA-CONS pending a decision on the physics programme to be conducted in ECN3. The dedicated mode of operation for ECN3 does not a priori require a modification of the beam absorbing elements, provided adequate beam interlocks preventing more than a single high-intensity extraction from intercepting the absorbing material are put in place. Two or more extractions of the dedicated beam at 4×10^13 ppp would risk melting the copper in the second block of the TAX if impacting directly <cit.>. In fact, even shared beam at 2×10^13 ppp could damage the blocks in that case.

As mentioned earlier, a new absorber will be installed in the P42 beamline to intercept the proton beam not interacting with the T4 target during shared spills. The latter would be an internal dump under vacuum, with an aperture large enough to allow the beams during dedicated ECN3 spills to pass through. An available spare of an earlier version of the SPS internal beam dump (TIDVG4) <cit.> could be used for that purpose.

§.§.§ Survey and Alignment

The alignment and smoothing of the NA primary and secondary lines is foreseen as part of the NA-CONS project. The connection of TT20 through the T4-TAX system to P4/P42 in TCC2 and the P42 beamline are of interest for ECN3 operation. The work in TCC2 can only take place during LS3 because of the high radiation levels. A permanent survey network will be installed in TCC2 as part of NA-CONS Phase 1 to ease the measurements in the area and reduce radiation to the personnel. The P42 transfer line has been surveyed and smoothed already in YETS 2022–2023.
This activity is limited by the activation of certain collimators in the TT83 tunnel (see Figure <ref>), and additional verifications will have to take place in LS3. NA-CONS includes the update of survey instrumentation and measurement methods and, in particular, the target station consolidation that will ease the measurement of the equipment position.

§.§.§ Radiation protection

In addition to the studies carried out to assess the origin of the observed and expected prompt radiation levels and to identify appropriate mitigation measures (see Section <ref>), further studies were performed to investigate accidental beam loss scenarios along the shallow transfer tunnels (TT83 and TT85, see Figure <ref>) housing the P4/P42 beamline <cit.>. The loss of an entire NA62 spill at the current nominal intensity would create a maximum dose of ≈ 300/spill at the EHN1 ramp, which is acceptable (below the limit of 1) if there are no visitors in the area and provided the beam is interlocked after 1 spill. Presently, an RP monitoring system is installed with an interlock capability. When scaling to the higher intensities given in Table <ref>, the limit would be exceeded and the following two mitigation measures should be implemented:
* halt the extraction and dump the beam in the SPS using an interlock input to the Beam Interlock System (BIS) from the BLM system and selected power converters;
* increase the effectiveness of the shielding at the ramp <cit.>. Replacing the concrete shielding by iron yields a factor 50 reduction in the prompt dose, which would be sufficient to stay well below the 1 limit in case of accidental beam loss. This measure and other possible actions are presently being studied.
The situation at the ECN3 bridge is similar, with ≈ 50/spill reached with the uncontrolled beam loss of the nominal NA62 intensity <cit.>. A reduction of more than an order of magnitude in the prompt dose rates can be achieved with moderate improvements of the shielding at the bridge <cit.>, and civil engineering studies for such a solution have been launched.

§.§.§ Machine protection system

The machine protection architecture foreseen as part of the NA-CONS project is compatible with a dedicated ECN3 beam delivery scenario <cit.>. The BIS is modular and distributed across the NA primary and secondary beamlines. It can be easily adapted to the needs of future beam transfer and target systems. A detailed study on the required machine protection inputs is needed for the HI facility in ECN3 in 2023. The technical specifications are presently being written and new interlocking requirements are now being worked out. The protection of the primary beamlines would exploit signals provided by several pieces of equipment. These include power converters' current monitoring, the WIC, BLM systems, vacuum valves, beam intercepting devices, transfer line elements and the access system. The BIS will have to decode which cycle type is being played, and it will allow SPS slow beam extraction only if safe conditions are met. The system has a reaction time (≈ 100) well below the spill length, to avoid accidental damage to equipment. The deployment of the new BIS is foreseen as baseline in NA-CONS Phase 1 and during LS3; however, there will be a transition period where modern interlocks will coexist with old and software interlocks, because the consolidation of power converters in the auxiliary surface buildings BA81 and BA82 is currently not planned to happen until LS4 (see Figure <ref>).
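As a purely schematic illustration of the interlock logic described above (and not of the actual BIS implementation or its interfaces), the following Python sketch shows how an extraction permit could combine the cycle type being played with BLM readings and the status of other inputs; the threshold values and input names are hypothetical.

# Toy model of a cycle-aware extraction permit; thresholds and inputs are hypothetical.
from dataclasses import dataclass

@dataclass
class InterlockInputs:
    cycle_type: str          # "DEDICATED_ECN3" or "SHARED"
    blm_sum: float           # integrated BLM signal from the previous spill (arbitrary units)
    converters_ok: bool      # power-converter current monitoring within tolerance
    vacuum_valves_open: bool
    access_system_safe: bool

BLM_THRESHOLDS = {"DEDICATED_ECN3": 1.0, "SHARED": 2.5}  # illustrative values

def extraction_permit(inputs: InterlockInputs) -> bool:
    """Allow the next slow extraction only if all inputs report safe conditions."""
    threshold = BLM_THRESHOLDS.get(inputs.cycle_type)
    if threshold is None:
        return False  # unknown cycle type: inhibit extraction
    return (inputs.blm_sum < threshold and inputs.converters_ok
            and inputs.vacuum_valves_open and inputs.access_system_safe)

# Example: excessive losses on the previous spill inhibit the next extraction.
print(extraction_permit(InterlockInputs("DEDICATED_ECN3", 1.7, True, True, True)))  # False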
§.§.§ Timing and controls In comparison to the non-PPM NA operation today, the introduction of a dedicated NA user in ECN3 will bring with it the concept of ECN3 user (USER) and ECN3 destination (DEST), not only for the relevant magnets and power converters, but also for the machine protection system and other systems that need to understand the cycle-type (dedicated ECN3 spills or shared spills) being played, including the NA users and experiments themselves. The distribution of timing signals to the NA is part of NA-CONS but the individual NA user requirements will need to be followed-up carefully to ensure that post-LS3 operation is compatible with a dedicated cycle and NA user in ECN3.§.§.§ Other TDC2/TCC2 infrastructure Although a dedicated beam delivery mode to ECN3 relaxes the need for significant upgrades in TDC2 and TCC2 during LS3, some targeted but significant consolidation is still required in the zone. In addition to the items already discussed in this Section, the following activities should be completed during LS3 to improve the future reliability of the NA in Run 4: * replacement and rerouting of DC and signal cables;* replacement and rerouting of water cooling hoses and connections;* deployment of higher performing fire detection and protection system with corresponding compartmentalisation and smoke extraction. §.§ TCC8/ECN3 The instantaneous and integrated beam intensities requested by BDF/SHiP and HIKE/SHADOWS both require the installation of a new target complex, associated cooling and ventilation systems, and shielding in TCC8.Based on the experience of fixed-target operation at CERN and considering the best practices in the international community, as well as the need to comply with today's radiation protection and radiation safety regulations, the target systems of a new facility will have more stringent design requirements than currently operating facilities (see also <cit.>). Studies executed during 2021–2022 <cit.> proved that a high-power target station could achieve compliance with these criteria, provided that an appropriate shielding configuration as well as specific design requirements are implemented. Recovery of at least 100–120 of passive cast iron blocks from facilities such as the CERN Neutrinos to Gran Sasso (CNGS) hadron absorber and/or from the old PS neutrino facility in TT7 is being investigated in order to optimize costs and enhance sustainability <cit.>.Three options with helium, nitrogen, or alternatively vacuum have been considered for the target vessel that should ensure an inert atmosphere to prevent corrosion and reduce residual gas activation within the target shielding. These need detailed investigations, together with the design of the proximity shielding and services. Optimisation concerning design, integration, handling and manipulation is also being sought in order to allow reasonable maintenance of highly radioactive devices according to the ALARA (As Low As Reasonably Achievable) principle and in particular the possible replacement of the magnets installed between the target and the TAX (for the HIKE/SHADOWS configuration) and/or the target (for both configurations) during the lifetime of the HI facility, expected to span over at least 15 years of nominal operation. While the concepts around the handling of the target and the target complex are well developed, the different components involved and the remote handling techniques also require detailed design and prototyping. Nevertheless, no showstopper has been identified so far. 
The facility design with its shielding and infrastructure has been optimized to be compliant with CERN’s RP code <cit.> regarding dose to the personnel and members of the public. The optimization considers the operational scenarios described in Section <ref>.It takes into account the prompt and residual radiation, air activation and the environmental impact. Also, soil activation and transfer of activation products to groundwater has been considered in the shielding design. However, due to lack of information about the local groundwater transport, very conservative constraints on the activity concentration of longer-lived leachable radionuclides in the soil (^3H, ^22Na) have been applied. A hydro-geological study is underway, and it will provide the information needed to relax the above constraints and to further reduce the required shielding.A preliminary civil engineering study <cit.> has been carried out on the required modifications to the existing infrastructure. The installation of the new target complex with the associated shielding requires civil engineering works in TCC8. The existing floor will be lowered locally and a dynamically confined area with nuclear-grade ventilation will be created with fire resistant walls, separating the target area from the ECN3 hall and the rest of TCC8. The extent of the civil engineering work might be reduced once the results of the hydro-geological study above-mentioned are available.A new service surface building will be constructed with an area of approximately 500□ to house all the dedicated services needed for the target complex sub-systems independently of the experiment, including an area for target system preparation as well as for the handling, repair and waste packaging of spent targets and the various beam intercepting devices. Additional 200□ will be available for the installation of power converters. The local electrical installation would require the construction of a concrete platform to support the transformers measuring about 12×8□ for HIKE/SHADOWS or 12×4□ for BDF/SHiP.Access separation of TCC8 with respect to the rest of the NA will be needed to allow for work on the target and experiment installation during Run 4 whilst beam operation continues in the rest of the NA. Potentially, new fire doors will have to be installed with an impact on the compartmentalisation and on the fire detection scheme. A Fire-Induced Radiological Integrated Assessment (FIRIA) analysis of the new target complex and compartmentalisation study must be conducted. New buildings and shafts will have to be equipped with fire detection as well. The recently renovated EHN2-BA82 control unit can be scaled to protect a larger perimeter. The access control system will have to be implemented according to the new premises and related restrictions (target building, target area, shafts, new service building for power converters and cooling station). The safety aspects in TCC8 and ECN3 will need a detailed and experiment-specific study, to be carried out in the TDR phase.The EHN2 and ECN3 magnets are powered from the BA82 surface building. The consolidation of BA82 is foreseen only in phase 2 of NA-CONS during LS4 and its anticipation to LS3 is not possible, as emerged during the NA-CONS Cost, Schedule and Scope Review (CSSR) <cit.>.Instead, the installation of the converters for ECN3 could be foreseen in the new service building planned for ancillary equipment for the target systems in TCC8. 
The installation work could be performed after LS3 without impacting the operation of M2, and the consolidation of EHN2 and BA82 could take place during NA-CONS Phase 2 as planned. It is expected that the cooling and ventilation capacity available after the consolidation of the cooling towers as part of NA-CONS Phase 1 will be sufficient for the ECN3 upgrade. The new service building hosting the power converters for the experimental magnets will require dedicated ancillary cooling and ventilation equipment, including pumps, control racks and heat exchangers for the demineralized water. In addition, a corresponding local electrical infrastructure will have to be deployed.

Significant logistical support will be required throughout the process of equipment decommissioning and, if needed, decontamination in the area, with particular care for materials such as targets, absorbers and highly activated equipment. Waste packaging and disposal will have to be organised accordingly. Likewise, transport and handling support will be needed for the installation of the target complex and the experimental equipment. An upgrade of the crane in TCC8 will be required to improve its movement system and remote handling capability. The impact on other services such as cryogenics, gas distribution and Information Technology (IT) infrastructure will need to be iterated with the specific experiments.

§.§.§ HIKE/SHADOWS

For the HIKE proposal, a 100-class target complex based on radiation-cooled graphite or He-gas cooled beryllium, similar to the CNGS configuration, is proposed in place of the current T10/TAX target system, which will be completely dismantled. The physics requirements, resulting from the kaon beam, demand a significant shielding improvement with respect to the current NA62 target system (see Figure <ref>). Despite the lower requested number of PoT with respect to BDF/SHiP, the nature of the kaon production, its selection and the in-flight decay of secondaries, coupled with the need to dump the remaining proton beam and hadrons, results in a target system that is stretched over a length of ≈ 27, with several pieces of equipment in the secondary beamline that require access (hence without hermetic shielding); this requires a significant amount of shielding to contain the radiation (more than 300 of cast iron and 600 of concrete). An ad-hoc TAX system with a Cu-Fe sandwich configuration, upgraded cooling and maintenance/handling capabilities will replace the existing one. Full remote handling of the various components is also a pre-requisite to be compatible with ALARA requirements.

The integration of the layout considered for HIKE Phase 1 and SHADOWS has been conceptually validated <cit.>. It must be noted that the design of the SHADOWS background lateral veto wall between the decay volume and the beamline is still ongoing, and this will have to be integrated in the space presently reserved for the experiment shown in Figure <ref>. Access to the equipment between the target and the TAX will require further optimization since, due to the increased shielding and the residual dose rate, maintenance operations are expected to be relatively complex.
Space availability in TCC8 will still have to be thoroughly evaluated as the integration of SHADOWS and the K12 beamline, particularly between the target and TAX for the latter, progresses. A set of radiation protection studies was conducted, based on extensive FLUKA Monte Carlo simulations <cit.>, to optimize the facility and its shielding design <cit.>. The studies were performed for HIKE Phase 1 and the HIKE/SHADOWS BD mode, but not yet for HIKE Phase 2 in view of the ongoing beamline design.

The optimized shielding for HIKE Phase 1 and the HIKE/SHADOWS BD mode allows the soil activation to be reduced to comply with the given design limits, and the residual radiation in the target and experimental areas to be kept at a level guaranteeing access for interventions. It further aims at containing the air activation and reducing the environmental impact from its releases, to respect CERN's dose objective of 10/year for members of the public. The shielding decreases the prompt radiation above ground. While for the area above TCC8 and ECN3 the shielding is sufficient to comply with a Non-Designated Area (NDA), the area downstream of ECN3 must be reinforced with an additional 4 of soil over an extended area, and additional iron shielding in TCC8/ECN3 is required. This allows not only the ambient dose equivalent limit of an NDA within the CERN fence to be met, but also the limit at the CERN fence as well as the above-mentioned dose objective for members of the public.

HI operation in kaon mode will induce a significant radiation dose to the coils of the magnets installed between the target and the TAX. Dedicated shielding will be required to avoid the necessity of frequent magnet replacements <cit.>. An optimisation of the TCX, the fixed collimator mask downstream of the target encasement, is being considered. Due to the high prompt radiation levels (specifically high-energy hadrons and neutrons, orders of magnitude higher than tolerated by commercial electronics according to CERN's Radiation Hardness Assurance criteria), radiation-tolerant electronics will have to be used in the proximity of the SHADOWS detector, and dedicated alcoves with iron shielding will have to be built for the electronics in TCC8 <cit.>. It is expected that for HIKE Phase 1 the front-end of the K12 beamline will have to be rebuilt. In particular, the magnets between the target and the TAX will have to be replaced with new magnets adapted for full remote handling that are busbar-powered to avoid manual cable connection.

The floor in the TCC8 cavern needs to be reinforced with iron shielding in the critical areas of HIKE/SHADOWS to prevent the soil activation from exceeding the given design limits. Under the target, the required excavation for the installation of the iron blocks is 6.5 long, 3 wide and 0.8 deep, while under the TAX the floor will be excavated over a 6 long area in three steps, with a total depth of 1.35 and the width varying between 2 and 6. Additionally, in the area between the target and the TAX the floor will also be lowered by 0.5 over an 8.4 length and by 0.7 over an 8 length. Due to the size of the required modification, the slab will be excavated to the full depth and a new reinforced foundation slab will be built to maintain the structural stability of the tunnel. In addition, for the installation of SHADOWS a 3.5 long, 5.5 wide and 0.5 deep trench will be excavated under the spectrometer magnet.

SHADOWS requires a new power converter for the spectrometer, three for the MIBs and one for the NaNu magnet.
In case the existing MNP33 magnet will be replaced by a new NC or SC spectrometer, one new converter will be required for HIKE instead of the two currently existing ones. The power converters will be installed in the target service building.The present conceptual HIKE Phase 2 layout <cit.> implies a major rework of the K12 beamline in the transition from HIKE Phase 1 to Phase 2 during an LS. It includes the removal of the highly activated TCX and magnets between target and TAX, the removal of the K12 beamline itself, and the installation of an in-vacuum high-power beam dump few meters downstream the production target to reduce the muon background stemming from decays of secondary hadrons produced in the target. Optionally, the possibility to put the TAX absorber entirely under vacuum is considered to reduce background from kaon regeneration at the vacuum windows surrounding the TAX and the air in the TAX holes. This may imply a significant change in the shielding configuration and beamline system integration. The new neutral beamline will consist mainly of three collimation and sweeping magnet stages as depicted in Figure <ref>. The defining collimator is located at 1/3 of the distance to the final collimator and defines the beam angular acceptance of ±0.4, matching the size of the central bore in the proposed HIKE calorimeter. A cleaning collimator stops debris from scatterings in the jaws of the defining collimator, and a final collimator stops scattering products from the cleaning collimator. Charged background from inelastic interactions at the collimators is reduced further by introducing strong sweeping magnets with apertures larger than the beam acceptance. The active final collimator is part of the experiment and defines the start of the fiducial volume of HIKE. The beamline between TAX and experiment is required to be under vacuum.The HIKE Phase 2 layout has not been validated yet, either from a radiation protection or from a system integration point of view as the beamline design is still ongoing. Moreover, it is important to stress that the Phase 2 services will have to be available already during the construction period of Phase 1, as the dose rate at the end of Phase 1 is expected to be very significant. Ongoing studies seem to indicate that there is no need to keep the TAX absorber under vacuum while optimization of the spot size at the target is being considered to reduce the power density requirements on the proton dump <cit.>. In addition, minor modifications to the P42 beamline will be needed, i.e., re-alignment of the last three dipole magnets, if the experiment decides to run at a production angle larger than 2.4. The target concept would allow to increase the angle up to 8 mrad.§.§.§ BDF/SHiP The design of BDF and the technology studies, including prototyping, have been documented in detail in the CDS report and other documents <cit.> (complete list of dedicated reports in <cit.>). The implementation in the existing TCC8 and ECN3 reuses the designs developed for the original proposal. Only the most relevant aspects for the implementation in TCC8/ECN3 are reported below.The present T10 production target in TCC8 would be removed along with the entire K12 beamline and the corresponding magnet power converters as well as all the shielding assemblies. The latter will be reused for the target systems. 
At the upstream end of TCC8, the magnets of the BDF dilution system would be installed along with a vacuum chamber spanning the length of TCC8 towards the BDF/SHiP proton target, with the ≈ 130 drift distance exploited to increase the beam size and develop the dilution pattern on the target's front face.The layout of BDF/SHiP at the end of TCC8 and throughout ECN3 is shown in Figure <ref> <cit.>. The setup consists of the high-density 300-class proton target, effectively acting as a beam dump and absorber, followed by a magnetised hadron absorber and a magnetic muon shield immediately downstream. The shield deflects the muons produced in the beam dump in order to reduce the flux in the detector acceptance to an acceptable level. The hadron absorber is an integral part of the overall shielding complex that is completely surrounding and sealing the target system. Together they form a compact and free-standing target complex, shown in Figure <ref>. The target complex design draws from the experience gained during the CDS phase <cit.>. Significant simplification and reduction in shielding has been made possible thanks to the use of an already operational underground area and thanks to the depth of TCC8 with respect to the surface. The handling of the target systems may be carried out by the existing crane in TCC8 (after the upgrade of its movement system and remote handling capability), taking inspiration from the recently developed design of the new SPS beam dump <cit.> and developments during 2023. This has led to a revision of the shielding and the system handling in ECN3 to cope with the space and access constraints, while fully respecting the constraints from radiation protection, equipment maintenance and operation.In order to maximise the production of heavy flavoured hadrons and photons, and at the same time provide the cleanest possible background environment by suppressing decays of pions and kaons decaying to muons and neutrinos, the target should be long and made from a combination of materials with the highest possible atomic mass and atomic number, and be optimised for maximum density with a minimum of space taken by internal cooling. The corresponding target system developed during the CDS phase <cit.> requires no modifications with respect to the implementation in ECN3. The baseline design is still composed of blocks of titanium-zirconium-doped molybdenum alloy (TZM), cladded by a tantalum-alloy, in the core of the proton shower, followed by blocks of tantalum-cladded pure tungsten. The blocks are interleaved with a minimum number of 5 gaps for cooling, resulting in a total length of twelve interaction lengths. In order to cope with the 350 average beam power, a bunker configuration with cooled stainless steel shielding, passive cast iron blocks (180), as well as concrete and marble shielding is foreseen (for a total volume of ≈360). A pit (4 long, 4 wide and 1 deep) will be excavated under the target station to embed part of the shielding and some of the services.The five metres long hadron absorber stops hadrons and electromagnetic radiation emerging from the proton target. It is equipped with a coil which magnetises the iron shielding blocks <cit.> to serve as the first section of the active muon shield. The rest of the muon shield consists of free-standing magnets. The configuration presented in <cit.>, shown in Figure <ref>, consists of a first SC section followed by a NC section. 
The target complex and part of the free-standing muon shield is located at the end of the TCC8 target hall, while the subsequent muon shield magnets are located in the taller ECN3 experimental hall.The implementation of BDF/SHiP in ECN3 has undergone a series of radiation protection studies with nominal beam operation of 4× 10^19 PoT per year and 15 years of operation  <cit.>.Compared to the original CDS design, it has been possible to significantly reduce the amount of shielding at strategic locations by benefiting from the thick soil layer above TCC8 and ECN3 and already existing activated shielding. Consequently, decommissioning of the facility would also involve less newly produced radioactive waste. Studies of prompt radiation above the target complex and beyond demonstrate that dose rates are well below the limit for an NDA. Furthermore, the doses due to stray radiation at the CERN fence downstream of ECN3 and beyond have been investigated. Results show that the ambient dose equivalent limit for the CERN fence would be met with a substantial margin and that the effective dose to the public would remain well below 10/year and is considered as optimized <cit.>.Residual dose rates in the target area as well as the soil activation were evaluated for the fifteen years of beam operation showing that the target area is well optimized and compatible with the given soil activation design limits <cit.>.Studies for air and nitrogen/helium activation occurring inside of the nitrogen/helium target vessel and the surrounding air have further demonstrated that air and nitrogen/helium releases into the environment have a negligible radiological impact on the public <cit.>. In order to further simplify the installation and increase the lifetime of the facility, it is currently considered the option of employing primary vacuum; this will further reduce the radiological impact of the facility, reduce operational costs and increase the capability of the system to run for longer periods (i.e. by reducing the risks of radiation accelerated corrosion).The radiation to the detector and electronics in ECN3 is expected to be significantly below levels that require special measures with the exception for the first part of the muon shield together with the side of ECN3 along the stream of muons, but in any case not requiring the development or application of radiation tolerant electronics <cit.>.The updated dimensions of the muon shield and the detectors allow integrating SHiP in the existing TCC8/ECN3 hall below the existing bridge cranes (Figure <ref>). While the distance between the Salève-side wall and the decay volume in ECN3 is between ≈4 – 2 (upstream/downstream), the Jura-side wall is at about the same distance of ≈9 – 7 as in the original CDS design, leaving sufficient space for detector assembly and maintenance.Limited modifications to the ECN3 floor will be necessary under the spectrometer magnet in the form of a 5×7 pit with a depth of 1. A detailed investigation of the impact and reuse of existing services and infrastructure has been performed. The implementation of BDF/SHiP will not interfere with services for other NA facilities and a number of existing detector services may be reused. The current access shaft to TCC8/ECN3 of 4×8 is considered a limiting factor in performing the works associated with both TCC8 and ECN3. An additional shaft of 8×8□ at the end of ECN3 would allow separating the activities associated with TCC8 and the target complex, and the detector activities in ECN3. 
It would reduce interference and significantly ease and simplify the detector installation. In order to build the new shaft, part of building 918 will be demolished and the existing services will be rerouted. A new access building will be constructed on top of the shaft and equipped with an overhead crane for transport purposes. Access control will be needed. The reduced surface building 918 appears sufficient to host detector electronics, services and computing, and space for operating the detector. A new power converter will be required for the hadron absorber, six for the SND muon system and one for the decay spectrometer. Power converters will also be needed for the BDF dilution system magnets. The power converters will be installed in the target service building.

§ PRELIMINARY SCHEDULE AND PRELIMINARY COST ESTIMATE

§.§ North Area operation, beamline and infrastructure schedule

The main constraints for the ECN3 HI implementation are the availability of resources, which will be critical given the concurrent demands of the HL-LHC, the ATLAS/CMS Phase-II upgrades and NA-CONS Phase 1, as well as the fact that the upgrade of the accelerator infrastructure upstream of TCC8 must be ready for operation after LS3 to avoid impacting other NA users. In addition, given the length of cool-down required in highly radioactive areas, major modifications in TDC2/TCC2 are (most likely) not compatible with the LS3 timeline. These constraints can be met if the TCC8/ECN3 upgrade is decoupled from the upstream accelerator infrastructure (access, cooling and ventilation, etc.) by allowing at least 1 year to complete work in TCC8/ECN3 after LS3 and during Run 4, whilst the rest of the NA is operational. This possibility has been confirmed by the team responsible for the access system. As already mentioned in Section <ref>, the consolidation of BA82 during LS3 is not feasible due to a lack of resources, and in the following it is assumed that the power converters for the experimental magnets will be hosted in the new target station service building.

A preliminary implementation timeline for the ECN3 HI facility is shown in Figure <ref>. The proposed schedule assumes a decision on the experimental programme by the end of 2023 to address the outstanding issues on the experiment-dependent target and secondary beamline design. Engineering studies must be completed before LS3 to keep compatibility with Phase 1 of NA-CONS and execution in LS3. The TDR/Project Readiness Review (PRR) phases of the intensity upgrade would start immediately in 2024. Note also that the proposed schedule assumes adequate access to test beams by the selected experimental programme until 2030 for the development and calibration of the detectors (see Sections <ref> and <ref>).

§.§ Beamline and infrastructure cost estimate

Following the mandate of the ECN3 Beam Delivery TF, the short pre-study primarily focused on an evaluation of the technical feasibility of a future ECN3 HI facility. Consequently, the provided resource estimates <cit.> are in several cases based only on group expert estimates without time for an extensive engineering study. The overall uncertainty range for the cost estimate summarized here is not expected to be better than C3–C4 <cit.> [A Class 3 estimate uncertainty has a lower range between -10% and -20% and an upper range between +10% and +30%. A Class 4 estimate uncertainty has a lower range between -15% and -30% and an upper range between +20% and +50%.].
Two main cost categories have been identified:
* high-intensity beam delivery (including the corresponding engineering phase):
  * a set of additional NA-CONS requirements, beyond the initial baseline, identified during the ECN3 TF evaluation or as a result of the operational experience in 2021–2022. These consolidation items are not directly linked to the intensity upgrade requirements but to the future reliability of a new facility;
  * high-intensity upgrade specific beam delivery requirements allowing for maximum beam intensities to be safely and reliably delivered to ECN3;
* experiment-specific target complex and infrastructure requirements for TCC8 and ECN3.

With respect to the initial estimate presented in <cit.>, the latest information concerning the shielding requirements for HIKE Phase 1 and SHADOWS and the corresponding adaptation of the civil engineering work implies an increase of ≈ 4 MCHF in the cost for HIKE Phase 1 and SHADOWS. The updated cost estimates are now equal for both options, within the uncertainties. They include:
* high-intensity beam delivery with the corresponding engineering phase: 14 MCHF;
* TCC8 target complex and ECN3 infrastructure: 50 MCHF.
A detailed list of funding requests has been presented during the NA-CONS CSSR <cit.>. The above cost estimate is based on the following main assumptions:
* BA82 consolidation will take place during NA-CONS Phase 2;
* recovery of iron shielding blocks for the TCC8 target station from the CNGS hadron absorber, TT7 dump/absorber, and OPERA;
* staging of the beam instrumentation upgrade compatibly with the available resources;
* no need to increase the scope of the electrical infrastructure consolidation beyond that considered by NA-CONS;
* no need for an additional cooling tower beyond the already planned NA-CONS scope.

The cost estimates for the TCC8 target complex and ECN3 infrastructure are either derived from the studies carried out in the scope of the BDF CDS <cit.>, or from updated civil engineering studies performed by an external consultant, and/or reviewed by equipment/service group experts to provide an expert estimate in line with the pre-study requirements <cit.>. The experiment requirements summarized in Section <ref> and presented in the LoIs <cit.> were considered in compiling a related infrastructure requirement document <cit.> and iterated together with NA-CONS. Despite the different types of target complex implementations, as well as civil engineering needs for HIKE/SHADOWS or BDF/SHiP, the total implementation cost envelope remains comparable when considering equivalent operation periods. The cost includes:
* civil engineering needed for the target complex, including a surface building also housing the required additional power converters and the respective Heating, Ventilation and Air Conditioning (HVAC) infrastructure;
* a new target complex in the form of a dilution system, target systems and shielding, instrumentation and inertisation system (for BDF/SHiP), or a separated production target and TAX implementation with cooling and longitudinal shielding (for HIKE/SHADOWS);
* power converters and DC cables for the experimental magnets and muon shielding;
* TCC8/ECN3 general infrastructure, as well as services and support needed for the new detectors;
* dismantling and decommissioning of the existing TCC8 target complex (for both implementation scenarios), integration and installation activities, as well as beamline and infrastructure modifications required between HIKE Phase 1 and Phase 2.
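As illustrative arithmetic only, the Python sketch below applies the Class 3 and Class 4 uncertainty bands quoted in the footnote above to the two headline cost figures given above (14 MCHF and 50 MCHF); the assignment of a class to each item is an assumption made here for the sake of the example.

# Apply the quoted cost-class uncertainty bands to the two estimates above.
# The class assigned to each item is an assumption for illustration only.
COST_CLASS_BANDS = {          # (lower, upper) relative bounds, widest quoted ranges
    "C3": (-0.20, +0.30),
    "C4": (-0.30, +0.50),
}

def cost_range(estimate_mchf, cost_class):
    lo, hi = COST_CLASS_BANDS[cost_class]
    return estimate_mchf * (1 + lo), estimate_mchf * (1 + hi)

items = {
    "High-intensity beam delivery": (14, "C3"),
    "TCC8 target complex and ECN3 infrastructure": (50, "C4"),
}
for name, (estimate, cls) in items.items():
    low, high = cost_range(estimate, cls)
    print(f"{name}: {estimate} MCHF ({cls}) -> {low:.0f}-{high:.0f} MCHF")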
Spectrometer magnets and muon shields are not included in the cost estimate; they are expected to be primarily covered by the experimental collaborations. No cryogenic services were considered, as none were requested in the experiment LoIs. The operation with 1.2-long dedicated ECN3 spills might have an impact on the specifications of the electrical infrastructure beyond the NA-CONS scope, which remains to be evaluated (see Section <ref>).

The beamline design for HIKE Phase 2 is ongoing and the specifications of the proton beam dump (not included in the initial cost estimate) are being defined. The corresponding radiation protection and integration studies have to be done, with expected implications for the design of the shielding and of the K12 magnets and busbars in the area between the target and the TAX, and in general for the services required for HIKE Phase 2 (see Section <ref>). The requirements for the cryogenic system for the first SC section of the SHiP muon shield have not yet been specified (see Section <ref>). The cost implications of the above items will have to be addressed in the TDR phase.

From the analysis conducted, including the outcome of the NA-CONS CSSR, no showstopper for the ECN3 HI implementation according to the schedule proposed in Section <ref> has been identified.

§.§ HIKE/SHADOWS cost and schedule

The indicative operation timeline of HIKE/SHADOWS, as given in Figure <ref>, assumes the start of nominal operation in 2031. The HIKE timeline to first beam is as follows (see Figure <ref>):
* 2024–2025: detector studies
* 2026: TDR
* 2027–2028: prototyping
* 2028–2029: production
* 2030: installation and possible commissioning with lower-intensity beam
* 2031: data acquisition starts with high-intensity beam
The material cost of the HIKE detector upgrades and new components is estimated at 27.5 MCHF, summing both Phase 1 and Phase 2 contributions. It is largely dominated by the cost of Phase 1.

In case of approval of the experiment by the end of 2023 or the beginning of 2024, SHADOWS plans to prepare a TDR by mid-2026 and to undergo a PRR by the end of 2026. This would allow a timely start of construction in 2027, which could last until mid-2029, followed by one year of installation and commissioning in 2029–2030. The first pilot run could be performed already by the end of 2030 or the beginning of 2031 (Figure <ref>). The following nominal operation is expected to include 8 years during which SHADOWS will operate 50% of the time in BD mode together with HIKE, interleaved with long shutdown periods. This operational configuration will allow the set-up to be consolidated over the lifetime of the experiment.

The overall cost of the SHADOWS detector is driven by the choice of the detector technologies that are currently under scrutiny. The current best estimate is a total material cost of 12.4 M€, out of which 9.6 M€ correspond to the main spectrometer and 2.8 M€ to the NaNu subdetector. The overall uncertainty range for this cost estimate is not expected to be better than C3. SHADOWS expects that the MIB system, the dipole magnet of the main spectrometer and the decay vessel will be provided by CERN as the Host Laboratory, while the rest of the cost will be shared among the other collaborating institutions.

§.§ BDF/SHiP cost and schedule

Given the extensive studies performed during the Technical Proposal and the CDS phases, it is expected that the TDR phase will require 3–4 years, depending on the subsystem.
The construction phase is expected to start in LS3 to allow commissioning of the BDF in 2030, with the first year of data taking in 2031 (see Figure <ref>). LS4, currently scheduled for 2033, presents an opportunity for consolidation, if necessary. The operational schedule stretches over 15 years, with several opportunities for extensions and upgrades of BDF/SHiP, as discussed in <cit.>.

The cost estimate of the detector includes the muon shield, the SND and the HSDS detectors, and all associated infrastructure. The estimate initially prepared for the CDS report has been revised according to the new detector configuration and dimensions, and updated with 2023 rates <cit.>. It amounts to ≈ 51 MCHF with an uncertainty at the level of +30%/-10%, making it compatible with a Class 3 cost estimate. The accuracy is derived from the uncertainties associated with each individual component. At the same time, the total cost is conservative, given that upper estimates have been used and the most expensive options have been included wherever applicable, e.g. the muon shield in the hybrid configuration with a superconducting magnet, the SBT with the maximum number of compartments, etc. <cit.>.

§ PHYSICS POTENTIAL

In the following, the main areas of fundamental physics that could be significantly impacted by the projects proposed at ECN3 are considered. It should be noted that the first two topics, i.e., the physics of feebly-interacting particles (FIPs) and flavour physics, benefit from an extensive body of literature and from many existing and dedicated studies, whereas the third topic, neutrino physics, presents various novel ideas that have not yet been studied at the same level of detail.

§.§ FIP physics

This section focuses on the FIP searches of the experiments proposed at ECN3. After a general introduction on the physics motivations and measurement issues, the projects detail their respective strategies and how FIP simulations and background estimates are performed. The international landscape of competing experiments and proposals is then briefly discussed before presenting exemplary FIP sensitivity projections.

§.§.§ Introduction

The Standard Model (SM) of particle physics is highly successful in accurately predicting experimental observations across many different processes and energy scales. Nevertheless, there are a number of both theoretical arguments and experimental observations pointing to the need for new physics beyond the Standard Model (BSM). From the theoretical perspective, the parameters of the SM appear finely tuned, in particular the electroweak scale (known as the hierarchy problem) and the CP-violating phase of strong interactions (known as the strong CP problem), and there is no compelling explanation for its flavour structure and its accidental global symmetries. On the experimental side, there is clear evidence for a particle-antiparticle asymmetry in the universe beyond the SM prediction, for non-zero neutrino masses and for the existence of a new form of matter called dark matter.

While these problems are often addressed by postulating new physics at high energies beyond the reach of existing particle colliders, there also exist compelling solutions in terms of light particles, which are kinematically easily accessible but have evaded detection due to their tiny couplings.
For example, the hierarchy problem can be solved dynamically through the relaxion mechanism, which introduces a new spin-0 particle (the relaxion), which may have a mass in the MeV range and couple to SM particles via a tiny Higgs mixing <cit.>. The strong-CP problem is commonly solved via the Peccei-Quinn mechanism, which predicts the existence of a new particle (the QCD axion). While QCD axions are usually considered to be extremely light, in certain models they can have a mass in the MeV-GeV range <cit.>. Particles with similar coupling structures (so-called axion-like particles) furthermore arise naturally in many theories with spontaneously broken global symmetries, such as supersymmetry breaking <cit.> or string theory <cit.>.The problem of neutrino masses can be elegantly solved by introducing three right-handed neutrinos below the electroweak scale <cit.>. While the lightest of these sterile neutrinos may be a viable dark matter candidate <cit.>, the two heavier sterile neutrinos (called heavy neutral leptons <cit.>) may explain the baryon asymmetry of the universe through their decays into Standard Model particles. In this set-up, the lightest sterile neutrino would have a mass in the keV range and such tiny couplings that these particles evade all laboratory searches and never enter into thermal equilibrium in the early universe.An alternative avenue to address the dark matter puzzle is to postulate the existence of new particles that are in thermal equilibrium with the SM bath at high temperatures, but then decouple as the universe cools down. Due to its insensitivity to initial conditions, this so-called freeze-out mechanism has for many years been the leading paradigm to predict the abundance of dark matter in the present universe. While it has traditionally been assumed that the interactions that keep the dark matter particles in thermal equilibrium are mediated by SM particles (in particular electroweak gauge and Higgs bosons), this possibility has been increasingly constrained by the non-observation of dark matter signals in collider and direct detection experiments <cit.>. These constraints have led to a shift of focus towards dark matter models that introduce new interactions, mediated by new BSM particles <cit.>. Such interactions may arise for example from simple gauge extensions of the SM, such as a spontaneously broken U(1)' symmetry. These so-called dark sector models can in principle be realized for dark matter masses anywhere between a few MeV (the lower bound being imposed by the agreement of the Big Bang Nucleosynthesis predictions with observed element abundances <cit.>) and hundreds of TeV (the upper bound stemming from the so-called unitarity limit <cit.>). Nevertheless, it is particularly attractive to consider dark matter masses that fall below the energy threshold of direct detection experiments searching for nuclear recoils, which rapidly lose sensitivity for sub-GeV dark matter.Finally, it should also be mentioned that fundamental extensions of the SM, notably string theory, quite generally have a tendency to contain whole sectors of particles, very weakly coupled to the particles that make up the experiments. Such “hidden” or “dark” sectors can be coupled to the SM via “portal” interactions, e.g. dark photons <cit.> or axion-like particles <cit.>.All of the examples above motivate searches for new particles at the MeV to GeV scale, called FIPs.[In the following we will always implicitly assume this mass range when referring to FIPs. 
In general both lighter as well as heavier FIPs can be of interest, cf. <cit.>.] While the same arguments can in principle be used to predict specific coupling structures, the range of possibilities is so large that it makes sense to combine this top-down approach with a more model-agnostic bottom-up approach, in which we consider coupling structures that resemble interactions known from the SM. For scalar particles this means couplings similar to those of the SM Higgs boson, while for axion-like particles inspiration can be taken from the neutral pions, i.e. the Goldstone bosons of chiral symmetry breaking. GeV-scale heavy neutral leptons would interact through mixing with the active neutrinos of the SM, while the interactions of new gauge bosons (called dark photons) would resemble electromagnetism. This approach leads to a small number of well-defined benchmark scenarios, which have been spelled out explicitly by the Physics Beyond Colliders initiative <cit.>. In principle it is possible to search for such FIPs at the energy frontier, i.e. using high-energy proton-proton collisions. A key limitation however arises from the typical detector dimensions, which limit the range of observable decay lengths to a few (tens of) meters. Taking into account the substantial boost factors of light particles produced in high-energy collisions, one immediately concludes that the LHC is not the ideal environment to search for long-lived (neutral) particles with proper decay lengths above 1 metre. Much higher sensitivities can be achieved by experiments operating with larger detectors at lower centre-of-mass energies. Indeed, many of the leading constraints on FIPs stem from beam-dump experiments carried out several decades ago. Given modern beam intensities and detector technologies, it will easily be possible to surpass the sensitivity of these experiments by orders of magnitude and probe deeply into the unexplored parameter regions of FIP models. A typical FIP event in a beam-dump experiment would consist of a FIP being produced in the proton target (with an angular and energy distribution that depends on the specific production mechanism), propagating into the decay volume and then decaying into several SM particles. In the simplest case, the decay produces exactly two charged particles, such that the vertex position and the mass of the decaying particle may be reconstructed. However, in practice many more complicated decay modes are of interest, involving neutral particles (such as photons or neutral pions) in the final state and three-body decays. Detecting and identifying all final-state particles and reconstructing the vertex position and the mass of the decaying particle as accurately as possible is of utmost importance in order to achieve a background-free environment, as well as the possibility of characterising a signal <cit.>. In beam-dump experiments, the physics backgrounds originate mainly from the three following processes related to muons and neutrinos emerging from the dump, for which the experiments have developed mitigation strategies: * Muon combinatorial: This type of background arises when two opposite-sign muons within the same proton spill appear to form a vertex and point back to the target. * Muon DIS: Muons may interact inelastically in the material of the detector or in the surrounding infrastructure. These DIS interactions produce V^0s but also, more importantly, false V^0s due to random combinations of tracks from the same DIS interaction.
Given the small energy transfer, the DIS interactions lead to energetic products that are aligned with the direction of the incoming muon. Hence, the muon DIS background is dominated by interactions originating in the material in the close vicinity of the fiducial volume. * Neutrino DIS: Similarly to the muon DIS background, the dominant source of neutrino-induced background comes from neutrino DIS in the material close to the fiducial volume. §.§.§ HIKE FIP searches Kaon and beam-dump data sets are sensitive to complementary FIP processes and mass ranges. Operation in both kaon and beam-dump modes will allow HIKE to address a uniquely broad range of hidden-sector scenarios covering a mass range spanning from about 10 MeV to a few GeV. Moreover, operation in kaon mode provides excellent sensitivity to non-minimal dark sector scenarios involving short-lived FIPs, which completely evade detection in beam-dump experiments <cit.>. Prospects for searches for FIP production in kaon decays, including non-minimal scenarios, have received much attention recently, and are reviewed in <cit.>. HIKE kaon datasets will bring significant sensitivity improvements for dark photons (via the π^0→γ A^' and possibly K^+→π^+A^' decays), dark scalars (via the K^+→π^+S decay), heavy neutral leptons with electron and muon couplings (via the K^+→ e^+N, K^+→μ^+N and π^+→ e^+N decays), and axion-like particles (via the K^+→π^+a decay). Depending on the FIP mass and coupling constant values, the searches at HIKE will include both invisible final states (via missing mass) and searches for production of FIPs followed by their decays (including prompt and displaced decay vertices). The HIKE projections are detailed in the proposal and, in many cases, based on analyses of the existing NA62 datasets <cit.>. Therefore the projections are robust, and fully account for such factors as background and resolution. The HIKE experiment plans to collect a substantially larger sample than NA62 in dump mode: the HIKE sensitivity curves are obtained for a total of 5× 10^19 PoT, to be compared to the 10^18 PoT expected to be collected in dump mode by NA62 by LS3. The HIKE sensitivity to the FIP benchmarks <cit.> has been studied with the data from NA62 and using full Monte Carlo simulations evolved from the NA62 Monte Carlo framework. This framework is a C++, Geant4-based code, containing a detailed description of the subdetectors and the K12 beamline. The NA62 beam-dump datasets <cit.> have been used to extrapolate the expected background level, and to quantify additional possible improvements due to upgraded or extra detectors. The overall background expected is: < 0.01, <0.8, <0.07 and <0.1 events for μ^+ μ^-, e^+ e^-, π^+π^- (γ), ℓ^±π^∓ final states, respectively.
The detailed geometry of the detector and the technologies chosen for each sub-detector have been included in the Monte Carlo along with a detailed description of the K12 beam line, the muon sweeping system, and the experimental hall. The signals are generated with PYTHIA 8.32 and the background with the Geant4-based Beam Delivery Simulation or BDSIM package <cit.>. The output of the BDSIM package and of PYTHIA 8.32 <cit.> is then handed over to the SHADOWS full Monte Carlo, where the simulation of the interactions of the particles with the detector material and SHADOWS magnetic elements is performed. The inelastic interactions of neutrinos and muons with the detector material are simulated using the GENIE <cit.> and PYTHIA6 <cit.> generators, respectively, where the interactions are forced to occur to enhance the sample and a weight representing their probability is stored together with the event. The muon inelastic interactions have also been studied using the physics lists contained in Geant4, and without forcing the interactions to occur. The results obtained with Geant4 agree within a factor of 2 with those obtained with PYTHIA6. The impact of this difference on the final result is negligible, as the background arising from muon inelastic interactions in SHADOWS is very low. A detailed discussion of the background samples can be found in the Proposal <cit.> together with techniques and methods used to mitigate them. A brief summary of the findings is reported here. The proton interactions with the dump give rise to a copious direct production of short-lived resonances, pions and kaons. While the TAX length is sufficient to absorb the hadrons and the electromagnetic radiation produced in the proton interactions, the decays of pions, kaons and short-lived resonances result in a large flux of muons and neutrinos. Muons and neutrinos emerging from the dump are the two major sources of background for FIP searches in SHADOWS. The muon flux predicted by simulation has been validated with two campaigns of measurements performed in ECN3 both on-axis and off-axis, when the K12 beam line was operated in beam dump mode, in November 2021 and June 2023. The measurement of the muon flux off-axis has been a major achievement with respect to the LoI. Two full-size modules of the muon system and several telescopes with different technologies (silicon pixels, scintillating tiles, and micromegas) have been funded and built specifically for this purpose. The three main backgrounds discussed in the introduction are evaluated in the SHADOWS baseline setup, featuring 1 mbar of pressure in the decay volume instrumented with both the upstream background veto and the lateral background veto on its full length. Before any analysis step, the momentum and directions of every charged track are smeared to account for the detector resolutions after full reconstruction. Signal selection is based on two tracks pointing to a common vertex in the target area. Background rejection makes full use of the precise tracking and timing capabilities of the spectrometer and its background vetos. Table <ref> summarises the overall background in SHADOWS that can mimic a signal final state in the full SHADOWS dataset of 5× 10^19 PoT. It should be recalled that, prior to any suppression technique, the rates of these background components in the geometric acceptance of SHADOWS are significantly lower than in an on-axis setup, especially for neutrino-induced background.
This is a direct consequence of the specific kinematics of these components, which favour small polar angles and therefore emission mostly in the forward direction. In addition, the lower momentum of muons and neutrinos emitted off-axis significantly reduces the probability of inelastic interactions with respect to an on-axis setup, since the inelastic cross-section rises with the momentum of the involved particles. §.§.§ SHiP FIP searches BDF/SHiP's expected physics performance in ECN3 has been studied in detail with the help of the full Geant-based Monte-Carlo framework that was developed for the original proposal. The software framework is based on the FairRoot package <cit.> and is called FairShip. The framework incorporates Geant4 <cit.> to simulate the particles through the target and the experimental setup, PYTHIA8 <cit.> for the primary proton fixed-target interaction, PYTHIA6 <cit.> for muon DIS and cascade production of charm and beauty <cit.>, and GENIE <cit.> for interactions of neutrinos. The production and decays of various types of FIPs have been implemented in FairShip. Mainly PYTHIA8 is used to generate the different signal processes. The validity of the FairShip prediction of the beam-induced particle fluxes has been verified by comparing to the data from the CHARM beam-dump experiment at CERN <cit.>. The most realistic cross-check of FairShip has been performed in summer 2018 in a dedicated experiment at the CERN SPS <cit.>. It directly measured the rate and momentum of muons produced by 400 GeV protons dumped on a replica of the BDF/SHiP target, and found very good agreement between the prediction by the simulation and the measured spectrum <cit.>. The background simulations have been performed with strongly enhanced muon production from the relevant processes, such as resonance decays and gamma conversion. These have been found to produce rare but difficult background events. Dedicated samples of charm and beauty hadrons have been produced. These are both a source of signals and of challenging backgrounds. The effect of cascade production of charm and beauty from secondary hadrons is also accounted for in both signal and background. The SHiP detector response and resolution have been taken into account based on measurements done in test beams with prototypes of all subdetectors during the CDS phase. For the implementation in ECN3, the Geant4 simulation has been updated with the complete geometry of the underground complex, the revised muon shield, and the detectors. Extensive simulations of the three main background components discussed in the introduction have been done in the SHiP setup. In order to get large statistics for the background studies of muon and neutrino DIS, the fluxes obtained from the simulation of minimum-bias events and of charm and beauty production were used to produce DIS events using PYTHIA6 for muons and GENIE for neutrinos, boosting the interaction cross-sections such that every muon/neutrino interacts according to the material distribution of the experimental setup. With the use of the upstream vessel wall background tagger (UBT) and the surrounding wall background tagger (SBT), coincidence timing, and a simple and common set of selection criteria <cit.> based on reconstructed quantities, the resulting expected background levels are shown in Table <ref>. They do not differ significantly from the CDS results <cit.>.
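As an aside, the forced-interaction weighting used for these muon and neutrino DIS background samples can be made concrete with a minimal sketch: the interaction is forced to occur, and the event carries a weight equal to the probability that it would have occurred naturally along its path. The cross-section value, the material budget and the column-density bookkeeping below are illustrative placeholders and not the FairShip (or SHADOWS) implementation.

import numpy as np

# Sketch of forced-interaction weighting for DIS background samples.
# All numbers are illustrative placeholders, not SHiP/SHADOWS values.

N_A = 6.022e23  # nucleons per gram, to a good approximation (Avogadro's number)

def forced_interaction_weight(sigma_cm2, segments):
    """Interaction probability along a path through the listed material
    segments, each given as (density [g/cm^3], length [cm])."""
    column_density = sum(rho * length for rho, length in segments)  # g/cm^2
    n_interaction_lengths = sigma_cm2 * column_density * N_A
    return 1.0 - np.exp(-n_interaction_lengths)  # ~ sigma * n for thin targets

# Example: a muon traversing 50 cm of iron-like material (placeholder geometry)
sigma_mu_dis = 1.0e-30  # placeholder muon DIS cross-section per nucleon [cm^2]
w = forced_interaction_weight(sigma_mu_dis, [(7.87, 50.0)])
print(f"event weight = {w:.2e}")

Summing such weights over the forced-interaction sample then reproduces the expected number of naturally occurring DIS events for the simulated flux.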
The adaptation to ECN3 and the results of the background studies bear witness to the redundancy built into the combined performance of the suppression of beam-induced particle rates and the detector. The selection above is entirely inclusive with respect to different types of long-lived particle decays in the fiducial volume. This ensures maximum sensitivity in the FIP searches, while remaining generic to new models that may be proposed in the future. It preserves close to 100 % of the signal efficiency in fully reconstructed modes, while, in general, the efficiency for partially reconstructed modes is around 70 %, obtained by simulating the signals with the full simulation. It has also been verified that the probability that an actual signal candidate is wrongly vetoed by an uncorrelated hit in the SBT remains insignificant. With the simple regional veto that requires the SBT hit to be upstream of the signal candidate vertex and within a time window of 3×σ_SBT (time resolution σ_SBT≈ ns) the probability is roughly a percent. To avoid irreducible neutrino DIS background from neutrinos interacting with the air molecules inside the vessel, a level of vacuum below 10^-2 bar is sufficient. The background from cosmics can be reduced to negligible levels using the SBT <cit.>. For the sensitivity to LDM scattering, the principal background comes from neutrino events with only one reconstructed outgoing electron at the primary vertex, mimicking the signature χ e^-→χ e^-. The GENIE Monte-Carlo generator <cit.>, interfaced with FairShip, has been employed for a full simulation to provide an estimate of the expected background for 6× 10^20 PoT. After imposing a selection optimised for the signal, the total residual neutrino background amounts to ≈600 events <cit.>. The dominant background contribution arises from ν_e quasi-elastic scattering ν_e n→ e^-p, where the soft proton remains unidentified, and from topologically irreducible sources, i.e., ν_e(ν̅_e) elastic and ν̅_e quasi-elastic scattering (ν̅_e p→ e^+n). LDM signal events have been simulated with the help of the MadDump software <cit.>, assuming pair-production (χχ̅) in the prompt decays of dark photons. In the considered dark photon mass range of M_V≈𝒪(1) GeV/c^2, only contributions from the decay of light mesons (π, η, ω) and proton bremsstrahlung have been included. Prompt-QCD and heavier DY-like production mechanisms have been shown to be negligible. §.§.§ International landscape Broadly speaking, constraints on FIPs stem from two types of experiments: fixed-target experiments searching for the scattering or decay of FIPs in a downstream detector and collider experiments searching for displaced signatures. Fixed-target experiments operate at much higher effective luminosity (i.e. much larger number of collisions) but lower centre-of-mass energy. They achieve very low backgrounds due to the long distance between the interaction point and the decay/scattering volume. As a result, both proton and electron beam dump experiments have unique sensitivity to FIPs with tiny couplings and decay lengths greater than 1 m. For shorter decay lengths, there are strong constraints from existing collider experiments, such as Belle II (see <cit.>) and LHCb (see <cit.>), which are projected to substantially improve their reach with increasing integrated luminosity in the coming decade.
The complementarity of fixed-target experiments and collider experiments is illustrated in Figure <ref>.To extend the reach of the LHC to longer lifetimes, various new experiments have been proposed. These can be divided into detectors placed in the forward direction and detectors placed at large angle. The FASER experiment <cit.> provides a proof of principle of an experiment from the former category, and much more sensitive experiments could be built at a dedicated Forward Physics Facility (FPF) <cit.>. In the following, FASER2 will be taken as a representative proposed forward experiment, noting that similar sensitivities may be achieved by competing proposals such as FACET <cit.>. Among the various proposed experiments at large angle are CODEX-b <cit.>, ANUBIS <cit.> and MATHUSLA <cit.>. Out of these, MATHUSLA is most ambitious, requiring significant civil engineering, whereas the first two may be accommodated within existing facilities. In the following CODEX-b will therefore be considered as a representative of LHC large angle (LHC-LA) experiments, noting that similar sensitivities may be achieved by ANUBIS and significantly better sensitivities by MATHUSLA <cit.>.The two on-going CERN fixed-target experiments most relevant in the context of FIPs physics are NA62 (see section <ref>), which takes proton beam data both in kaon mode and in beam-dump mode, and NA64 <cit.>, which searches for missing energy in an active dump using either a (100-150) GeV electron beam or a 160 GeV muon beam <cit.>. Both experiments already achieve world-leading sensitivity to certain FIPs models and will continue taking and analysing data in coming years. The two most advanced proposals for fixed-target experiments outside of CERN are DarkQuest <cit.>, which is a proposed upgrade of the running SpinQuest experiment using a 120 GeV proton beam at Fermilab, and LDMX <cit.>, which will initially operate with a 4 GeV electron beam at SLAC, with a subsequent upgrade to 8 GeV beam energy discussed in Ref. <cit.>. While the former would be able to probe similar models as NA62 and the various proposed ECN3 experiments, the latter resembles NA64 and focuses primarily on missing energy signatures from (meta)stable FIPs. However, the lower beam energy of DarkQuest means that it will be unable to achieve the same sensitivity as the ECN3 experiments to FIPs above the GeV scale, as well as to FIPs produced dominantly in B meson decays.Finally, particles that couple primarily to photons can also be produced at the European XFEL at DESY. An experiment to detect such particles using an optical dump, called LUXE-NPOD, has been proposed in <cit.>.§.§.§ Results To illustrate the sensitivity that can be achieved by the ECN3 experiments, a number of benchmark physics cases (BCs) are considered. They have been first proposed in <cit.>, further refined in <cit.>, and have since become a community standard. Specifically, they include models of dark photons with visible (BC1) and invisible (BC2) decays, dark scalars (BC4), heavy neutral leptons with electron (BC6) and tau lepton (BC8) couplings, and axion-like particles with photon (BC9) and fermion (BC10) couplings. This subset of benchmarks has been selected to capture the full range of production modes and final states that are relevant for the ECN3 experiments in order to highlight the unique opportunities and facilitate the comparison between the proposals. Some further benchmarks can be found in <cit.>. 
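To give a rough idea of how such beam-dump sensitivity projections are typically constructed, the sketch below folds a FIP production probability with the probability of decaying inside the fiducial volume and compares the resulting yield with the ≈2.3 expected signal events that define a 90% CL exclusion for a background-free counting experiment. Every numerical input is an illustrative placeholder, not a value from the HIKE, SHADOWS or SHiP proposals.

import numpy as np

# Toy yield estimate for a long-lived FIP at a proton beam dump.
# All inputs are illustrative placeholders, not experiment-specific values.

def expected_signal(n_pot, prod_per_pot, p_fip, m_fip, ctau,
                    z_min, z_max, acc_times_eff, br_visible):
    """Expected number of reconstructed FIP decays in a decay volume
    located between z_min and z_max [m] downstream of the dump."""
    lab_decay_length = (p_fip / m_fip) * ctau  # boosted decay length [m]
    p_decay = np.exp(-z_min / lab_decay_length) - np.exp(-z_max / lab_decay_length)
    return n_pot * prod_per_pot * p_decay * acc_times_eff * br_visible

n_sig = expected_signal(n_pot=5e19, prod_per_pot=1e-15, p_fip=25.0, m_fip=1.0,
                        ctau=50.0, z_min=40.0, z_max=90.0,
                        acc_times_eff=0.1, br_visible=0.5)
print(f"expected signal events: {n_sig:.1f}")
# For zero expected background, > 2.3 expected events corresponds to a 90% CL exclusion.
print("parameter point excluded at 90% CL" if n_sig > 2.3 else "not excluded")

Scanning such a yield over mass and coupling, with production and decay properties taken from the benchmark model, traces out the kind of exclusion contours shown in the figures discussed below.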
While in many cases there exist sizeable theoretical uncertainties regarding the production and decay modes of these particles, considerable effort has been made in the past years to reduce these uncertainties and define a common framework for all experiments. That said, there still remain some differences between the experiments both in the underlying assumptions and in the concrete numerical implementations. This should be kept in mind when interpreting the sensitivity projections. The results are shown in Figures <ref>–<ref> with a layout adapted from <cit.>. All curves are exclusion limits at 90% confidence level. For the various projections, the line style reflects the maturity of the background estimates: solid lines correspond to background estimates based on the extrapolation of existing data sets, dashed lines indicate background estimates based on full Monte Carlo simulations, and dotted lines represent projections based on toy Monte Carlo simulations or on the assumption that backgrounds are negligible. In these figures, existing constraints and projections for non-ECN3 experiments are (unless mentioned otherwise) taken from <cit.>[Projections for LHC experiments are shown for the HL-LHC era (FPF integrated luminosity of 3ab^-1), for Belle II assuming 50 ab^-1 (except for BC1, where 20 fb^-1 are assumed), for NA62 assuming 10^18 PoT in dump mode and 10^19 PoT in kaon mode, for NA64 assuming 1 × 10^13 electrons and 2 × 10^13 muons on target, for DarkQuest assuming 10^20 PoT, for LDMX assuming 1.6 × 10^15 electrons on target in an 8 GeV beam and for LUXE-NPOD assuming a 40 TW laser operating for one year.], see there for additional details and references. The ECN3 sensitivity projections are for the baseline detector designs and reference integrated intensities given in Section <ref>. When alternative designs are considered by the experiments, the corresponding sensitivity curves are given in the proposals. For all benchmarks under consideration, the proposed ECN3 high-intensity facility has the potential to make CERN the world leader in the search for MeV-GeV FIPs, improving existing sensitivities by orders of magnitude and offering unparalleled opportunities for discovery. §.§ Flavour physics The flavour sector of the SM has a puzzling and rich flavour structure featuring significant intrinsic hierarchies and the only known source of CP violation. Rare decays of kaons provide a vital and powerful test of this sector. Notably, high precision measurements of processes that are strongly suppressed in the SM – often related to an underlying symmetry structure – make it possible to probe scales far in excess of the direct reach of existing colliders, cf., e.g., <cit.>. Currently, ECN3 hosts the NA62 experiment, a world-leading multi-purpose kaon experiment whose primary goal is the measurement of ultra-rare K^+ decays, most notably K^+ →π^+ νν̅ for which clean theory predictions are available. The most recent measurement of the K^+→π^+νν̅ decay rate based on the NA62 Run 1 (2016–18) dataset <cit.> achieves a precision of ≈ 40 %, expected to be improved to ≈ 15-20 % by LS3. Furthermore, the NA62 experiment pursues a broad programme of rare K^+ and π^0 decay measurements, precision tests of low-energy QCD, precision tests of lepton universality, searches for lepton flavour/number violation, and searches for production and decays of hidden-sector mediators in K^+ and π^0 decays and in beam-dump mode.
The HIKE project would bring the above rare K^+ and π^0 decay programme to a new level of precision with respect to NA62, improving for example the precision of BR(K^+ →π^+ νν̅) by a factor of ≈ 3-4. In addition, HIKE will accomplish a similarly broad rare K_L decay programme. This section details the unique opportunities of the combined K^+ and K_L programme to probe BSM physics and the potential impact on fundamental physics. The overview starts with a discussion of the state of the art on both the theoretical and the experimental fronts. It identifies two scenarios of particular interest: violation of unitarity of the Cabibbo-Kobayashi-Maskawa (CKM) matrix and violation of lepton flavour universality (LFU), for which the sensitivities and potential impact of HIKE are evaluated. Further science goals are then considered in the international landscape. §.§.§ State of the art §.§.§ Recent developments in SM predictions for rare kaon decays The amplitudes for flavour-changing neutral current (FCNC) kaon decays receive contributions from physics at several energy scales. Within the SM, the relevant scales are the electroweak scale, the charm-quark threshold, and the hadronic scale of the order of the kaon mass. Effective field theory (EFT) techniques are used to factorize the amplitudes into Wilson coefficients (typically calculated in renormalization-group improved perturbation theory) and matrix elements of effective operators. For decays that are mediated (or at least dominated) by a Z penguin diagram, the Glashow–Iliopoulos–Maiani (GIM) mechanism suppresses the low-energy contributions, and perturbative uncertainties are well under control. If precise knowledge of the hadronic matrix elements is available, the corresponding decay is "clean". The prime examples are the rare decays K_L →π^0 νν̅ and K^+ →π^+ νν̅. The SM predictions for their branching ratios are exceptionally clean since the requisite hadronic matrix elements can be extracted from the well-measured K →πℓν_ℓ modes, including higher-order chiral corrections <cit.>. Correspondingly, the next-to-leading logarithmic QCD and the next-to-leading logarithmic QED corrections have been calculated, resulting in a residual (non-parametric) theory uncertainty at the percent level <cit.>. The rare decays K_L →π^0 ℓ^+ ℓ^- are less clean, due to the contributions of the photon penguin, but provide important probes of non-vectorial contributions of BSM physics <cit.>. The rare decays K →μ^+μ^- are dominated by long-distance (LD) contributions, which makes their use as a precision probe of BSM physics challenging. Expected progress in the lattice determination of the dominant two-photon intermediate state might change this picture in the future <cit.>. Interestingly, it has been pointed out recently that the direct CP-violating, short-distance contribution to K_S →μ^+ μ^- can, in principle, be extracted experimentally using K_L - K_S interference data <cit.>. Including the effects of indirect CP violation <cit.> and recently obtained information on a relative strong phase <cit.>, the corresponding branching ratio is now also predicted with a residual theory uncertainty at the percent level. §.§.§ CKM unitarity measurements and the Cabibbo angle anomaly The main (semi)leptonic decay modes of relevance for this topic are K→πℓν (K_ℓ 3) and K→ℓν (K_ℓ 2).
Thanks to a global effort involving several experiments, lattice QCD simulations and analytical QCD calculations, an impressive precision has been achieved for these modes, typically below the percent level <cit.>. In the context of the SM, these results offer the possibility of the most precise extractions of the V_us CKM element using the Γ_K_ℓ3 rate and the Γ_K_μ2/Γ_π_μ2 ratio, as well as a clean window to interesting QCD physics <cit.>. On the other hand, precision measurements of these decays represent an important BSM physics probe, e.g., in scenarios with SM-like flavour and CP structure. As with rare decays, EFTs represent a very useful setup that covers a vast variety of BSM physics models. A particularly simple and interesting case is the U(3)^5-symmetric one[Here U(3)^5 refers to the flavour symmetry of the gauge part of the SM Lagrangian. Each U(3) factor refers to a rotation in generation space of the gauge fermion multiplets (q_L, u_R, d_R, ℓ_L, e_R).], where all BSM effects are absorbed in the phenomenological CKM elements <cit.>. Thus the only BSM probe is a CKM unitarity test: |V_ud|^2 + |V_us|^2 + |V_ub|^2 = 1, where the last term can be neglected in practice. If the U(3)^5 symmetry is not imposed, a rich variety of effects take place, such as Lepton Flavour Violation (LFV) effects or non-standard currents, which would affect each decay mode differently. Until a few years ago, there was good agreement among the V_us values obtained from Γ_K_ℓ3 and Γ_K_μ2/Γ_π_μ2 and the CKM unitarity prediction (using the β-decay V_ud value). This overall agreement entailed strong constraints on BSM physics, corresponding to effective TeV scales, with an interesting synergy with LHC direct searches <cit.>. However, recent theoretical and experimental improvements in kaon and beta-decay physics have moved the various V_us determinations apart <cit.>, yielding an interesting yet unclear situation, known as the Cabibbo angle anomaly <cit.>. This intriguing situation has sparked intense activity in model building, EFT studies and the reevaluation of the SM contributions, see, e.g., <cit.>. Figure <ref>, left, shows the current experimental constraints in the V_us-V_ud plane. The tension between the values of V_us from K_μ2 and K_ℓ3 decays is seen in the fact that the corresponding bands do not intersect at a common point with the band for V_ud from nuclear and neutron beta decays. The right panel of Figure <ref> illustrates the constraints from CKM unitarity on the contributions to the leptonic and semileptonic kaon decay amplitudes from right-handed quark currents, following the analysis of <cit.>. Specifically, denoting by ϵ_R the contributions of right-handed currents to the decays of non-strange quarks and by ϵ_R^(s) those to the decays of strange quarks, the following relations to the unitarity deficits can be written: Δ_CKM^(1) ≡ |V_ud^β|^2 + |V_us^K_ℓ3|^2 - 1 = 2ϵ_R + 2Δϵ_R V_us^2, Δ_CKM^(2) ≡ |V_ud^β|^2 [1 + (|V_us/V_ud|^K_μ2)^2] - 1 = 2ϵ_R - 2Δϵ_R V_us^2, Δ_CKM^(3) ≡ |V_us^K_ℓ3|^2 [(|V_us/V_ud|^K_μ2)^-2 + 1] - 1 = 2ϵ_R - 2Δϵ_R (2 - V_us^2), with Δϵ_R ≡ ϵ_R - ϵ_R^(s). The coloured bands in the plot show the constraints from the different constructions of the unitarity deficit in the plane of ϵ_R vs. Δϵ_R; note that the bands intersect by construction.
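For concreteness, the first two relations above can be inverted algebraically to express the right-handed-current parameters directly in terms of the measured unitarity deficits; this is a simple rearrangement of the equations quoted from <cit.>, not an additional input: ϵ_R = (Δ_CKM^(1) + Δ_CKM^(2))/4 and Δϵ_R = (Δ_CKM^(1) - Δ_CKM^(2))/(4 V_us^2), with the third relation then acting as a consistency check. With V_us ≈ 0.22, unitarity deficits that differ at the 10^-3 level translate into Δϵ_R of order 5×10^-3, which illustrates why per-mille-level control of the kaon branching ratios, of the kind targeted by HIKE below, is needed to resolve the right-handed-current hypothesis.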
§.§.§ Lepton flavour universality violation in rare kaon decays In the SM the three lepton flavours (e, μ and τ) have exactly the same gauge interactions and are distinguished only through their couplings to the Higgs field and hence the charged lepton masses. Models of BSM physics, on the other hand, do not necessarily conform to the LFU hypothesis and may thereby induce subtle differences between the different generations that cannot be attributed to the different masses. Among the most sensitive probes of these differences are rare kaon decays with electrons, muons or neutrinos in the final state. The FCNC decay s→ d can be described with the effective Hamiltonian ℋ_eff = -(4G_F/√(2)) V_td V_ts^* (α_e/4π) ∑_k C_k^ℓ O_k^ℓ, where G_F denotes Fermi's constant, α_e the fine-structure constant and the Wilson coefficients C_k^ℓ multiply the effective operators O_k^ℓ. The present discussion can be limited to the following sub-set of effective operators motivated by various anomalies in B physics <cit.>: O_9^ℓ = (s̅γ_μ P_L d) (ℓ̅γ^μℓ), O_10^ℓ = (s̅γ_μ P_L d) (ℓ̅γ^μγ_5 ℓ), O_L^ℓ = (s̅γ_μ P_L d) (ν̅_ℓ γ^μ(1-γ_5) ν_ℓ). For the study of BSM physics contributions to δ C_k^ℓ it is possible to reduce the set of operators further by considering only scenarios where the neutral and charged leptons are related by SU(2)_L gauge symmetry, such that δ C_L^ℓ ≡ δ C_9^ℓ = - δ C_10^ℓ. The individual constraints on δ C_L^e and δ C_L^μ = δ C_L^τ are shown in Figure <ref> (taken from <cit.>), where it is readily seen that the main constraining observables are BR(K^+→π^+ νν̅) and BR(K_L→μμ̅), where for the latter the unknown LD sign plays an important role. For theories with LFU New Physics effects, such that δ C_L^e = δ C_L^μ = δ C_L^τ, the NA62 measurement of K^+ →π^+ νν̅ already puts rather strong constraints on possible lepton-flavour universal BSM effects. However, these constraints are relaxed considerably if LFU-violating BSM effects are allowed. §.§.§ HIKE sensitivity The potential impact of HIKE on the physics landscape described above is discussed in detail in the HIKE proposal <cit.> and briefly summarized below. §.§.§ CKM unitarity Given that the dominant contribution to the uncertainty on the measurement of the first-row unitarity deficit is from the determination of V_ud from nuclear beta decays, the fact that experimental and theoretical sources contribute approximately equally to the current overall uncertainty on V_us, and the substantial set of kaon decay measurements in world data, HIKE can contribute to the understanding of the anomaly mainly by providing experimental confirmation of the leptonic and semileptonic branching ratio values, to help exclude an experimental origin. Indeed, the experimental situation is complex, with a few measurements of the branching ratios playing an outsize role in the overall determination of V_us. For charged kaon decays, HIKE Phase 1 is poised to make a significant impact. Not only does the value of V_us/V_ud used in the unitarity analysis derive from a single measurement of BR(K_μ2) with a 0.27 % total uncertainty <cit.>; this measurement also impacts the normalization of all other branching ratio measurements in the K^+ decay rate fit to world data, e.g., by the PDG or the analysis of <cit.>. The importance of the measurement of the ratio BR(K_μ3)/ BR(K_μ2) to settle this question is discussed in <cit.>.
HIKE could also make a very precise measurement of BR(K_μ3)/ BR(K_e3), an important test of LFU, as well as of other important ratios amenable to measurement with good precision, such as BR(K_e3)/ BR(K_π2), BR(K_μ3)/ BR(K_π2), and BR(K_π2)/ BR(K_μ2), possibly with a unified analysis. With the ratios between the widths for four of the six main K^+ decay modes thus determined, current world data on the branching ratios for K_μ2, K_π2, K_e3, and K_μ3 can be omitted from the K^+ rate fit, allowing HIKE to make a nearly independent determination of the K_μ2 and K_ℓ3 branching ratios.The limiting systematic uncertainties are difficult to predict, but the HIKE sensitivity can be estimated on the basis of past experience. NA48/2 measured the ratios BR(K_e3)/ BR(K_π2),BR(K_μ3)/ BR(K_π2), andBR(K_e3)/ BR(K_μ3) at the level of 0.4 % <cit.>.It should be easy for HIKE to match or exceed this precision, especially for the ratiosBR(K_μ3)/ BR(K_μ2) and BR(K_μ3)/ BR(K_e3), for which significant cancellations of systematic uncertainties are expected.The HIKE Phase-1 sensitivity estimate assumes 0.2 % total uncertainty for the measurements of these two ratios, and 0.4 % total uncertainty for the measurements of BR(K_e3)/ BR(K_π2), BR(K_μ3)/ BR(K_π2), and BR(K_π2)/ BR(K_μ2).The potential improvements to the knowledge of the semileptonic branching ratios for K_L decays from HIKE Phase 2 are more challenging to evaluate.One possible set of HIKE Phase-2 measurements that could be added to the current K_L world data set to improve the precision of the K_ℓ3 branching ratios consists of high-precision measurements of BR(K_e3)/ BR(K_μ3) and BR(π^+π^-)/ BR(K_e3), as well as a good measurement of BR(π^+π^-)/ BR(π^+π^-π^0) with less stringent precision requirements, to assist in normalization via the global fit. The corresponding Phase-2 sensitivity estimates assume total uncertainties of 0.3 %, 0.4 %, and 0.6 %, respectively. These are consistent with or slightly more conservative than the assumptions for HIKE Phase 1. In particular, NA48 made a statistically dominated measurement of BR(π^+π^-)/ BR(K_e3) with a systematic uncertainty of 0.3 % <cit.>. The impact of adding the HIKE measurements from both phases to the global fit in this scenario can be seen in Figure <ref>. The increase in sensitivity from the reduced uncertainties for the branching ratios can be appreciated in the smaller size of the white ellipses. Under the assumption that consistent results are obtained for K_μ2 and K_ℓ3, the values obtained for V_us are perfectly consistent, indicating that if the unitarity deficit is attributed to right-handed currents, they must be SU(3) flavour universal.The level of exclusion of the point ϵ_R = Δϵ_R = 0 is greatly decreased: the current 3.1σ evidence for right-handed currents is reduced to a mere 2.2σ curiosity. In this scenario, while the kaon measurements are consistent, the unitarity deficit remains; the precision obtained in the kaon sector strongly motivates further progress on the determination of V_ud, especially in the theoretical calculation of the radiative corrections.§.§.§ LFU violation Table <ref> summarises the SM predictions for various (semi)leptonic and rare kaon decays from <cit.>, the current experimental status and the HIKE sensitivities. These values are the inputs for the theory analysis described below. The HIKE Phase 1 sensitivity is estimated thanks to the extensive experience of the NA62 experiment, corresponding to a factor four increase in PoT and kaon decays. 
HIKE, with new or upgraded detectors and readouts to profit maximally from the increased beam intensity, will improve the acceptance for kaon decays and keep the random veto under control at much higher intensity. Improved upstream detectors will be used to control the dominant background modes. The sensitivity to the K_L→π^0ℓ^+ℓ^- decays at Phase 2 is determined primarily by the irreducible Greenlee background K_L→γγℓ^+ℓ^- <cit.>. This background is suppressed by exploiting the reconstructed mass of the di-photon system (which peaks at the π^0 mass for the signal), the photon energy asymmetry in the kaon frame (which has a flat distribution for the signal and peaks at ±1 for the background), and the minimal angle between any of the photons and any of the leptons in the kaon frame (which is on average higher for the signal than for the radiative Greenlee process). The expected numbers of SM signal (N_S) and Greenlee background (N_B) events in five years of HIKE Phase 2 operation, evaluated using a full Geant4 simulation, reconstruction and analysis chain, are summarised in Table <ref>. The K^+→π^0π^+π^- background to the K_L→π^0μ^+μ^- decay, with pion decays in flight, is found to be sub-dominant using a full simulation. HIKE is expected to provide the first observation (above 5σ) and measurement of both K_L→π^0ℓ^+ℓ^- decay modes, making it possible to determine the corresponding branching ratios with a precision of 20 %. Following the strategy of <cit.>, projection fits of the Wilson coefficients of equation 5 are made (using SuperIso v4.1 <cit.>) for the future kaon measurements that will become possible with the HIKE programme. The projection fits require both the possible future measured values and the experimental precision. For the latter the expected HIKE sensitivities are taken from Table <ref>, while for the projected central values two scenarios are assumed: * projection A: the central values for those observables with only an upper bound are projected to be the same as the SM predictions, while for the measured ones the current central values are taken; * projection B: the central values for all of the observables are projected with the best-fit points obtained from the fits with the existing data. Neither projection assumes any improvement in the theoretical precision. The projected fits of the two scenarios are shown in Figure <ref>, where the 68 and 95 % CL regions are shown with the two shades of light-green for projection A and the two shades of dark-green for projection B. The two panels correspond to the two possible signs of the LD contributions to K_L→μμ̅. The two scenarios give quite different results, with projection A indicating overall consistency with the SM at the level of 3σ, while projection B clearly departs from the SM at more than 3σ, especially for positive LD. The sign of the LD contributions to K_L→μμ̅ has a clear impact on how precisely BSM physics can be probed; although the theory uncertainty currently overshadows the experimental error, should the theory prediction improve in the future, the reduced experimental uncertainty will be relevant for extracting information on BSM physics as well as for identifying the correct sign of A_Lγγ^μ. §.§.§ Further science goals §.§.§ Lepton flavour violation Individual lepton flavours – electron, muon, and tau number – are conserved in the SM but known to be violated in nature, as evidenced by neutrino oscillations.
No LFV has yet been observed in the charged-lepton sector, but it is generically expected in many extensions of the SM, notably those that aim to generate neutrino masses <cit.>. An observation would provide groundbreaking indirect evidence for new elementary particles, e.g. heavy neutrinos <cit.>, additional Higgs bosons, or leptoquarks <cit.>. The absence of model-independent predictions motivates exploring a wide variety of LFV signatures <cit.>. HIKE will be able to search for LFV in kaon and π^0 decays, reaching sensitivities to branching fractions down to O(10^-13). Recent results from NA62 include limits on the decays K^+→π^+e^+μ^- <cit.>, π^0→ e^+μ^- <cit.>, and K^+→μ^- e^+e^+ ν <cit.>, with analogous charge-flipped final states currently only constrained by older experiments <cit.>. Phase 1 of HIKE will improve on the processes listed above, as well as other modes including K^+→ e^-μ^+μ^+ν, K^+→π^+π^0e^+μ^-, K^+→π^+(π^0)e^-μ^+, π^0→ e^+μ^-. Phase 2 of HIKE will study K_L decays and is likely to improve limits on LFV decays such as K_L→ e^±μ^∓ (π^0)(π^0) <cit.> and K_L→ e^± e^±μ^∓μ^∓ <cit.> that still stem from the BNL-E871 and KTeV experiments. The LFV signatures above implicitly assume heavy new physics, but HIKE will also be sensitive to several LFV channels mediated by FIPs, involving displaced vertices. §.§.§ Lepton number violation While the individual lepton flavours are without a doubt broken in nature, the same is not known for total lepton number: no lepton-number-violating process has ever been observed, in agreement with the SM prediction <cit.>. An observation would again provide evidence for additional particles beyond the SM and have wide-ranging consequences for our understanding of fundamental physics and even cosmology, since lepton number violation could be the reason for the observed dominance of matter over antimatter <cit.>. Neutrino masses can serve as motivation for these violations too: if neutrinos are Majorana particles then lepton number is broken and corresponding signatures are expected, the most sensitive of which is arguably neutrinoless double beta decay (A,Z)→ (A,Z+2)+2 e^- <cit.>. Meson decays provide a complementary probe that is sensitive to different flavour structures <cit.>. Phase 1 of HIKE will be able to improve on the bounds recently set by NA62 in the channels K^+→π^-μ^+μ^+ <cit.>, K^+→π^-e^+μ^+ <cit.>, and K^+→π^-(π^0)e^+e^+ <cit.>, reaching sensitivities to branching fractions down to O(10^-13). Searches for processes with displaced vertices involving emission and decay of a heavy Majorana neutrino, such as K^+→μ^+N, N→π^-μ^+, are also of interest. §.§.§ Precision tests of low-energy QCD Most kaon decays are governed by long-distance physics and are described by chiral perturbation theory (ChPT), the low-energy EFT of QCD. Kaon decay amplitudes are evaluated in the ChPT framework using the so-called low-energy constants determined from experimental data. Comprehensive measurements of kaon decay rates and form factors represent both essential tests of the ChPT predictions and crucial inputs to the theory. A complete overview of kaon decays in relation to ChPT can be found in <cit.>. The HIKE dataset will provide a unique opportunity to perform a wide range of precision measurements of rare and radiative decays of both K^+ and K_L mesons: * Precision measurements of K^+→π^+ℓ^+ℓ^- allow for the determination of the sign of the form-factor a_S, since different combinations of ChPT parameters enter the O(p^4) chiral Lagrangian <cit.>.
* Precise measurements of K^+→π^+γγ, K^+→π^+γℓ^+ℓ^- provide interesting chiral tests, including determination of the O(p^4) weak chiral Lagrangian and relations among low-energy observables <cit.>. * The decays K^+→π^+π^0γ, K^+→π^+π^0ℓ^+ℓ^- are interesting for determining the weak chiral Lagrangian <cit.> and for studying CP asymmetries. * A measurement of K^+→ e^+νγ aiming at O(p^6) will be very interesting since the ChPT Lagrangian terms here are not known from other data, and a recent measurement from J-PARC <cit.> departs from the O(p^4) theory result. * The radiative decay K^+→π^0 e^+νγ has been accurately studied theoretically, and can constrain novel structure-dependent contributions of new physics at the 1 % level <cit.>. * Measurements of the principal kaon decay modes K→ 2π and K→ 3π provide overall information on all isospin amplitudes, ππ phase shifts, the δ I =1/2 rule, and a test of the weak chiral Lagrangian <cit.>. The recent NA62 K^+→π^+μ^+μ^- experimental measurement has already improved the theoretical determination of the form factors <cit.>. HIKE expects to collect background-free samples of several times 10^5 events of both K^+→π^+e^+e^- and K^+→π^+μ^+μ^- decays, allowing for crucial improvements in the precision of the extracted form factors. Measurements of the branching ratios of the decays K^+→ e^+νγ, K^+→π^0e^+νγ, and K^+→π^+γγ are expected to reach a relative precision of a few per mille. The decays K^+→π^+γ e^+e^-, K^+→π^+π^0 γ and K^+→π^+π^0 e^+e^- are expected to be measured with a few per cent relative precision. Studies of the K→ 2π and K→ 3π decays will provide important inputs to ChPT parameter fits. §.§.§ International landscape This overview is concluded with a brief discussion of how the physics potential of HIKE compares with other ongoing or planned experimental efforts. §.§.§ Kaon facilities A central player in the field of kaon physics is the KOTO experiment at J-PARC, whose physics programme is entirely focused on the decay K_L →π^0 νν̅. The Grossman-Nir (GN) bound <cit.> states that under mild assumptions the partial decay width for this process must be smaller than the one for K^+ →π^+ νν̅, which translates to ℬ(K_L →π^0 νν̅) < 4.3 ℬ(K^+ →π^+ νν̅). Given the KOTO upper bound ℬ(K_L →π^0 νν̅) < 3.0 × 10^-9 at 90 % CL <cit.> and the NA62 measurement of ℬ(K^+ →π^+ νν̅), it can be concluded that KOTO is currently not sensitive to models of BSM physics that satisfy the GN bound. Nevertheless, KOTO provides valuable tests of BSM models that circumvent the GN bound. This can happen for example through the direct production of a new long-lived particle X via K→π + X <cit.>. Naively, the production of such new particles is also subject to the GN bound, such that the leading sensitivity should stem from K^+→π^+ + X decays. However, there are several differences, such as experimental acceptances <cit.> and violation of flavour symmetries <cit.>. In the context of such models, KOTO (including its future upgrade KOTO-II) and HIKE are highly complementary. §.§.§ Other probes of flavour physics In the context of LFU violation, rare B meson decays have received substantial interest in recent years. While the hints for LFU violation in b→ sℓ^+ℓ^- transitions have disappeared and ℬ(B_s→μ^+μ^-) is in good agreement with the SM prediction, there is still strong tension in observables such as ℬ(B→ Kμ^+μ^-), ℬ(B_s→ϕμ^+μ^-) as well as angular observables in B→ Kμ^+μ^- and B_s→ϕμ^+μ^- <cit.>.
Together, they point towards LFU-violating BSM physics in the Wilson coefficient C_9 within a global fit <cit.>. Furthermore, the measurements of R(D) and R(D^*) point towards LFU violation in charged currents <cit.>. While the former anomalies might lead to an enhancement of K_L,S→μ^+μ^- <cit.>, the latter are particularly relevant for K→πνν <cit.> since left-handed tau leptons are linked to tau neutrinos via SU(2)_L invariance and the neutrino flavour in K→πνν is not detected. To illustrate the interplay of different constraints and future experiments, three specific models of BSM physics are considered. The first, discussed in <cit.>, introduces a scalar leptoquark S_1 ∼ (3̅,1)_+1/3 coupled only to the third generation of quark and lepton SU(2)_L doublets: ℒ ⊃ λ_tτ q̅_3^c l_3 S_1 + h.c., where q_3 = (t_L, V_td_j d_L^j), l_3 = (ν_τ, τ_L). In this up-quark basis, the coupling to the left-handed down quark d^i_L is proportional to the corresponding V_td_i CKM element. The second, discussed in <cit.>, considers a vector leptoquark SU(2) singlet with hypercharge -4/3 and dominant couplings to third-generation leptons: ℒ ⊃ (κ_fi^L Q_f γ_μ L_i + κ_fi^R d_f γ_μ e_i) V^μ†_1 + h.c. Finally, the third model is based on the top-philic Z' proposed in Refs. <cit.>. In contrast to the model set-up considered there, vector couplings to both muons and tau leptons are considered, giving rise to an interesting interplay between the LHC (which gives the dominant constraints for small Z' masses) and flavour physics (which achieves leading sensitivity for large Z' masses). Exclusion regions, interesting parameter regions and sensitivity projections for the three models are shown in Figures <ref>–<ref>. In all plots, the NA62 exclusion limit corresponds to BR(K^+ →π^+ νν̅) < 0.42 × SM and BR(K^+ →π^+ νν̅) > 2.04 × SM <cit.>, while the NA62 (HIKE) sensitivity projections assume that the SM value for BR(K^+ →π^+ νν̅) will be confirmed with 20 % (5 %) uncertainty. We emphasize that in these models there is non-trivial interference between the SM and the new physics contribution to BR(K^+ →π^+ νν̅). As a result, the deviation from the SM does not simply scale proportionally to the coupling strength squared, and it is possible for the branching ratio to become smaller than in the SM. In such a case the sensitivity improvement in terms of the underlying couplings achievable by HIKE may differ from the naive expectation based on the improvement in precision. §.§ Neutrino Physics Collisions induced by the high-intensity and high-energy proton beam extracted from the SPS and reaching the ECN3 experimental hall will produce copious amounts of neutrinos, which, despite faint interaction rates, will enable a comprehensive neutrino-physics programme at ECN3. In particular, the use of emulsion detectors (cf. Sections <ref> and <ref>) allows for various measurements with the so-far poorly explored ν_τ neutrinos.
An obvious highlight would be the high-significance observation of the ν̅_τ in concurrence with the LHC-neutrino experiments. In general, precise and reliable theoretical predictions for the scattering rates of (anti-)neutrinos on proton and nuclear targets constitute a central ingredient for the interpretation of a wide variety of ongoing and future neutrino experiments. In turn, measurements of these neutrino scattering rates can provide valuable probes of the partonic structure of nucleons and nuclei as well as of fundamental SM parameters. Both the ν-ECN3 experiments and the Forward Physics Facility (FPF <cit.>) proposed at the LHC will be able to carry out at least some of these studies, including with tau neutrinos. After a general overview of the potential physics topics that could be addressed by the projects, specific information on the expected performance of the experiments is given in Sections <ref>, <ref> and, for common issues, <ref>. A summary and comparison of the physics reach in the international landscape, including the FPF proposed at CERN, is provided in <ref>. §.§.§ Physics case An overview of the expected fluxes of neutrino interactions and reconstructed events is given in Table <ref>, complemented by the number of charmed particles expected to be detected by SHiP SND. A number of potentially interesting physics topics are listed below. It should be emphasized that decisive quantitative evaluations are not available for all topics, and that some of them are listed to serve as inspiration for further feasibility studies. * ν̅_τ observation: The ν̅_τ is the only particle in the SM of particle physics that remains to be experimentally observed. * Lepton-flavour Universality (LFU): The ECN3 and FPF neutrino experiments are able to simultaneously measure the ν q →ℓ q' charged-current (CC) ν-scattering cross sections for all three neutrino flavours. This helps to reduce associated systematic uncertainties and allows for a more precise comparison of those cross sections, e.g., to search for hints of BSM physics. Strong constraints however already exist <cit.> since similar contact operators or diagrams would also contribute to meson decays via q → q' ℓν and to LHC scattering via q q' →ℓν. * DIS structure functions F_4 and F_5: Deep-inelastic scattering (DIS) of a neutrino on a nucleon was defined in a generic (model-independent) way assuming intermediate vector boson (IVB) exchange between lepton and hadron currents, with five independent structure functions <cit.>. The most commonly studied ones, F_1 and F_2, only require single-photon exchange. Electroweak effects give rise to F_3. These three have all been measured quite precisely for the proton. F_4 and F_5 are suppressed for small lepton masses. This makes tau leptons the only viable means of accessing these so far unmeasured structure functions via CC tau-neutrino DIS. * Parton distribution functions (PDFs): PDFs are fundamental quantities for describing, e.g., nucleons. Many processes are employed (cf. discussion in Section <ref>) to extract precise PDFs. Sea quarks are generally more difficult to access, which makes electroweak processes, such as CC ν-DIS, a usually fruitful tool for singling out their distributions. In practice, ν-DIS data have been taken on nuclear targets and not free nucleons. It is well established that parton distributions get modified when the parent hadron is embedded inside a nucleus. On one hand, this requires careful studies when using such data for measuring nucleon PDFs.
On the other hand, ν-DIS off nuclear targets sheds further light on nuclear PDFs, complementary to charged-lepton scattering by nuclei, even more so when different target materials are employed. A significant impact on strangeness PDFs, with different systematics compared to other approaches, can be expected here thanks to the unique possibility of directly tagging charm-quark production in emulsion.
* Charm production in neutrino interactions: Neutrino-induced interactions in emulsion detectors can be used to investigate inelastic, quasi-elastic, as well as exclusive charm production, with the added benefit of reconstructing the charm-decay chain. Alternatively, charm hadrons can be identified via their muonic decay channels, without the need for emulsion detectors. Such data sets are beneficial both for studying charm fragmentation, including charmed baryons and pentaquarks or doubly-charmed hadrons in a specific kinematic regime <cit.>, and for studying charm-hadron decays, e.g., new decay channels.
Additional interesting aspects that merit but also require more detailed studies are:
* ν_τ magnetic moment: While the SM predicts very small neutrino magnetic moments to arise at one-loop level (μ_ν∼ 10^-19μ_B × (m_ν/eV) in the Dirac neutrino case), BSM physics could potentially lead to larger magnetic moments <cit.>. Solar neutrino measurements have provided the most stringent constraints on the magnetic moment of tau neutrinos, yielding a limit of μ_ν < 1.3· 10^-11μ_B <cit.>. Independently, the neutrino magnetic moments can be probed at accelerator experiments by searching for neutrino-electron scattering events with low-energy recoils. The currently strongest purely laboratory-based bound on the ν_τ magnetic moment, μ_ν < 3.9· 10^-7μ_B, comes from DONUT <cit.>. While measurements at ECN3 are expected to improve on this (see, e.g., the SHiP study in <cit.>), they will not reach a level comparable to the astrophysical constraints. They therefore provide a welcome independent laboratory confirmation but are likely to be sensitive only to exotic models.
* Study of neutral currents: The study of the neutral-current (NC) ν-scattering rate, or equivalently of the ratios of NC-to-CC cross sections, provides sensitivity to a variety of phenomena. These include, e.g., the weak mixing angle (the same for all flavours; requires high precision) or non-standard neutrino interactions (possible flavour or energy dependence). A comparison with measurements at other experiments with different flavour and energy distributions, such as NuTeV, is expected to provide further input to phenomenological studies. Significant deviations of NC results from the SM could indicate interactions of new FIP particles.
Studies on NC-related phenomena have been performed by SHiP SND (see Section 7.2.5 in <cit.>) and the FPF (see Sections 7.3.2, 7.5.3, and 7.5.8 in <cit.> and <cit.>).
* Sterile neutrino/HNL oscillations: Since SM neutrino oscillations are negligible for the ECN3 and FPF experiments, any sign of an oscillation signal would hint toward the existence of an additional eV-scale sterile neutrino. Taking into account existing constraints, a possible eV-scale sterile-neutrino oscillation signal would cause up to percent-level deviations, which would be experimentally challenging to observe and would require a precise understanding of the expected flux. While a study was performed for the FPF (see Section 7.5.9 in <cit.>), both SHADOWS NaNu and SHiP SND still need to evaluate their sensitivities to eV-scale sterile-neutrino oscillations from known neutrino flavours. The SHiP experiment is sensitive to oscillations of a GeV-scale heavy sterile neutrino between lepton-number-conserving and lepton-number-violating states with m_N ∼ 1 GeV and Δm ∼ 10^-6 eV (see Figure <ref>, taken from <cit.>). The SHiP SND detector sits just in front of SHiP's HSDS decay volume, closer to the beam dump, so it will be interesting to study how it can improve this result at small values of proper time by adding events with heavy-neutrino decays in the magnet or even in the target region.
On a more exploratory note, data from the ECN3 neutrino experiments could potentially help in validating MC simulations for neutrino-oscillation and astroparticle experiments, but more studies would be needed to quantify this physics case.
§.§.§ SHADOWS neutrino measurements
The baseline concept of NaNu, the SHADOWS neutrino detector, foresees two separate detector components: an active detector component closer to the beam line targeting the study of muon-neutrino interactions, and a partly passive detector component based on emulsion, aiming at tau-neutrino physics. A detailed description of the experimental setup and possible extensions is given in <cit.>. Pythia8 was used to estimate the neutrino kinematics at the off-axis location of SHADOWS NaNu, while GENIEv3 and Geant4 were used to simulate the neutrino interactions and their subsequent decay products in the detector. In the following, only the physics reach of this baseline detector system, assuming four full years of operation and a collected data set of 5×10^19 PoT, is summarized. An overview of the expected neutrino interactions and reconstructed event yields [ν_μ/ν̅_μ rates in the NaNu detector include the events in the tungsten-Micromegas part of the detector, without emulsions and closer to the beam-dump axis.] is given in Table <ref>. The expected number of reconstructed ν_μ and ν̅_μ interactions is obtained by requiring a minimal muon momentum of 5 GeV. They are dominantly reconstructed by the active detector component and to a smaller extent by the emulsion detector. Assuming additionally a minimal hadronic energy of the recoil system of 10 GeV, to allow for a sufficiently precise reconstruction of the full event kinematics, the numbers reduce by another 40 %. The hadronic energy is measured using scintillator plates that are interleaved between the passive tungsten plates. An energy resolution of 200%/√(E [GeV]) is expected. Differential cross sections in a two-dimensional binning of 5×5 bins in Bjorken x and squared momentum transfer Q^2 can be measured with statistical uncertainties in the range between 5 % and 10 % for ν_μ and ν̅_μ interactions, respectively (cf. Figure <ref> top left).
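As a rough illustration of where such per-bin precisions come from (not part of the original study), the quoted numbers follow from simple Poisson counting: a bin containing N reconstructed events carries a relative statistical uncertainty of approximately 1/√N. A minimal sketch with purely illustrative event counts (the actual NaNu yields are those quoted in Table <ref>):

import math

# Illustrative only: assume a few thousand reconstructed nu_mu CC DIS events
# spread over a 5x5 grid in (x, Q^2); the real yields are given in the text's table.
n_events_total = 8000          # assumed total reconstructed events (placeholder)
n_bins = 5 * 5                 # 5x5 binning in Bjorken x and Q^2
n_per_bin = n_events_total / n_bins

# Poisson statistics: relative uncertainty ~ 1/sqrt(N) per bin
rel_stat_uncertainty = 1.0 / math.sqrt(n_per_bin)
print(f"~{n_per_bin:.0f} events/bin -> {100 * rel_stat_uncertainty:.1f}% statistical uncertainty")
# A uniform ~320 events per bin gives ~5-6%; once the non-uniform population of the
# (x, Q^2) bins is taken into account, this is consistent with the quoted 5-10% range.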
Those measurements would provide a consistency test of existing neutrino data in the context of global PDF fits. The muon-neutrino measurements at SHADOWS NaNu are expected to be limited by systematic uncertainties, anticipated to be of the order of 2–4 %, as observed in previous cross-section measurements with muon neutrinos (e.g. in <cit.>). Charm-meson production in neutrino events can be identified either in the emulsion target or via the muonic charm-decay channels. In the latter case, the full reconstruction can be performed using the active detector components and no reconstruction within the emulsion is required. Taking acceptance and reconstruction efficiencies as well as minimal momentum requirements into account, about 150 identified charm-meson candidates in a di-muon final state can be reconstructed. The number of identified ν_τ and ν̅_τ interactions exceeds the currently available statistics by a factor of ten, allowing in principle for first ν̅_τ candidates during the first year of operation within the baseline experimental setup. While the signal-to-background ratio for ν_τ is very high, we expect background contributions from charm-induced processes for the reconstruction of ν̅_τ. Those can be distinguished by their decay signatures, yielding a background estimate for the ν̅_τ channel below 2 events and allowing for first experimental evidence of the ν̅_τ at the end of data taking. The inclusive cross section of ν_τ interactions can be measured with a statistical precision of 10 %, and cross-section measurements of ν_τ and ν̅_τ interactions can be used to test the combined effect of the F_4 and F_5 structure functions <cit.> on the ν_τ cross section for the first time, in particular if it is as large as about 30 % at E_ν=20 GeV (even larger for ν̅_τ interactions, and decreasing for higher energies), as predicted by available QCD analyses. Given that the expected ν_τ energies (Figure <ref>) are in the range where the effect is expected to be maximal, a first constraint on F_4 and F_5 could be possible with SHADOWS NaNu. Similar to the determination of the upper limit on the ν_τ magnetic moment by DONUT <cit.>, a study of the ν_τ magnetic moment can be performed at SHADOWS NaNu. It is reasonable to assume similar systematic uncertainties with improved statistical precision. SHADOWS NaNu could probe LFU at the 𝒪(10%) level, with the precision driven by the statistical uncertainty on the tau-neutrino interaction cross section (the electron- and muon-neutrino cross sections are systematically limited, as pointed out before). One may note, though, that LFU violation of that size is currently not plausible at these ν energies <cit.>. The integration of the neutrino detector system, in particular its active components, into the main SHADOWS experiment would allow SHADOWS to extend the search for long-lived particles. Moreover, the emulsion detector can be used for the direct search for signatures of light bosonic dark matter. Detailed studies are still ongoing.
§.§.§ SHiP neutrino measurements
The Scattering and Neutrino Detector in SHiP, SND, consists of three elements: the neutrino target and vertex detector, the target tracker stations, and a muon spectrometer (cf. Section <ref>). The muon spectrometer is meant to measure the charge and momentum of the muons, in combination with the SHiP muon spectrometer of the hidden sector.
Given the correlation between the emission angle and momentum, muons with high momentum will be detected in the SHiP decay spectrometer, and therefore the SHiP SND spectrometer magnet will focus mostly on those with lower momentum, thus relaxing the requirements in terms of field strength times length of the spectrometer. The right plot of Figure <ref> shows the muon momentum spectrum: the portion detectable in the Hidden Sector spectrometer is highlighted as a shaded area and amounts to about one third. Muons with momentum below 50 GeV will have to be detected in the SHiP SND spectrometer. This makes the design of an air-core magnet less demanding and more compact. Three tracking stations are foreseen in the spectrometer, one in front, one in the middle, and the third one downstream. The role of the intermediate station is to detect low-energy muons that will not be in the acceptance of the most downstream station. The field strength and length are being optimised, currently assumed to provide 3 Tm. The Pythia event generator was used to simulate proton interactions with the target and obtain the neutrino flux. This includes a dedicated simulation of the cascade effect <cit.>. Neutrino interactions are described using the GENIE event generator, while the description of the detector response is based on Geant4. The expected rates of reconstructed events of all six neutrino types are given in Table <ref>, together with the expected number of produced charmed particles. These high rates of charmed particles will allow a rich program of charm physics <cit.>. Figure <ref> top middle shows the number of muon-neutrino CC DIS events in each bin of the probed 2D region in x and Q^2 for 6 × 10^20 PoT. The leading systematic uncertainty for an accurate cross-section measurement is the uncertainty on the neutrino flux. This is particularly true for tau neutrinos, which are produced via the D_s decay. Charm production in p+p collisions at 400 GeV was measured with an accuracy better than 10 % by the NA27 experiment <cit.>. A dedicated measurement of D_s production with the identification of the subsequent D_s →τ decay is being carried out by the NA65 experiment <cit.>. They expect to reconstruct about 1000 D_s →τ decays in 2.3 × 10^8 proton interactions with a tungsten target <cit.>. These data, which will become available in the coming years, will narrow down the uncertainty on the tau-neutrino flux. An important aspect is that in a thick target such as the one used for the BDF, charmed hadrons are also produced in the hadron cascade: the relevant process is proton quasi-elastic scattering followed downstream by inelastic scattering of the same proton with charm production on a target nucleus. Simulations show that the charm yield increases by more than a factor of two due to this effect. In 2018, the SHiP Collaboration successfully conducted a feasibility test of the charm-production measurement, including the cascade effect, using the 400 GeV SPS proton beam impinging on a replica of the SHiP target <cit.>. The success of this feasibility test <cit.> paves the way for an extensive measurement campaign. Ongoing and planned measurements should hence make it possible to reduce the systematic uncertainties on the tau-neutrino flux to the percent level. On the other hand, the high statistics accumulated by the experiment will make it possible to define different control samples in which the detection efficiency will be evaluated with data-driven procedures.
It is expected that this procedure will lead to an uncertainty on the detection efficiencies at a level similar to that reached on the tau-neutrino flux. It is worth pointing out that measurements of relative quantities, such as charm production in CC neutrino interactions and the corresponding studies of the strange-quark content of the nucleons, are much less affected by the uncertainty on the absolute flux.
§.§.§ Common experimental issues
The important experimental aspects of the proposed experiments at ECN3 are the expected muon fluxes, the maximum tolerable track fluxes for emulsion detectors, and the corresponding frequency of the emulsion exchange. Table <ref> summarises the assumed parameters for the neutrino experiments at ECN3 as well as at the LHC. It is interesting to note that the expected muon fluxes in SHADOWS NaNu and SHiP SND are close to the muon fluxes experienced by the running FASERν and SND@LHC experiments. The operation of emulsions in the ECN3 neutrino detectors therefore looks feasible both technically and financially.
§.§.§ Physics reach in the international landscape
The ECN3 experiments are exposed to neutrino beams with energies in the range 10–100 GeV. The corresponding energy spectra of interacting muon-neutrino and tau-neutrino events at SHiP SND and SHADOWS NaNu are compared to the worldwide context in Figure <ref>. A variety of historical neutrino experiments has operated in a similar energy range. They predominantly include experiments placed in beams of muon neutrinos or anti-neutrinos, such as CDHSW <cit.>, CHARM <cit.>, CHARM II <cit.>, and CHORUS <cit.> at CERN as well as CCFR <cit.> and NuTeV <cit.> at Fermilab. In particular, the Fermilab experiments have collected up to 10^6 muon neutrinos, which is comparable to the expected rates at SHiP SND. The data collected by these experiments still provide the most precise data set of high-energy neutrino scattering and are used as input for most proton <cit.> and nuclear <cit.> PDF determinations, in particular to probe antiquarks and strangeness <cit.>, to measure the weak mixing angle, or to constrain models of new physics such as Non-Standard Interactions (NSI). Figure <ref> also shows previously obtained measurements of the muon-neutrino–nucleus interaction cross sections as well as their predictions from recent theoretical calculations. Given the expected number of muon-neutrino interactions and the kinematic coverage, SHiP SND and SHADOWS NaNu will provide complementary input to validate and improve those measurements. In addition, there has been another class of accelerator neutrino experiments that were able to detect ν_τ. This includes DONUT <cit.>, which observed 9 directly produced ν_τ, and OPERA <cit.>, which observed 10 ν_τ produced in oscillations. In ECN3, about 1.5× 10^5 and 2× 10^2 tau neutrinos are expected to undergo CC interactions in the SHiP SND and SHADOWS NaNu detectors, respectively. This would significantly increase the number of observed ν_τ events compared to DONUT and OPERA, and—thanks to the employed magnets—separate detection of ν_τ and ν̅_τ events will be possible. Moreover, at least SHiP SND (and the FPF, see below) should be able to significantly improve DONUT's laboratory-based bound on the ν_τ magnetic moment (see, e.g., <cit.>). Laboratory neutrinos with even higher energies are produced only at the LHC. Two experiments, FASER <cit.> and SND@LHC <cit.>, have recently started their operation and have both reported their first observation of neutrinos <cit.>.
FASER is an on-axis detector consisting of an emulsion target followed by a magnetized spectrometer. SND@LHC is a slightly off-axis detector consisting of an emulsion target followed by a hadronic calorimeter and a muon system. These experiments are expected to observe ∼10^3, ∼10^4, and ∼10^2 electron, muon, and tau neutrinos, respectively, during the LHC Run-3 data-taking period. The neutrinos have average energies of about a TeV <cit.>, as shown in Figure <ref>. The currently operating FASER experiment can search for ν̅_τ during LHC Run-3. However, only 𝒪(1) ν̅_τ CC interaction events with measured muon charge are expected, making a high-significance observation very challenging. The far-forward LHC neutrinos make it possible to i) measure neutrino interaction cross sections at TeV energies for the first time and perform tests of LFU, ii) study NC interactions and test NSI <cit.>, and iii) provide input for global proton and nuclear PDF fits, including studies of intrinsic charm <cit.>. In addition, these experiments will provide unique constraints on forward particle production at the high LHC collision energy, which are not accessible to the ECN3 experiments. Specifically, FASER and SND@LHC will make it possible to test explanations of the cosmic-ray muon puzzle <cit.> and to study QCD in an otherwise inaccessible regime with x∼ 4m_c^2/s ∼ 10^-7, where novel phenomena such as BFKL dynamics <cit.> and gluon saturation <cit.> are expected, and therefore provide valuable input for astroparticle physics <cit.>, including a direct calibration of the prompt neutrino flux. An extension of the LHC neutrino program with significantly increased rates is envisioned during the HL-LHC in the context of the FPF <cit.>, located approximately 620 m downstream of ATLAS and directly in the LHC's TeV-energy neutrino beam. Three experiments with neutrino-detection capabilities are foreseen: FLArE, FASERν2, and AdvSND. The event rates expected at all three experiments together are shown as blue dashed lines in Figure <ref>. Notably, the FASERν2 emulsion detector in conjunction with the FASER2 spectrometer will have the capacity to identify approximately 830 ν_τ and 430 ν̅_τ separately <cit.>. The FPF would be able to constrain the tau-neutrino magnetic moment to μ_ν_τ < 6.6× 10^-8μ_B <cit.>, as well as measure the tau-neutrino cross section and probe LFU at the percent level. The FPF will also be able to constrain the NC/CC ratio at sub-percent precision and search for sterile-neutrino oscillations <cit.>. In order to compare the reach of future CERN neutrino experiments concerning the measurement of inclusive DIS structure functions, the upper panels of Figure <ref> display the number of reconstructed muon-neutrino events within detector acceptance at SHADOWS NaNu, SHiP SND, and FASERν2@FPF in different bins of x and Q^2. Kinematic requirements of Q^2≥ 3 GeV^2 and W^2≥ 4 GeV^2 are imposed to restrict the comparison to the DIS region. The bottom right panels compare the kinematic coverage in the (x,Q^2) plane of SHADOWS NaNu, SHiP SND, and FASERν2 with each other and in the global context. In the QCD perturbative region, SHADOWS NaNu, SHiP SND, and FASERν2 cover x≥ 0.03, x≥ 0.007, and x≥ 0.003, reaching up to Q^2∼ 40 GeV^2, Q^2∼ 200 GeV^2, and Q^2∼ 2000 GeV^2, respectively.
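These reaches can be understood from elementary DIS kinematics: for a neutrino of energy E_ν scattering on a nucleon of mass m_N, the momentum transfer is bounded by Q^2 ≲ 2 m_N E_ν (reached for x, y → 1), so the accessible Q^2 grows roughly linearly with the beam energy. The short calculation below illustrates this scaling; the typical energies used are indicative values consistent with the text and are not a substitute for the full acceptance simulations:

m_N = 0.938  # nucleon mass in GeV

def q2_max(e_nu):
    """Approximate kinematic limit Q^2_max ~ 2 m_N E_nu (x, y -> 1), in GeV^2."""
    return 2.0 * m_N * e_nu

# Indicative (assumed) typical neutrino energies for the three detectors
for label, e_nu in [("SHADOWS NaNu (~20 GeV)", 20.0),
                    ("SHiP SND (~100 GeV)", 100.0),
                    ("FASERnu2@FPF (~1 TeV)", 1000.0)]:
    print(f"{label}: Q^2_max ~ {q2_max(e_nu):.0f} GeV^2")
# -> ~38, ~188, ~1876 GeV^2, in line with the quoted reaches of
#    ~40, ~200 and ~2000 GeV^2 for the three experiments.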
Figure <ref> indicates that the expected event rates should lead to structure-function measurements with statistical uncertainties at the few-percent level or better, which are hence likely to be ultimately limited by systematic uncertainties. Neutrino DIS measurements of sufficient precision in these regions can be used to inform future global proton and nuclear PDF fits, potentially benefiting searches for BSM physics at the HL-LHC, e.g., via the high-mass Drell-Yan (DY) process <cit.>, and to reduce theory systematics in key SM measurements such as the W-boson mass. The lower right panel of Figure <ref> compares the presented experiments to the world data on hard-scattering processes involving nuclear projectiles or targets. In particular, this comparison displays the coverage of existing measurements of neutrino DIS on nuclear targets (labelled "CC DIS"), as well as the expected coverage of electron-nucleus scattering at the Electron-Ion Collider (EIC) <cit.>. While SHiP SND and SHADOWS NaNu overlap with previous neutrino DIS experiments, FASERν2@FPF covers an uncharted region for CC scattering on nuclear targets and complements the NC measurements to be carried out at the EIC. It should also be recalled that CC and NC measurements provide access to different PDF flavour combinations, with the former in particular being close to those relevant for W^± production at hadron colliders. In view of the large overlap in kinematics of the ECN3 neutrino experiments with existing high-statistics measurements, a significant impact on PDFs is mainly expected for strangeness, where the tagging of charm production in the emulsion detector will play a crucial role, though quantitative estimates exist presently only for SHiP SND <cit.>.
The energy coverage of the ECN3 neutrino experiments is compared to that of the LHC neutrino experiments in Figure <ref>, which displays a comparison of different theoretical calculations <cit.> of the inclusive neutrino-nucleus cross section on a W target in the neutrino energy range from 100 GeV to 10 TeV. Existing accelerator data, shown in Figure <ref>, stop at E_ν∼ 350 GeV. Measurements of neutrino cross sections at the ECN3 energies are sensitive to the low-Q region, relevant also for atmospheric and oscillation experiments. The same comparison is shown in the right panel for an Fe target between 30 and 370 GeV, compared with the NuTeV measurements. In both cases the approximate coverage in E_ν of the ν-ECN3 and LHC neutrino experiments is indicated. As indicated by Figure <ref>, in terms of inclusive measurements the ECN3 experiments overlap with existing measurements, with FASER/FPF probing higher energies. To conclude the discussion, Table <ref> provides a summary of the potential neutrino-physics topics and the scientific reach of the proposed SHADOWS NaNu and SHiP SND subdetectors at ECN3, compared to those of the FPF at the LHC and of other experiments.
§ ACKNOWLEDGMENTS
F.K. acknowledges helpful discussions with David Curtin, Marco Drewes, Torben Ferber, Paddy Fox, Jan Jerhot, Maksym Ovchynnikov and Thomas Schwetz, and support by the Deutsche Forschungsgemeinschaft (DFG) through the Emmy Noether Grant No. KA 4662/1-2 and grant 396021762 – TRR 257. M.G.A. acknowledges support by the Generalitat Valenciana through the plan GenT program (CIDEGENT/2018/014), and by the Spanish Ministerio de Ciencia e Innovación through grants PID2020-114473GBI00 and CNS2022-135595. B.D. acknowledges funding through the European Research Council under grant ERC-2018-StG-802836 (AxScale project). G.S. acknowledges support by the State Agency for Research of the Spanish Ministry of Science and Innovation through the grant PID2022-136510NB-C33. J.J. acknowledges support by the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 860881-HIDDeN. F.Kl.
acknowledges support by the Deutsche Forschungsgemeinschaft under Germany's Excellence Strategy - EXC 2121 Quantum Universe - 390833306.
§ DEFINITION OF ACRONYMS
ACC-CONS: Acceleration Consolidation project
ALARA: As Low As Reasonably Achievable
ALD: Atomic Layer Deposition
ALP: Axion-Like Particle
ASIC: Application Specific Integrated Circuit
BA: Batiment Auxiliaire (Auxiliary surface Building)
BC: Benchmark physics case
BD: Beam dump
BDF: Beam Dump Facility
BIS: Beam Interlock System
BLM: Beam Loss Monitor
BSG: Beam SEM Grid
BSI: Beam SEM Intensity
BSM: Beyond Standard Model
CC: Charged Current
CDHSW: CERN-Dortmund-Heidelberg-Saclay-Warsaw, neutrino detector at CERN West Area
CDS: Comprehensive Design Study
CHARM: CERN-Hamburg-Amsterdam-Rome-Moscow, neutrino detector at CERN West Area
CHARM II: CERN-Hamburg-Amsterdam-Rome-Moscow II, neutrino detector at CERN West Area
CHORUS: neutrino detector at CERN West Area
ChPT: Chiral Perturbation Theory
CL: Confidence Level
CKM: Cabibbo-Kobayashi-Maskawa
CNGS: CERN Neutrinos to Gran Sasso
CP: Charge Parity
CSSR: Cost, Schedule and Scope Review
DIS: Deep-Inelastic Scattering
DY: Drell-Yan
ECAL: Electromagnetic CALorimeter
ECC: Emulsion Cloud Chamber
ECN3: Experimental Cavern North 3
ECN3-TF: PBC ECN3 Beam Delivery Task Force
EFT: Effective Field Theory
EHN: Experimental Hall North
EM: Electromagnetic
EPPSU: European Particle Physics Strategy Update
EYETS: Extended Year-End Technical Stop
FASER: ForwArd Search ExpeRiment at LHC
FCNC: Flavour-Changing Neutral Current
FE: Front End
FIP: Feebly Interacting Particle
FIRIA: Fire-Induced Radiological Integrated Assessment
FLArE: Forward Liquid Argon Experiment at FPF at LHC
FPF: Forward Physics Facility at LHC
FPC: FIPs Physics Centre
FT: Flat-Top
GIM: Glashow–Iliopoulos–Maiani
GN: Grossman-Nir
GTK: Giga Tracker
HL-LHC: High-Luminosity LHC
HI: High Intensity
HNL: Heavy Neutral Lepton
HIKE: High Intensity Kaon Experiment
HSDS: Hidden Sector Decay Search
HVAC: Heating, Ventilation and Air Conditioning
IT: Information Technology
IVB: Intermediate Vector Boson
LD: Long Distance
LDM: Light Dark Matter
LFU: Lepton Flavour Universality
LFV: Lepton Flavour Violation
LHC: Large Hadron Collider
LoI: Letter of Intent
LS: Long Shutdown
LSS: Long Straight Section
MCP: Micro-Channel Plate
MD: Machine Development
MIB: Magnetized Iron Block
MRPC: Multigap Resistive Plate Chambers
NA: North Area
NC: Normal-Conducting / Neutral Current
NaNu: North Area NeUtrino experiment
NA-CONS: North Area Consolidation project
NDA: Non-Designated Area
NSI: Non-Standard Interactions
OPERA: Oscillation Project with Emulsion-tRacking Apparatus
PANDA: antiProton ANnihilation at DArmstadt
PBC: Physics Beyond Colliders
PCB: Printed Circuit Board
PDF: Parton Distribution Function
PDG: Particle Data Group
PPM: Pulse-to-Pulse Mode
PMT: Photomultiplier Tube
PoT: Protons on Target
ppp: particles (protons) per pulse
PRR: Project Readiness Review
PS: Proton Synchrotron
QCD: Quantum Chromo-Dynamics
QED: Quantum Electro-Dynamics
R2E: Radiation-to-Electronics
R&D: Research and Development
RICH: Ring-Imaging CHerenkov
RP: Radiation Protection
SBT: Surrounding walls Background Tagger
SC: Super-Conducting
SciFi: Scintillating Fibre
SEM: Secondary Emission Monitor
SHADOWS: Search for Hidden And Dark Objects With the SPS
SHiP: Search for Hidden Particles
SiPM: Silicon Photo-Multiplier
SM: Standard Model
SND: Scattering and Neutrino Detector
SPS: Super Proton Synchrotron
SPSC: SPS and PS Experiments Committee
TAX: Target Attenuator eXperimental areas
TBI: Target Beam Instrumentation
TBID: Target Beam Instrumentation Downstream
TBIU: Target Beam Instrumentation Upstream
TBSE: Target Beam Stopper Extraction
TCC: Tunnel Caverne Cible = Tunnel Target Cavern
TCSC: Target Collimator Splitter Copper
TCX: Target Collimator mask eXperimental areas
TDC: Tunnel Divider (splitter) Cavern
TDR: Technical Design Report
TED: Target External Dump
TIDVG: Target Internal Dump Vertical Graphite (SPS Internal Dump)
TT: Transfer Tunnel
TZM: Titanium Zirconium-doped Molybdenum alloy
UBT: Upstream vessel wall Background Tagger
WIC: Warm magnets Interlock Controller
WLS: Wavelength Shifting
𝐗_0: Radiation Length
YETS: Year-End Technical Stop
http://arxiv.org/abs/2310.17726v1
{ "authors": [ "C. Ahdida", "G. Arduini", "K. Balazs", "H. Bartosik", "J. Bernhard", "A. Boyarsky", "J. Brod", "M. Brugger", "M. Calviani", "A. Ceccucci", "A. Crivellin", "G. D'Ambrosio", "G. De Lellis", "B. Döbrich", "M. Fraser", "R. Franqueira Ximenes", "A. Golutvin", "M. Gonzalez Alonso", "E. Goudzovski", "J. -L. Grenard", "J. Heeck", "J. Jaeckel", "R. Jacobsson", "Y. Kadi", "F. Kahlhoefer", "F. Kling", "M. Koval", "G. Lanfranchi", "C. Lazzeroni", "F. Mahmoudi", "D. Marzocca", "K. Massri", "M. Moulson", "S. Neshatpour", "J. Osborne", "M. Pospelov", "T. Prebibaj", "T. R. Rabemananjara", "Ch. Rembser", "J. Rojo", "A. Rozanov", "G. Ruggiero", "G. Rumolo", "G. Schnell", "M. Schott", "Y. Soreq", "T. Spadaro", "C. Vallée", "T. Zickler", "J. Zupan" ], "categories": [ "hep-ex", "hep-ph" ], "primary_category": "hep-ex", "published": "20231026183718", "title": "Post-LS3 Experimental Options in ECN3" }
Realizing attractive interacting topological surface fermions: A resonating TI- thin film hybrid platform
Saran Vijayan and Fei Zhou
January 14, 2024
=========================================================================================================
The fundamental computational issues in Bayesian inverse problems (BIPs) governed by partial differential equations (PDEs) stem from the requirement of repeated forward model evaluations. A popular strategy to reduce this cost is to replace expensive model simulations by computationally efficient approximations using operator learning, motivated by recent progress in deep learning. However, using the approximate model directly may introduce a modeling error, exacerbating the already ill-posed nature of inverse problems. Thus, balancing accuracy and efficiency is essential for the effective implementation of such approaches. To this end, we develop an adaptive operator learning framework that can reduce the modeling error gradually by forcing the surrogate to be accurate in local areas. This is accomplished by fine-tuning the pre-trained approximate model during the inversion process with adaptive points selected by a greedy algorithm, which requires only a few forward model evaluations. To validate our approach, we adopt DeepOnet to construct the surrogate and use unscented Kalman inversion (UKI) to approximate the solution of BIPs. Furthermore, we present a rigorous convergence guarantee in the linear case using the framework of UKI. We test the approach on several benchmarks, including the Darcy flow, the heat source inversion problem, and a reaction-diffusion problem. Numerical results demonstrate that our method can significantly reduce computational costs while maintaining inversion accuracy.
Operator learning, DeepOnet, Bayesian inverse problems, Unscented Kalman inversion.
§ INTRODUCTION
Many real-world phenomena are governed by partial differential equations (PDEs), where the states of the system are described by the PDE solutions. The properties of these systems are characterized by model parameters, such as the permeability and thermal conductivity, which cannot be directly determined. Instead, the parameters can be inferred from discrete and noisy observations of the states, which is known as an inverse problem. Because inverse problems are in general ill-posed, most methods for solving them are based on either regularization theory or Bayesian inference. By imposing a prior distribution on the parameters, the Bayesian approach provides a more flexible framework. The solutions to Bayesian inverse problems, i.e., the posterior distributions, can then be obtained by conditioning on the observations using Bayes' formula. In some cases, the model parameters are required to be functions, leading to infinite-dimensional Bayesian inverse problems.
Such cases occur when the model parameters are spatially varying fields with uncertain spatial structure, as found in many real-world applications across engineering, the sciences, and medicine <cit.>. The formulation of infinite-dimensional Bayesian inverse problems presents a number of challenges, including well-posedness, which must be guaranteed by a proper choice of prior, as well as the convergence of the solutions under the chosen discretization scheme. Beyond that, dealing with the resulting discrete, finite-dimensional posterior distributions can be difficult due to expensive-to-solve forward models and high-dimensional parameter spaces. As a result, direct sampling methods such as MCMC-based methods <cit.> suffer from unaffordable computational costs. Common strategies to deal with these issues include (i) model reduction methods <cit.>, which exploit intrinsic low dimensionality, (ii) direct posterior approximation methods, such as the Laplace approximation and variational inference <cit.>, and (iii) surrogate modeling <cit.>, which replaces the expensive model with a cheap substitute. Among the methods listed above, surrogate modeling emerges as the most promising approach for efficiently accelerating the sampling of posterior distributions. Deep learning methods, specifically deep neural networks (DNNs), have recently become the most popular surrogate models in engineering and science due to their power in approximating high-dimensional problems <cit.>. In general, a DNN surrogate provides a quick-to-evaluate approximation of the parameter-to-observation map <cit.>. Numerical experiments, such as those described in <cit.>, demonstrated that with sufficiently large training datasets, highly accurate approximations can be trained. Traditional deep learning methods, on the other hand, frequently require a large number of training points that are not always available. Furthermore, whenever the measurement operator changes, the surrogate must be retrained. Physics-informed neural networks (PINNs) <cit.> can address this issue by incorporating the physical laws into the loss function and learning the parameter-to-state map <cit.>. Because of this, they can be applied as surrogates for a class of Bayesian inverse problems whose models are governed by the same PDEs but have different types of observations and noise models, further reducing the cost of surrogate construction. However, PINNs have some limitations <cit.>, such as hyperparameter sensitivity and the potential for training instability due to the hybrid nature of their loss function. Several solutions have been proposed to address these issues <cit.>. Furthermore, due to the requirement of a large collocation dataset, PINNs remain ineffective for infinite-dimensional Bayesian inverse problems <cit.>. Operator neural networks, such as FNO <cit.> and DeepOnet <cit.>, approximate maps between infinite-dimensional function spaces and are thus able to model complex systems in high-dimensional spaces. They are therefore promising surrogates, as described in <cit.>. However, using approximate models directly may introduce a discrepancy, or modeling error, exacerbating an already ill-posed problem and leading to a worse solution. In this paper, we propose a framework that can adaptively reduce the model error by forcing the approximate model to be locally accurate for posterior characterization during the inversion process.
This is achieved by first using neural network representations of the parameter-to-state map between function spaces, and then retraining this initial model with points chosen adaptively using a greedy algorithm. The inversion process can then continue with the fine-tuned approximate model, which has a lower local model error. This procedure can be repeated as many times as necessary until a stopping criterion is met. For the detailed implementation, we use DeepOnet <cit.> to approximate the parameter-to-state map and the unscented Kalman inversion <cit.> to estimate the posterior distribution. Moreover, we show that in the linear case convergence can be obtained if the surrogate is accurate throughout the space, a result that can also be extended to non-linear cases with locally accurate approximate models. To verify the effectiveness of our method, we test several benchmarks including the Darcy flow, a heat source inversion problem and a reaction-diffusion problem. Our main contributions can be summarized as follows.
* We propose a framework for adaptively reducing the surrogate's model error. To maintain local accuracy, a greedy algorithm is proposed for selecting the adaptive samples used to fine-tune the pre-trained model.
* We adopt DeepOnet to approximate the parameter-to-state map and combine it with UKI to accelerate infinite-dimensional Bayesian inverse problems. We demonstrate that this approach not only maintains inversion accuracy but also saves a significant amount of computational cost.
* We show that in the linear case, the mean vector and the covariance matrix obtained by UKI with an approximate model converge to those obtained with the full-order model. The results can also be verified in non-linear cases with a locally accurate surrogate.
* We present several benchmark tests, including the Darcy flow, a heat source inversion problem and a reaction-diffusion problem, to verify the effectiveness of our approach.
The remainder of this paper is organized as follows. Section <ref> introduces infinite-dimensional Bayesian inverse problems as well as the basic concepts of DeepOnet. Our adaptive framework for model error reduction, equipped with the greedy algorithm and the unscented Kalman inversion, is presented in Section <ref>. To confirm the efficiency of our algorithm, several benchmarks are tested in Section <ref>. The conclusion is given in Section <ref>.
§ BACKGROUND
In this section, we first give a brief review of infinite-dimensional Bayesian inverse problems (BIPs). Then we introduce the basic concepts of DeepOnet.
§.§ Infinite-dimensional Bayesian inverse problems
Consider a steady physical system described by the following PDEs:
𝒜(u(𝐱); m(𝐱)) = 0, 𝐱∈Ω,
ℬ(u(𝐱)) = 0, 𝐱∈∂Ω,
where 𝒜 denotes a general partial differential operator defined in the domain Ω⊂ℝ^d, ℬ is the boundary operator on the boundary ∂Ω, m∈ℳ represents the unknown parameter field and u∈𝒰 represents the state field of the system. Let y∈ℝ^N_y denote a set of discrete and noisy observations at specific locations in Ω. Suppose the state u and y are connected through an observation system 𝒪:𝒰→ℝ^N_y,
y = 𝒪(u) + η,
where η∼𝒩(0, Σ_η) is a Gaussian with mean zero and covariance matrix Σ_η, which models the noise in the observations. Combining the PDE model (<ref>) and the observation system (<ref>) defines the parameter-to-observation map 𝒢 = 𝒪∘ℱ:ℳ→ℝ^N_y, i.e.,
y = 𝒢(m) + η.
Here ℱ:ℳ→𝒰 is the solution operator, or the parameter-to-state map, of the PDE model (<ref>).
The following least-squares functional plays an important role in such inverse problems:
Φ(m;y) = 1/2‖y - 𝒢(m)‖^2_Σ_η,
where ‖·‖_Σ_η = ‖Σ_η^-1/2·‖ denotes the weighted Euclidean norm in ℝ^N_y. In cases where the inverse problem is ill-posed, optimizing Φ over ℳ is not a well-behaved problem, and some type of regularization is necessary. Bayesian inference is another method to consider. In the Bayesian framework, (m, y) is viewed as a jointly varying random variable in ℳ×ℝ^N_y. Given the prior ν_0 on m, the solution to the inverse problem is the distribution of m conditioned on the data y, i.e., the posterior ν given by an infinite-dimensional version of Bayes' formula as
ν(dm) = 1/Z(y) exp(-Φ(m;y)) ν_0(dm),
where Z(y) is the model evidence defined as Z(y) := ∫_ℳ exp(-Φ(m;y)) ν_0(dm). In general, the main challenges of infinite-dimensional Bayesian inverse problems lie in the well-posedness of the problem and in the numerical methodology. To guarantee well-posedness, the prior is frequently taken to be a Gaussian random field, which guarantees the existence of the posterior distribution <cit.>. To obtain finite-dimensional posterior distributions, one can use Karhunen-Loeve (KL) expansions or direct spatial discretization methods. The posterior distribution can then be approximated using numerical techniques like Markov chain Monte Carlo (MCMC) <cit.> and variational inference (VI) <cit.>. It should be emphasized that each likelihood evaluation requires an evaluation of the forward model 𝒢 (or ℱ). The computation of the forward model can be very complicated and expensive in some real-world scenarios, making these techniques computationally challenging. As a result, it is critical to replace the forward model with a low-cost surrogate model. In this paper, we apply deep operator learning to construct the surrogate in order to substantially reduce the computational time.
§.§ DeepOnet as surrogates
In this section, we employ the neural operator DeepOnet as the surrogate, which is fast to evaluate and can lead to a speed-up in the posterior computation. The basic idea is to approximate the true forward operator ℱ with a neural network ℱ_θ:ℳ→𝒰, where ℳ, 𝒰 are the spaces defined before and θ are the parameters of the neural network. This neural operator can be interpreted as a combination of an encoder ℰ, an approximator 𝒜 and a reconstructor ℛ <cit.>, as depicted in Figs. <ref> and <ref>, i.e., ℱ_θ := ℛ∘𝒜∘ℰ. Here, the encoder ℰ maps m into discrete values {m(x_i)}_i=1^N_m in ℝ^N_m at a fixed set of sensors {x_i}_i=1^N_m ⊂ Ω, i.e.,
ℰ: ℳ→ℝ^N_m, ℰ(m) = (m(x_1),⋯,m(x_N_m)).
The encoded data are then mapped by the approximator 𝒜:ℝ^N_m→ℝ^p, a deep neural network. Given the encoder and approximator, we can define the branch net β: ℳ→ℝ^p as the composition β(m) = 𝒜∘ℰ(m). The reconstructor (decoder) ℛ:ℝ^p→𝒰 maps the outputs {β_i}_i=1^p to 𝒰 in the form
ℛ(β) = ∑_i=1^p β_i t_i(x), x∈Ω,
where the t_i are the outputs of the trunk net, as depicted in Fig. <ref>. Combining the branch net and trunk net, the operator network approximation ℱ_θ(m)(x) is obtained by finding the optimal θ that minimizes the following loss function:
θ^* = argmin_θ∈Θ ℒ(θ) = ∫_ℳ∫_Ω |ℱ(m)(x)-ℱ_θ(m)(x)|^2 dx dν_0(m),
where Θ is the parameter space. It should be noted that the loss function cannot be computed exactly and is usually approximated by Monte Carlo simulation, sampling both the space ℳ and the input sample space Ω.
That is, we take N_prior i.i.d. samples m_1, m_2,⋯, m_N_prior∼ν_0, each evaluated at N_x points x_j^1,⋯, x_j^N_x, leading to the following empirical loss:
ℒ_N_prior,N_x(θ) := 1/(N_prior N_x) ∑_j=1^N_prior∑_k=1^N_x |ℱ(m_j)(x_j^k)-ℱ_θ(m_j)(x_j^k)|^2.
After the operator network has been trained, an approximation of the forward model 𝒢 can be constructed by adding the observation operator 𝒪, i.e., 𝒢̃ = 𝒪∘ℱ_θ. We can then obtain the surrogate posterior
ν̃(dm) ∝ exp(-Φ̃(m;y)) ν_0(dm),
where ν_0 is again the prior of m and Φ̃(m;y) is the approximate least-squares data misfit defined as Φ̃(m;y) := 1/2‖y - 𝒢̃(m)‖^2_Σ_η. The main advantage of the surrogate method is that once an accurate approximation is obtained, it can be evaluated many times without resorting to additional simulations of the full-order forward model. However, using approximate models directly may introduce a discrepancy, or modeling error, exacerbating an already ill-posed problem and leading to a worse solution <cit.>. Specifically, we can define an ϵ-feasible set
ℳ(ϵ) := {m∈ℳ | ‖𝒢(m)-𝒢̃(m)‖≤ϵ},
and the associated posterior measure ν(ℳ(ϵ)) as ν(ℳ(ϵ)) = ∫_ℳ(ϵ)ν(dm). Then, the complement of the ϵ-feasible set is given by ℳ^(ϵ) = ℳ∖ℳ(ϵ), which has posterior measure ν(ℳ^(ϵ)) = 1-ν(ℳ(ϵ)). We can obtain an error bound between ν and ν̃ in the Kullback-Leibler distance: suppose we have the full posterior distribution ν and its approximation ν̃ induced by the surrogate 𝒢̃; then, for a given ϵ, there exist constants K_1>0 and K_2>0 such that
D_KL(ν‖ν̃) ≤ (K_1 ϵ + K_2 ν(ℳ^(ϵ)))^2.
It is important to note that in order for the approximate posterior ν̃ to converge to the exact posterior ν, the posterior measure ν(ℳ^(ϵ)) must tend to zero. One way to achieve this goal is to train the surrogate model 𝒢̃ sufficiently well over the entire input space so that the model error is small enough. However, a significant amount of data and training time is frequently required to train the surrogate model to this accuracy. Indeed, the surrogate model only needs to be accurate within the region of high posterior density, not over the entire prior space <cit.>. To maintain accurate results while lowering the computational cost, an adaptive algorithm should be developed. In the following section, we describe a framework for adaptively reducing the modeling error of the surrogate. Typically, we start by building a surrogate offline with DeepOnet. We then employ a greedy algorithm to adaptively update the training dataset in order to reduce the model error, after which the pre-trained surrogate can be fine-tuned online.
§ ADAPTIVE OPERATOR LEARNING FRAMEWORK
§.§ Adaptive model error reduction
A standard DeepOnet requires a large set of training points, which are obtained by solving the time-consuming forward model. This is not practical and will also introduce model error in most cases. To address this challenge, one possible way is to use a locally accurate surrogate to replace the forward model and then explore the approximate posterior distribution induced by this surrogate. In detail, suppose we have a collection of model evaluations 𝒟 = {(m_i, ℱ(m_i))}. We can then use these points to train an operator network ℱ_θ and obtain a surrogate 𝒢̃ = 𝒪∘ℱ_θ, which can be evaluated repeatedly at negligible cost, making it ideal for drawing samples from posterior distributions. It should be noted that the training dataset influences the accuracy of the approximate model. Given sufficient training points, the surrogate model 𝒢̃ will be accurate over the whole prior space.
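For concreteness, the following PyTorch-style sketch illustrates the branch-trunk construction ℱ_θ(m)(x) = ∑_i β_i(m) t_i(x) and a Monte Carlo approximation of the training loss above. Layer sizes follow the description given later in the experiments (five hidden layers of width 100 with tanh), but the snippet is illustrative rather than the exact implementation used in this work.

import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal DeepOnet: F_theta(m)(x) = sum_i branch_i(m) * trunk_i(x)."""
    def __init__(self, n_sensors, x_dim, p=100, width=100, depth=5):
        super().__init__()
        def mlp(d_in, d_out):
            layers, d = [], d_in
            for _ in range(depth):
                layers += [nn.Linear(d, width), nn.Tanh()]
                d = width
            layers += [nn.Linear(d, d_out)]
            return nn.Sequential(*layers)
        self.branch = mlp(n_sensors, p)   # encodes m(x_1), ..., m(x_{N_m})
        self.trunk = mlp(x_dim, p)        # evaluates the basis t_i(x)

    def forward(self, m_sensors, x):
        # m_sensors: (batch, n_sensors); x: (batch, n_points, x_dim)
        b = self.branch(m_sensors)                  # (batch, p)
        t = self.trunk(x)                           # (batch, n_points, p)
        return torch.einsum("bp,bnp->bn", b, t)     # (batch, n_points)

def empirical_loss(model, m_sensors, x, u_true):
    """Monte Carlo approximation of the L^2 operator-learning loss."""
    return torch.mean((model(m_sensors, x) - u_true) ** 2)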
Such exhaustive training over the prior is not practical, however, and forgoes the computational advantage of the surrogate <cit.>. Moreover, how to choose the training points remains a challenge, as one needs to ensure the accuracy of the surrogate in the high posterior density region. However, the high density region of the posterior distribution is unknown until the observations are given. Therefore, we need to design an algorithm that can modify the training dataset during the posterior computation to fine-tune the pre-trained surrogate. This refined surrogate can maintain local accuracy while saving computational cost. Our approach, depicted in Fig. <ref>, summarizes these considerations. The whole procedure can be divided into the following four steps.
* Step 1 (Offline): Build a surrogate ℱ_θ using the initial training dataset 𝒟 with a relatively small sample size.
* Step 2 (Posterior computation): Use a numerical technique to approximate, or draw samples from, the approximate posterior ν̃_t induced by 𝒢̃_t = 𝒪∘ℱ_θ.
* Step 3 (Refinement): Choose a criterion to determine whether refinement is needed. If it is, select new points from ν̃_t to enlarge the training dataset 𝒟 and refine the surrogate ℱ_θ.
* Step 4: Repeat the above procedure until the stopping criterion is satisfied.
Once the initial model has been trained, a rough inversion result can be obtained for different inversion tasks at negligible cost by using this pre-trained model. The operator network is then fine-tuned online using our adaptive framework to improve the inversion results. The remaining challenges are to define the stopping criterion and the sampling technique for choosing adaptive points from ν̃_t. To construct the stopping criterion, we use the least-squares functional Φ(m;y) as the error term because it represents the relative distance between the observations and the forward predictions. In detail, let
e_t = Φ(r_t;y) = 1/2‖y - 𝒢(r_t)‖_Σ_η^2.
Given a predefined tolerance ϵ, if (e_t - e_t+1)/e_t < ϵ, the whole procedure is stopped, as the data-fitting term no longer decreases significantly. As for the sampling technique, in order to maintain efficiency, we need to choose a small set of samples based on the current posterior ν̃_t. In detail, we first generate a large set of samples from ν̃_t and then choose a small number of important samples based on some criterion. To this end, a natural idea is to combine the numerical technique used in the posterior computation with a greedy algorithm to adaptively choose the most important samples from ν̃_t. Suppose the current approximate posterior distribution is ν̃_t; we need to generate samples from it to retrain the pre-trained model ℱ_θ. To select the small set of important parameter points, we first draw a large set of K samples Γ = {m_1,m_2,⋯, m_K} from ν̃_t to cover the parameter space. A subset γ^Q = {m_1,m_2,⋯, m_Q}⊂Γ of "important" points is then selected from Γ using a greedy algorithm. That is, we first select the current mean vector r_t as the anchor point, which in general can be estimated by the sample mean. Afterwards, we want each newly selected point m_j to be close to r_t while, at the same time, its surrogate solution has the largest distance from the space spanned by {𝒢̃_t(m_i)}_i=1^j-1, i.e.,
m_j = arg max_m∈Γ∖γ^j-1( d(𝒢̃_t(m), 𝒢̃_t(γ^j-1)) - d(m, r_t) ), γ^j = γ^j-1∪{m_j},
where d(x, y) is the distance between x and y, defined by the l_2 norm, and 𝒢̃_t(γ^j-1) denotes the space spanned by {𝒢̃_t(m_i)}_i=1^j-1. The purpose of this strategy is to keep the adaptive points close to the mean vector. Specifically, the selected adaptive points capture the varying features of the surrogate solution space while remaining in close proximity to the mean vector r_t. These points then allow the pre-trained model to be fine-tuned to be more accurate near r_t and to show better generalization ability.
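A minimal NumPy sketch of this greedy selection is given below. Here the distance to the span of the already-selected surrogate outputs is computed via a least-squares projection, which is one possible reading of d(·,·) above; the array names are illustrative and the original implementation may differ in detail.

import numpy as np

def greedy_select(candidates, surrogate_outputs, r_t, Q):
    """Select Q 'important' points from a K x N_m array of candidates.

    surrogate_outputs: K x N_y array holding G_t(m_k) for each candidate;
    r_t: current mean vector (length N_m). Implements the criterion
    argmax_m [ d(G_t(m), span{G_t(selected)}) - d(m, r_t) ] greedily.
    """
    selected, remaining = [], list(range(len(candidates)))
    for _ in range(Q):
        if selected:
            basis = surrogate_outputs[selected].T                 # N_y x j
            coef, *_ = np.linalg.lstsq(basis, surrogate_outputs[remaining].T, rcond=None)
            resid = surrogate_outputs[remaining].T - basis @ coef
            d_span = np.linalg.norm(resid, axis=0)                # distance to the span
        else:
            # with no points selected yet, use the plain norm of the outputs
            d_span = np.linalg.norm(surrogate_outputs[remaining], axis=1)
        d_mean = np.linalg.norm(candidates[remaining] - r_t, axis=1)
        best = remaining[int(np.argmax(d_span - d_mean))]
        selected.append(best)
        remaining.remove(best)
    return candidates[selected]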
The remaining requirement is to sample from the approximate posterior distribution and to approximate its mean vector. To this end, traditional MCMC-based sampling methods could be applied. However, the slow convergence of MCMC methods, which typically require 𝒪(10^5) iterations, has led to widespread criticism in practice, even though they possess attractive asymptotic theoretical properties. Particle-based approaches <cit.>, particularly Kalman-based Bayesian inversion techniques <cit.>, have been proposed recently to bridge the gap between MCMC and VI techniques. We adopt the unscented Kalman inversion (UKI) method <cit.> in this paper because it needs neither early stopping nor empirical variance inflation, and it converges within only 𝒪(10) iterations. Notably, our framework is easily extensible to numerous variants of popular particle-based methods; however, this is outside the scope of this work.
§.§ Unscented Kalman Inversion
In this section, we give a brief review of the UKI algorithm discussed in <cit.>. The UKI is derived within the Bayesian framework and approximates the posterior distribution of the random variable m|y by Gaussian approximations via its ensemble properties. To this end, we consider the following stochastic dynamical system:
Evolution: m_n+1 = r_0 + α(m_n - r_0) + ω_n+1, ω_n+1∼𝒩(0,Σ_ω),
Observation: y_n+1 = 𝒢(m_n+1) + η_n+1, η_n+1∼𝒩(0,Σ_η),
where m_n+1 is the unknown discrete parameter vector and y_n+1 is the observation vector; the artificial evolution error ω_n+1 and observation error η_n+1 are mutually independent, zero-mean Gaussian sequences with covariances Σ_ω and Σ_η, respectively. Here α∈(0,1] is the regularization parameter and r_0 is an arbitrary initial vector. Let Y_n := {y_1, y_2,⋯, y_n} denote the observation set at time n. In order to approximate the conditional distribution ν_n of m_n|Y_n, the iterative algorithm starts from the prior ν_0 and updates ν_n through the prediction and analysis steps ν_n→ν̂_n+1 and ν̂_n+1→ν_n+1, where ν̂_n+1 is the distribution of m_n+1|Y_n. In the prediction step, we assume that ν_n = 𝒩(r_n, C_n); then, under Eq. (<ref>), ν̂_n+1 is also Gaussian with mean and covariance
r̂_n+1 = 𝔼[m_n+1|Y_n] = α r_n + (1-α)r_0, Ĉ_n+1 = Cov[m_n+1|Y_n] = α^2C_n + Σ_ω.
In the analysis step, we assume that the joint distribution of {m_n+1, y_n+1}|Y_n can be approximated by a Gaussian distribution
𝒩([[ r̂_n+1; ŷ_n+1 ]], [[ Ĉ_n+1 Ĉ^my_n+1; (Ĉ_n+1^my)^T Ĉ_n+1^yy ]]),
where
ŷ_n+1 = 𝔼[𝒢(m_n+1)|Y_n], Ĉ_n+1^my = Cov[m_n+1,𝒢(m_n+1)|Y_n], Ĉ_n+1^yy = Cov[𝒢(m_n+1)|Y_n] + Σ_η.
Conditioning the Gaussian in Eq. (<ref>) to find m_n+1|{Y_n, y_n+1} = m_n+1|Y_n+1 gives the following expressions for the mean r_n+1 and covariance C_n+1 of the approximation to ν_n+1:
r_n+1 = r̂_n+1 + Ĉ_n+1^my(Ĉ_n+1^yy)^-1(y_n+1 - ŷ_n+1),
C_n+1 = Ĉ_n+1 - Ĉ_n+1^my(Ĉ_n+1^yy)^-1(Ĉ_n+1^my)^T.
By assuming all observations are identical to y (i.e., y_n+1 ≡ y), Eqs. (<ref>)-(<ref>) define a conceptual algorithm that uses Gaussian approximations to solve BIPs. To evaluate the integrals appearing in Eq.
(<ref>), UKI employs the unscented transform described below. For a Gaussian random variable m∼𝒩(r, C) in ℝ^N_m, 2N_m + 1 symmetric σ-points are chosen deterministically:
m^0 = r, m^j = r + c_j[√(C)]_j, m^j+N_m = r - c_j[√(C)]_j, (1≤ j≤ N_m),
where [√(C)]_j is the jth column of the Cholesky factor of C. The quadrature rule approximates the mean and covariance of transformed variables 𝒢_i(m) as follows:
𝔼[𝒢_i(m)] ≈ 𝒢_i(m^0) = 𝒢_i(r),
Cov[𝒢_1(m),𝒢_2(m)] ≈ ∑_j=1^2N_m W_j^c (𝒢_1(m^j)-𝔼𝒢_1(m))(𝒢_2(m^j)-𝔼𝒢_2(m))^T.
Here the constant weights are
c_1 = c_2 = ⋯ = c_N_m = √(N_m+λ), W_1^c = W_2^c = ⋯ = W_2N_m^c = 1/(2(N_m+λ)),
λ = a^2(N_m +κ) - N_m, κ = 0, a = min{√(4/(N_m+κ)), 1}.
Applying these quadrature rules, we obtain the UKI algorithm summarized in Algorithm <ref>. UKI is a derivative-free algorithm that applies Gaussian approximations iteratively to transport a set of particles so as to approximate the given distributions. As a result, it only needs 2N_m+1 forward evaluations per iteration, making it simple to implement and inexpensive to compute.
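The sketch below transcribes one UKI iteration (prediction step, sigma points, analysis step) following the equations above; it is a schematic NumPy version rather than the exact implementation, and `forward` stands for whichever forward map (full model or surrogate) is in use.

import numpy as np

def uki_step(r, C, forward, y, r0, Sigma_eta, Sigma_omega, alpha=1.0):
    """One UKI iteration: Gaussian prediction + unscented analysis update."""
    N_m = r.size
    lam = (min(np.sqrt(4.0 / N_m), 1.0) ** 2) * N_m - N_m   # kappa = 0
    # Prediction step
    r_hat = alpha * r + (1.0 - alpha) * r0
    C_hat = alpha**2 * C + Sigma_omega
    # Symmetric sigma points around r_hat
    L = np.linalg.cholesky(C_hat)
    c = np.sqrt(N_m + lam)
    pts = [r_hat] + [r_hat + c * L[:, j] for j in range(N_m)] \
                  + [r_hat - c * L[:, j] for j in range(N_m)]
    G = np.array([forward(p) for p in pts])                  # (2N_m+1, N_y)
    w = 1.0 / (2.0 * (N_m + lam))
    y_hat = G[0]
    dG = G[1:] - y_hat
    dm = np.array(pts[1:]) - r_hat
    C_my = w * dm.T @ dG
    C_yy = w * dG.T @ dG + Sigma_eta
    # Analysis step (Kalman-type update)
    gain = C_my @ np.linalg.inv(C_yy)
    r_new = r_hat + gain @ (y - y_hat)
    C_new = C_hat - gain @ C_my.T
    return r_new, C_new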
𝒪(10^2)-𝒪(10^3)), which yields a significant reduction in computational cost, since Q determines the number of forward evaluations per refinement. Second, the total number of adaptive retraining cycles I_max is determined by the inversion task, which we further subdivide into in-distribution data (IDD) and out-of-distribution data (OOD) cases. IDD typically refers to a ground truth located in the high-density region of the prior distribution, whereas OOD refers to a ground truth located far from the high-density region of the prior. For the IDD case, the original pre-trained surrogate is already accurate in nearly all of the high-probability region; as a result, our framework converges more quickly. In the OOD case, our framework requires a larger number of retraining cycles in order to reach the high-density region of the posterior distribution. However, I_max remains small for both types of inversion task. Consequently, our method can balance accuracy and efficiency simultaneously and has the potential to be applied to dynamical inversion tasks. In other words, once the initial surrogate is trained, we can use our adaptive framework to update the estimate at a much lower computational cost.

§.§ Convergence analysis under the linear case

Recall that the UKI uses its ensemble to approximate the posterior distribution with a Gaussian. Specifically, in the linear case the sequence in Eq. (<ref>) obtained with the full-order model 𝒢 converges, under certain mild conditions, to the equilibrium points of the following equations <cit.>:
C_∞^-1 = 𝒢^TΣ_η^-1𝒢 + (α^2C_∞+Σ_ω)^-1,
C_∞^-1r_∞ = 𝒢^TΣ_η^-1y + (α^2C_∞+Σ_ω)^-1α r_∞.
We can in fact show that, in the linear case, the mean vector and covariance matrix obtained by our approach are close to those obtained with the true forward model whenever the surrogate 𝒢_θ is close to 𝒢. Consider the setting where Range(𝒢_θ^T) = ℝ^N_m and 𝒢_θ is linear. Using 𝒢_θ as a surrogate, the corresponding sequence r̃_n, C̃_n in Eq. (<ref>) then converges to the solution of
C̃_∞^-1 = 𝒢_θ^TΣ_η^-1𝒢_θ + (α^2C̃_∞+Σ_ω)^-1,
C̃_∞^-1r̃_∞ = 𝒢_θ^TΣ_η^-1y + (α^2C̃_∞+Σ_ω)^-1αr̃_∞.
In the following, we demonstrate that if the surrogate 𝒢_θ is close to the true forward model 𝒢, then r̃_∞, C̃_∞ are close to the true limits as well. We shall need the following assumptions.

Suppose that, for any ϵ > 0, the linear neural operator 𝒢_θ:ℝ^N_m→ℝ^N_y can be trained sufficiently well that ‖𝒢_θ - 𝒢‖_2 < ϵ.

Suppose the forward map 𝒢 is bounded, that is, ‖𝒢‖_2 < H, where H is a constant.

Suppose the matrix 𝒢^TΣ_η^-1𝒢≻ 0[We use the notation ≻ to indicate that the matrix is symmetric positive definite] and is bounded from below as ‖𝒢^TΣ_η^-1𝒢‖_2 > C_1, where C_1 is a positive constant.

We can then obtain the following lemma. Under Assumptions <ref>-<ref>, 𝒢_θ^TΣ_η^-1𝒢_θ is also bounded from below as ‖𝒢_θ^TΣ_η^-1𝒢_θ‖_2 > C_2, where C_2 is a constant depending on C_1. We first consider
𝒢_θ^TΣ_η^-1𝒢_θ - 𝒢^TΣ_η^-1𝒢 = 𝒢_θ^TΣ_η^-1𝒢_θ - 𝒢_θ^TΣ_η^-1𝒢 + 𝒢_θ^TΣ_η^-1𝒢 - 𝒢^TΣ_η^-1𝒢 = 𝒢_θ^TΣ_η^-1(𝒢_θ - 𝒢) + (𝒢_θ - 𝒢)^TΣ_η^-1𝒢.
Combining Assumptions <ref> and <ref>, this leads to
‖𝒢_θ^TΣ_η^-1𝒢_θ - 𝒢^TΣ_η^-1𝒢‖_2 ≤ ‖𝒢_θ - 𝒢‖_2 ‖Σ_η^-1‖_2 (‖𝒢‖_2 + ‖𝒢_θ‖_2) ≤ 2ϵ H‖Σ_η^-1‖_2.
Then, we have
‖𝒢_θ^TΣ_η^-1𝒢_θ‖_2 = ‖𝒢_θ^TΣ_η^-1𝒢_θ - 𝒢^TΣ_η^-1𝒢 + 𝒢^TΣ_η^-1𝒢‖_2 ≥ ‖𝒢^TΣ_η^-1𝒢‖_2 - ‖𝒢_θ^TΣ_η^-1𝒢_θ - 𝒢^TΣ_η^-1𝒢‖_2 ≥ C_1 - 2ϵ H‖Σ_η^-1‖_2 ≥ C_2.
Note that these assumptions are reasonable and can be found in many references <cit.>. We now state the main theorem based on these assumptions. Under Assumptions <ref>-<ref>, suppose Range(𝒢^T) = Range(𝒢_θ^T) = ℝ^N_m and Σ_ω≻ 0, Σ_η≻ 0.
Then the sequence r̃_n, C̃_n^-1 obtained with the surrogate model in Eq. (<ref>) converges to r̃_∞, C̃_∞^-1 in Eq. (<ref>), and we have the following error estimates:
‖C̃_∞^-1 - C_∞^-1‖_2 ≤ 2ϵ HH_η/(1-β),
‖r̃_∞ - r_∞‖_2 ≤ (K_1H_ηH_y/C_1)(1 + 2(1 + αβ) K_2H_η H^2/((1-β)C_2))ϵ,
where β, C_1, C_2, K_1, K_2, H_η, H_y, H are positive bounded constants. The proof can be found in Appendix A. In order to meet the requirements of Theorem <ref>, it is possible to make the neural operator 𝒢_θ linear by dropping the nonlinear activation functions in the branch net and keeping the activation functions in the trunk net.

§ NUMERICAL EXPERIMENTS

In this section, we provide several numerical examples to demonstrate the effectiveness and accuracy of the adaptive operator learning approach for solving inverse problems. To present the results clearly, we compare the UKI inversion results obtained with DeepOnet (referred to as DeepOnet-UKI) against those obtained with a conventional FEM solver (referred to as FEM-UKI). Additionally, there are two variants of the DeepOnet-UKI method, DeepOnet-UKI-Direct and DeepOnet-UKI-Adaptive, depending on whether adaptive refinement is applied. In all of our numerical tests, the branch and trunk nets of DeepOnet are fully connected neural networks with five hidden layers and one hundred neurons in each layer, with the tanh activation function. DeepOnet is trained offline with 1× 10^5 iterations and N_prior = 1000 prior samples from the Gaussian random field. Unless otherwise specified, we set the maximum retraining number to I_max = 10 and the tolerance to ϵ = 0.01. In order to assess the efficacy of our adaptive model in handling varying observations, we apply Gaussian random noise of 1%, 5%, and 10% to the observations. This can be expressed as y_obs = y_ref + δ⊙ξ, ξ∼𝒩(0, 1), where y_ref = 𝒢(m_ref), δ = 1%·y_ref, 5%·y_ref, or 10%·y_ref, and ⊙ denotes element-wise multiplication. In UKI, the regularization parameter is α=0.5 for noise levels 0.05 and 0.1 and α=1 for noise level 0.01. The starting vector for the UKI is chosen at random from 𝒩(0,I). The selection of the other hyper-parameters follows <cit.>. For all numerical examples, we set Ω =[0,1]^2. The maximum number of UKI iterations in a retraining cycle is 20 for all three methods. After that, we choose Q = 50 adaptive samples for noise level 0.01 and Q = 20 adaptive samples for noise levels 0.05 and 0.1, respectively, from 2000 candidate samples using the greedy algorithm. To measure the accuracy of the numerical approximation with respect to the exact solution, we use the relative error
err = ‖m - m_ref‖_2/‖m_ref‖_2,
where m and m_ref are the numerical and exact solutions, respectively. Furthermore, in order to quantify the model error, we define the relative model error as
Err = ‖𝒢_θ(m) - 𝒢(m)‖_2/‖𝒢(m)‖_2.

§.§ Example 1: Darcy flow

In the first example, we consider the following Darcy flow problem:
-∇· (exp(m(x))∇ u(x)) = f(x), x∈Ω,
u(x) = 0, x∈∂Ω.
Here, the source function f(x) is defined as
f(x_1, x_2) =
  1000, 0≤ x_2≤ 4/6,
  2000, 4/6< x_2≤ 5/6,
  3000, 5/6< x_2≤ 1.
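For concreteness, the experimental protocol just described (the piecewise-constant source term, the noisy observations, and the two error metrics) can be summarized in a short sketch. The names G, G_theta, and m_ref below stand for the high-fidelity forward solver, the DeepOnet surrogate, and the reference field; they are placeholders rather than identifiers from our implementation.

```python
import numpy as np

def source_term(x1, x2):
    """Piecewise-constant source f(x1, x2) of the Darcy example."""
    return np.where(x2 <= 4/6, 1000.0, np.where(x2 <= 5/6, 2000.0, 3000.0))

def make_noisy_observation(y_ref, noise_level, rng):
    """y_obs = y_ref + delta * xi (element-wise), delta = noise_level * y_ref, xi ~ N(0, 1)."""
    return y_ref + noise_level * y_ref * rng.standard_normal(y_ref.shape)

def relative_error(m, m_ref):
    """err = ||m - m_ref||_2 / ||m_ref||_2, accuracy of the inverted field."""
    return np.linalg.norm(m - m_ref) / np.linalg.norm(m_ref)

def relative_model_error(G_theta, G, m):
    """Err = ||G_theta(m) - G(m)||_2 / ||G(m)||_2, local accuracy of the surrogate."""
    return np.linalg.norm(G_theta(m) - G(m)) / np.linalg.norm(G(m))

# example usage with a 1% noise level:
# rng = np.random.default_rng(0)
# y_obs = make_noisy_observation(G(m_ref), 0.01, rng)
```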
The aim is to determine thepermeability m(x) from noisy measurements of the u-field at a finite set of locations.To ensure the existence of the posterior distribution, we typically selected the prior distribution ν_0 as a Gaussian measure 𝒩(r_pr, 𝒞_pr).In particular, we focus on the covariance operator with the following form:𝒞_pr = σ^2(-Δ + τ^2)^-d, where Δ denotes the Laplacian operator in Ω subject to homogeneous Neumann boundary conditions, τ denotes the inverse length scale of the random field and d > 0 determines its regularity. For the numerical experiments presented in this section, we takethe same values for these parameters as in<cit.>: τ = 3, d = 2, σ = 1. To sample from the prior distribution, we can use the Karhunen-Loeve (KL) expansion , which has the formm(x) = ∑_k∈ℤ^+θ_k√(λ_k)ψ_k(x),where λ_k and ψ_k are the eigenvalues and eigenfunctions,and θ_k ∼𝒩(0, 1) are independent random variables. In practice, we truncate the sum (<ref>) to n_d terms, based on the largest n_d eigenvalues, and hence θ∈ℝ^n_d. The forward problem is solved by FEM method on a 70× 70 grid.We will create the observation data for the inverse problem using the in-distribution data (IDD) and out-of-distribution data (OOD), respectively, as shown in Fig.<ref>. The IDD field m_ref(x) is calculated using (<ref>) with n_d=256 and θ_k∼𝒩(0,1). The OOD field m_ref(x) is generated for convenienceby sampling θ_k∼𝒰[-20, 20], k=1,…, 256.To avoid the inverse crime, we will try to inverse the first N_m = 128 KL modes using these observation data. We plot the retraining loss, model error, and relative error in Fig.<ref> to illustrate the effectiveness of our framework. When we apply the initial pre-trained model directly to run UKI, we can clearly see in the middle display of Fig.<ref> that the model error gradually decreases during the iteration for IDD data. Consequently, the relative error for IDD data will be nearly equal to the FEM-UKI value, as shown in the right display of Fig.<ref>.However, for OOD data, the model error will rise sharply, suggesting that the previously trained model will not work as planned. A poorer estimate will result from the simultaneous immediate increase in the relative error for OOD data.Nevertheless, by using an adaptive dataset that arrives at the initial model's rough estimate, we can enhance the pre-trained model.The left display of Fig.<ref>shows that the training loss increases initially and then decreases as we refine. As expected, even for OOD data, the model error in Fig.<ref> continuously decreases with refinement. When combined with our stop criteria, the OOD inversion requires six refinements, compared to the IDD inversion's four, suggesting a slower rate of convergence. As shown in the right display of Fig.<ref>, for both types of data, the accuracy of the DeepOnet-UKI-Adaptive estimate progressively increased, beginning with the approximative estimate provided by DeepOnet-UKI-Direct. We plot the final estimated permeability fields produced by three different methods in Figs. <ref> and <ref>for the detailed inversion result. The true permeability field is well approximated by the estimated permeability fields obtained by FEM-UKI and DeepOnet-UKI-Adaptive, but DeepOnet-UKI-Direct's result differs significantly, further demonstrating the efficacy of our framework.We repeat the experiment ten times for each Q∈[20,50,100,150] to test the impact of the number Q of adaptive samples used in each refinement. The error box of err_DeepOnet - err_FEM is plotted in Fig.<ref>. 
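As an implementation aside, sampling from the Gaussian random field prior through the truncated KL expansion (<ref>) can be sketched as follows. The mode ordering and normalization below assume the standard Neumann-Laplacian eigenpairs cos(kπ x)cos(lπ y) on the unit square with eigenvalues σ^2((k^2+l^2)π^2+τ^2)^-d; this is an illustrative choice rather than the exact construction used in our code.

```python
import numpy as np

def kl_sample(theta, n_grid=70, sigma=1.0, tau=3.0, d=2.0):
    """Realize m(x) = sum_k theta_k sqrt(lambda_k) psi_k(x) on an n_grid x n_grid mesh."""
    x = np.linspace(0.0, 1.0, n_grid)
    X, Y = np.meshgrid(x, x, indexing="ij")
    # order the 2D modes by increasing Laplacian eigenvalue and keep len(theta) of them
    modes = sorted(((k, l) for k in range(32) for l in range(32) if k + l > 0),
                   key=lambda kl: kl[0] ** 2 + kl[1] ** 2)[: len(theta)]
    m = np.zeros_like(X)
    for coeff, (k, l) in zip(theta, modes):
        lam = sigma ** 2 * ((k ** 2 + l ** 2) * np.pi ** 2 + tau ** 2) ** (-d)
        m += coeff * np.sqrt(lam) * np.cos(k * np.pi * X) * np.cos(l * np.pi * Y)
    return m

# IDD ground truth: theta_k ~ N(0, 1); OOD ground truth: theta_k ~ U[-20, 20]
rng = np.random.default_rng(0)
m_idd = kl_sample(rng.standard_normal(256))
m_ood = kl_sample(rng.uniform(-20, 20, size=256))
```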
It is evident from the IDD data that the relative error does not decrease significantly with increasing data. This suggests that a small set of adaptive samples– roughly 50 –can meet the requirements for accuracy and efficiency. On the other hand, the relative error for OOD data steadily drops with increasing dataset size. In order to demonstrate the computational cost and examine the effects of varying noise levels, we are going to perform the experiment with three different noise levels (0.01, 0.05, and 0.1) and then repeat it with ten different UKI initial values. The numerical results are shown inFig.<ref>. We can clearly observe that the relative error gradually drops as noise levels rise, suggesting that higher noise levels are less sensitiveto model errors. Consequently, our framework performs better in real-world applications with higher noise levels. We plot the mean total online forward evaluations in Fig.<ref> to show the computational cost. It is evident that our method has a very small cost in comparison to conventional numerical methods, even with adaptive refinement.This is due to the fact that, unlike FEM-UKI, which requires 257 forward evaluations, we only need a maximum of 50 samples to refine the model each time.The online fine-tuning cost is negligible when compared to the expensive online forward simulations, as retraining only takes a few seconds each time.§.§ Example 2: The heat source inversion problemConsider the followingheat conduction problem in Ω u_t(x) - Δ u(x)= f(x,t),in Ω×[0,1],u(·, 0)= Φ, in Ω,u|_∂Ω = 0,on ∂Ω× [0,1], wherethe initial condition is taken asΦ(x, y) = 100sin(x)sin(y).The objective is to identify the heat source f from noisy measurements. The heat source field is considered in this paper with the formula e^-tm(x). Conversely, the inverse problem involves using noisy measurements of u(x, 1) to determine the true spatial source field m(x). We assume that the Gaussian random field defined in Eq.(<ref>) is the prior of m(x). The FEM method is used to solve the forward problem on a 70×70 grid, and the resulting differential equations are integrated using the implicit-Euler scheme with a uniform time step of Δ t = 0.02.We assume that the ground truth m_ref(x) has an analytical solution in this example, i.e., m_ref(x) = sin(π x)cos(π y). Using this specific solution, we generate the observations y from the final temperature field u(x, 1) at 36 equidistant points in Ω. Fig.<ref> displays the corresponding observations and the true spatial field m.In the inverse procedure, the KL expansion (<ref>) will be employed to approximate the true source field.Specifically, to accomplish the inversion task, we will truncate the first 128 modes.To verify the effectiveness of our framework, we first conduct the experiment using the original pre-trained model directly to run UKI, i.e. DeepOnet-UKI-Direct. We plot the local relative model error and the relative error of inversion in the middle and right displays of Fig.<ref>, respectively. As expected, there will be a significant increase in the local relative model error, and finally, the pre-trained model will not be able to predict the solution at all. Because of the growing model error, the relative error exhibits similar behavior, growing significantly and producing an entirely incorrect final estimate.Nonetheless, we can carry out our adaptive refinement by creating adaptive samples around this estimate. 
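A simplified sketch of this sample-selection step is given below: candidates drawn around the current estimate are scored by their distance to the UKI mean and greedily accepted while keeping the selected set diverse. This is only an illustrative, distance-based variant; the greedy criterion used in our method additionally accounts for the span of the surrogate outputs.

```python
import numpy as np

def select_adaptive_samples(candidates, r_mean, Q):
    """Greedily pick Q adaptive samples from a candidate pool (e.g. 2000 draws from the
    current approximate posterior).  Candidates close to the current UKI mean r_mean are
    preferred, and a simple diversity check prevents the set from collapsing onto one point."""
    dist_to_mean = np.linalg.norm(candidates - r_mean, axis=1)
    order = np.argsort(dist_to_mean)            # closest to the mean first
    selected = [order[0]]
    for idx in order[1:]:
        if len(selected) == Q:
            break
        # accept the candidate only if it adds diversity to the already selected set
        d_min = np.min(np.linalg.norm(candidates[selected] - candidates[idx], axis=1))
        if d_min > 1e-8:
            selected.append(idx)
    return candidates[np.array(selected)]

# usage: new_training_inputs = select_adaptive_samples(pool, r_t, Q=50)
```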
As shown in the middle of Fig.<ref>, we canretrain the model using this adaptive dataset to reduce the local model error. The refinement is indicated by the restart point of training loss in the left display of Fig.<ref>. As a result, our method yields relative errors that gradually decrease and are comparable to those of FEM-UKI.Figs.<ref>-<ref> also show this phenomenon. It is evident to us that the final numerical results produced by DeepOnet-UKI-Adaptive and FEM-UKI both closely resemble the true one and do not significantly differ from one another.This suggests that even with such OOD data in closed form, our method can handle it. In order to provide additional evidence of the efficacy of our approach, we figure out to perform the experiment for UKI at varying noise levels. In addition, we will repeat the experiment ten times with different initial values for each noise level. Following that, we will compare the number of forward evaluations and the relative errors for every approach. We plot the difference of the relative errors err_DeepOnet - err_FEM in the left display of Fig.<ref>. It is clear that when noise levels increase, DeepOnet-UKI-Adaptive performs often better than FEM-UKI. This implies that our method can achieve higher accuracy than traditional solvers. For the reasons mentioned below, the computational cost of the new method can also be extremely small.First of all, it is much faster to fine-tune the original pre-trained surrogate model than it is to solve PDEs. Specially, we only need a maximum of 50online forward evaluations in this example to retrain the network, which drastically lowers computational costs. We are able to clearly see that DeepOnet-UKI-Adaptive has a substantially smaller total number of forward evaluations than FEM-UKI, as the middle display of Fig.<ref> illustrates. Secondly, the entire process is automatically stopped by applying the stop criterion.We can start with the initial model that has been trained offline and fine-tune it multiple times for a given inversion task. As a result, our framework can achieve an accuracy level comparable to traditional FEM solvers, but at a significant reduction in computational cost for such problems.§.§ Example 3: The reaction diffusion problem Here we consider the forward model as a parabolic PDE defined asu_t(x)-κΔ u(x)+𝐯(x) ·∇ u(x) =0in Ω×(0, 1), u(·, 0) =m in Ω,κ∇ u ·𝐧=0on ∂Ω×(0, 1), where κ = 1/30 is the diffusion coefficient, and v := (sin(π x)cos(π y), -cos(π x)sin(π y))^T is the velocity field. The forward problem is to find the concentration field u(x, t) defined by the initial field m(x). The inverse problem here is to find the true initial field m using noisy measurements of u(x, 1).The forward problem isdiscretized using FEM method on a 70× 70 grid, andthe resulting system of ordinary differential equations is integrated over time using a Crank-Nicolson scheme with uniform time step Δ t = 0.02.We only take into account the OOD data as Example 1 for the inverse problem.In other words, we will attempt to inverse the first 128 KL modes using the ground truth m_ref(x), which is defined by (<ref>) with θ_k∼𝒰[-20,20],k=1,⋯, 256.The exact solution and the correspondingsynthetic dataare displayed in Fig.<ref>. First, we plot the training loss in the left display of Fig.<ref>. It is evident that, the training loss will first decrease and then increase rapidly due to the refinement. We plot the model error in the middle display of Fig.<ref> to demonstrate the effectiveness of the adaptive refinement. 
It is obvious that without refinement, the model error will significantly increase, leading to a completely incorrect estimate. The local model error eventually drops as the initial model is improved, and after six iterations of the initial model, the retraining was terminated according tothe stop criteria.This suggests that our model can maintain the local accuracy during the inversionprocess by concentrating on the local area with a high posterior probability. Thus, as shown in the right display of Fig.<ref>, DeepOnet-UKI-Adaptive can achieve almost the same order of accuracy as FEM-UKI. Figs.<ref> and <ref>, which plot the final estimated initial fields and estimated states obtained by various methods, can be used to further verify this.We repeat the experiment with varying noise levels in order to thoroughly compare the performance of DeepOnet-UKI-Adaptive and FEM-UKI. Werepeat the experiment ten times for each noise level, varying the UKI initial values each time. The difference between the relative errors err_DeepOnet - err_FEM and the mean total number of forward evaluations is displayed in Fig <ref>. It is evident that DeepOnet-UKI-Adaptive can even achieve smaller relative errors than FEM-UKI when dealing with higher noise levels.Furthermore, our method has a very low computational cost. In comparison to DeepOnet-UKI-Adaptive, the total number of forward evaluations for FEM-UKI is at least ten times higher. That is to say, our approach can efficiently complete the inversion task with significantly lower computational cost once the initial model has been trained. This feature offers the possibility to handle real-time forecasts insome data assimilation tasks. § CONCLUSION We present an adaptive operator learning framework for iteratively reducing the model error in Bayesian inverse problems. In particular, the unscented Kalman inversion(UKI) is used to approximate the solution of inverse problems, and the DeepOnet is utilized to construct the surrogate. We suggest a greedy algorithm to choose the adaptive samples to retrain the approximate model. The performance of the proposed strategy has been illustrated by three numerical examples. Although only the UKI algorithm is considered in this paper, the framework can be conveniently extended to a much wider class of particle-based methods with simple and minor modifications.The extension of the present algorithm with other neural operators is also straightforward. § Proof of Theorem <ref>: We first consider the error estimate of the covariance matrix.Using Eqs. (<ref>) and (<ref>), we haveC_∞^-1 - C_∞^-1 = 𝒢^TΣ_η^-1𝒢 - 𝒢^TΣ_η^-1𝒢_I_1+(α^2C_∞+Σ_ω)^-1-(α^2C_∞+Σ_ω)^-1_I_2.Note that the first part is proved in Eq.(<ref>), i.e.,I_1_2≤2ϵ HΣ_η^-1_2. We consider the second part. Let us assume that ℬ represents the Banach spaces of matrices in ℝ^N_m×ℝ^N_m. The operator norm in ℝ^N_m is induced by the Euclidean norm. The Banach spaces of linear operators equipped with the operator norm are denoted by ℒ:ℬ→ℬ.If we define f(X;α):=(α^2X^-1 + Σ_ω)^-1, then I_2 = f(C_∞^-1;α) - f(C_∞^-1;α). 
Df(X;α), the derivative of f, is defined by the direction Δ X∈ℬ as Df(X;α)Δ X = α^2(α^2ℐ+XΣ_ω)^-1Δ X(α^2ℐ+XΣ_ω)^-1.According to <cit.>, f is a contraction map in ℬ, such that we have β:= sup_X∈ℬDf(X;α)_2 <1.Therefore, we can use the Mean Value Theorem in matrix functions to get that I_2_2 = f(C_∞^-1;α) - f(C_∞^-1;α)_2 ≤βC_∞^-1 - C_∞^-1_2.Combining Eqs.(<ref>) and (<ref>) yields C_∞^-1 - C_∞^-1_2≤ 2ϵ HΣ_η^-1_2 + βC_∞^-1 - C_∞^-1_2.Then we can have the error estimate of the covariance matrix C_∞^-1 - C_∞^-1_2≤2ϵ H/1-βΣ_η^-1_2.We now take into consideration the error estimate of the mean vector. Using Eqs. (<ref>) and (<ref>), we obtainC_∞^-1r_∞ - C_∞^-1r_∞ = (𝒢^T - 𝒢^T)Σ_η^-1y+(α^2C_∞+Σ_ω)^-1α r_∞ -(α^2C_∞+Σ_ω)^-1αr_∞ = (𝒢^T - 𝒢^T)Σ_η^-1y + α r_∞ f(C_∞^-1;α) - αr_∞f(C_∞^-1;α).Since C_∞^-1r_∞ - C_∞^-1r_∞ = C_∞^-1r_∞ - C_∞^-1r_∞ + C_∞^-1r_∞ - C_∞^-1r_∞ = C_∞^-1(r_∞ - r_∞) + (C_∞^-1 - C_∞^-1)r_∞,and α r_∞ f(C_∞^-1;α) - αr_∞f(C_∞^-1;α)= α (r_∞-r_∞) f(C_∞^-1;α) + αr_∞(f(C_∞^-1;α)- f(C_∞^-1;α) ).We can obtain(C_∞^-1 - α f(C_∞^-1;α))(r_∞ - r_∞)= (𝒢^T - 𝒢^T)Σ_η^-1y_I_3 - (C_∞^-1 - C_∞^-1)r_∞_I_4+ αr_∞ (f(C_∞^-1;α)- f(C_∞^-1;α) )_I_5.For the first part I_3, we have I_3_2≤𝒢^T - 𝒢^T_2Σ_η^-1y_2≤ϵΣ_η^-1y_2.And then the second part, I_4_2≤C_∞^-1 - C_∞^-1_2r_∞_2.For the last part, according to Eq.(<ref>) we have I_5_2 ≤αr_∞_2f(C_∞^-1;α)- f(C_∞^-1;α)_2≤αβr_∞_2C_∞^-1 - C_∞^-1_2.Moreover, from Eq.(<ref>) we have C_∞^-1 - α f(X_∞^-1;α) = (1-α)(α^2C_∞ + Σ_ω)^-1 + 𝒢^TΣ_η^-1𝒢≻ 0.Combining Eqs.(<ref>)-(<ref>), we have r_∞ - r_∞_2 ≤((1-α)(α^2C_∞ + Σ_ω)^-1 + 𝒢^TΣ_η^-1𝒢)^-1_2I_1 - I_2 + I_3_2≤(𝒢^TΣ_η^-1𝒢)^-1_2( ϵΣ_η^-1y_2 + (1 + αβ) r_∞_2C_∞^-1 - C_∞^-1_2)≤(𝒢^TΣ_η^-1𝒢)^-1_2( Σ_η^-1y_2 +2(1 + αβ)H/1-βr_∞_2Σ_η^-1_2) ϵ.Note that by Eq.(<ref>), we have (C_∞^-1 - α(α^2C_∞+Σ_ω)^-1)r_∞ =(𝒢^TΣ_η^-1𝒢 + (1-α)(α^2C_∞+Σ_ω)^-1)r_∞=𝒢^TΣ_η^-1y.Afterwards, we can get the bound of r_∞ asr_∞_2 ≤(𝒢^TΣ_η^-1𝒢 + (1-α)(α^2C_∞+Σ_ω)^-1)^-1_2𝒢^TΣ_η^-1y_2≤(𝒢^TΣ_η^-1𝒢)^-1_2𝒢^TΣ_η^-1y_2≤ H(𝒢^TΣ_η^-1𝒢)^-1_2Σ_η^-1y_2.Combining Assumption <ref> and Eqs.(<ref>) and (<ref>), we can get r_∞ - r_∞_2 ≤(𝒢^TΣ_η^-1𝒢)^-1_2( Σ_η^-1y_2 +2(1 + αβ)H/1-βr_∞_2Σ_η^-1_2)ϵ≤(𝒢^TΣ_η^-1𝒢)^-1_2Σ_η^-1y_2(1 + 2(1 + αβ)H^2/1-β(𝒢^TΣ_η^-1𝒢)^-1_2Σ_η^-1_2)ϵ≤K_1Σ_η^-1y_2/𝒢^TΣ_η^-1𝒢_2(1 + 2(1 + αβ) K_2 H^2/1-βΣ_η^-1_2/𝒢^TΣ_η^-1𝒢_2)ϵ≤K_1H_ηH_y/C_1(1 + 2(1 + αβ) K_2H_η H^2/(1-β)C_2)ϵ,where K_1, K_2 is the upper bound of the condition number of 𝒢^TΣ_η^-1𝒢, 𝒢^TΣ_η^-1𝒢 respectively and H_η:=Σ_η^-1_2, H_y = y_2. 10cui2016scalable Tiangang Cui, Youssef Marzouk, and Karen Willcox. Scalable posterior approximations for large-scale bayesian inverse problems via likelihood-informed parameter and state reduction. Journal of Computational Physics, 315:363–387, 2016.zhu2016bayesian Hejun Zhu, Siwei Li, Sergey Fomel, Georg Stadler, and Omar Ghattas. A bayesian approach to estimate uncertainty for full-waveform inversion using a priori information from depth migration. Geophysics, 81(5):R307–R323, 2016.alexanderian2016fast Alen Alexanderian, Noemi Petra, Georg Stadler, and Omar Ghattas. A fast and scalable method for a-optimal design of experiments for infinite-dimensional bayesian nonlinear inverse problems. SIAM Journal on Scientific Computing, 38(1):A243–A272, 2016.bui2013computational Tan Bui-Thanh, Omar Ghattas, James Martin, and Georg Stadler. A computational framework for infinite-dimensional bayesian inverse problems part i: The linearized case, with application to global seismic inversion. 
SIAM Journal on Scientific Computing, 35(6):A2494–A2523, 2013.petra2014computational Noemi Petra, James Martin, Georg Stadler, and Omar Ghattas. A computational framework for infinite-dimensional bayesian inverse problems, part ii: Stochastic newton mcmc with application to ice sheet flow inverse problems. SIAM Journal on Scientific Computing, 36(4):A1525–A1555, 2014.cotter2013mcmc Simon L Cotter, Gareth O Roberts, Andrew M Stuart, and David White. Mcmc methods for functions: modifying old algorithms to make them faster. 2013.goodman2010ensemble Jonathan Goodman and Jonathan Weare. Ensemble samplers with affine invariance. Communications in applied mathematics and computational science, 5(1):65–80, 2010.gelman1997weak Andrew Gelman, Walter R Gilks, and Gareth O Roberts. Weak convergence and optimal scaling of random walk metropolis algorithms. The annals of applied probability, 7(1):110–120, 1997.cui2015data Tiangang Cui, Youssef M Marzouk, and Karen E Willcox. Data-driven model reduction for the bayesian solution of inverse problems. International Journal for Numerical Methods in Engineering, 102(5):966–990, 2015.lieberman2010parameter Chad Lieberman, Karen Willcox, and Omar Ghattas. Parameter and state model reduction for large-scale statistical inverse problems. SIAM Journal on Scientific Computing, 32(5):2523–2542, 2010.marzouk2009dimensionality Youssef M Marzouk and Habib N Najm. Dimensionality reduction and polynomial chaos acceleration of bayesian inference in inverse problems. Journal of Computational Physics, 228(6):1862–1902, 2009.schillings2020convergence Claudia Schillings, Björn Sprungk, and Philipp Wacker. On the convergence of the laplace approximation and noise-level-robustness of laplace-based monte carlo methods for bayesian inverse problems. Numerische Mathematik, 145:915–971, 2020.li2014adaptive Jinglai Li and Youssef M Marzouk. Adaptive construction of surrogates for the bayesian solution of inverse problems. SIAM Journal on Scientific Computing, 36(3):A1163–A1186, 2014.yan2017convergence Liang Yan and Yuan-Xiang Zhang. Convergence analysis of surrogate-based methods for bayesian inverse problems. Inverse Problems, 33(12):125001, 2017.yan2020adaptive Liang Yan and Tao Zhou. An adaptive surrogate modeling based on deep neural networks for large-scale bayesian inverse problems. Communications in Computational Physics, 28(5):2180–2205, 2020.Han+Jentzen+E2018PNAS J. Han, A. Jentzen, and W. E. Solving high-dimensional partial differential equations using deep learning. Proceedings of the National Academy of Sciences, 115(34):8505–8510, 2018.raissi2019physics Maziar Raissi, Paris Perdikaris, and George E Karniadakis. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational physics, 378:686–707, 2019.Schwab+Zech2019AA C. Schwab and J. Zech. Deep learning in high dimension: Neural network expression rates for generalized polynomial chaos expansions in uq. Analysis and Applications, 17(01):19–55, 2019.Tripathy+Bilionis2018JCP R. K. Tripathy and I. Bilionis. Deep UQ: Learning deep neural network surrogate models for high dimensional uncertainty quantification. Journal of Computational Physics, 375:565–588, 2018.Zhu+Zabaras2018bayesian Y. Zhu and N. Zabaras. Bayesian deep convolutional encoder–decoder networks for surrogate modeling and uncertainty quantification. 
Journal of Computational Physics, 366:415–447, 2018.deveney2019deep Teo Deveney, Eike Mueller, and Tony Shardlow. A deep surrogate approach to efficient Bayesian inversion in PDE and integral equation models. arXiv:1910.01547, 2019.yanRTO2021 Liang Yan and Tao Zhou. An acceleration strategy for randomize-then-optimize sampling via deep neural networks. Journal of Computational Mathematics, 39(6):848–864, 2021.li2023surrogate Yongchao Li, Yanyan Wang, and Liang Yan. Surrogate modeling for bayesian inverse problems based on physics-informed neural networks. Journal of Computational Physics, 475:111841, 2023.nabian2020adaptive Mohammad Amin Nabian and Hadi Meidani. Adaptive Physics-Informed Neural Networks for Markov-Chain Monte Carlo. arXiv: 2008.01604, 2020.wang2022and Sifan Wang, Xinling Yu, and Paris Perdikaris. When and why pinns fail to train: A neural tangent kernel perspective. Journal of Computational Physics, 449:110768, 2022.krishnapriyan2021characterizing Aditi Krishnapriyan, Amir Gholami, Shandian Zhe, Robert Kirby, and Michael W Mahoney. Characterizing possible failure modes in physics-informed neural networks. Advances in Neural Information Processing Systems, 34:26548–26560, 2021.gao2023failure Zhiwei Gao, Liang Yan, and Tao Zhou. Failure-informed adaptive sampling for pinns. SIAM Journal on Scientific Computing, 45(4):A1971–A1994, 2023.gao2023rFINN Zhiwei Gao, Tao Tang, Liang Yan, and Tao Zhou. Failure-informed adaptive sampling for pinns, part ii: combining with re-sampling and subset simulation. arXiv preprint arXiv:2302.01529, 2023.mcclenny2020self Levi McClenny and Ulisses Braga-Neto. Self-adaptive physics-informed neural networks using a soft attention mechanism. arXiv preprint arXiv:2009.04544, 2020.xiang2022self Zixue Xiang, Wei Peng, Xu Liu, and Wen Yao. Self-adaptive loss balanced physics-informed neural networks. Neurocomputing, 496:11–34, 2022.li2020fourier Zongyi Li, Nikola Kovachki, Kamyar Azizzadenesheli, Burigede Liu, Kaushik Bhattacharya, Andrew Stuart, and Anima Anandkumar. Fourier neural operator for parametric partial differential equations. arXiv preprint arXiv:2010.08895, 2020.lu2021learning Lu Lu, Pengzhan Jin, Guofei Pang, Zhongqiang Zhang, and George Em Karniadakis. Learning nonlinear operators via deeponet based on the universal approximation theorem of operators. Nature machine intelligence, 3(3):218–229, 2021.cao2023residual Lianghao Cao, Thomas O'Leary-Roseberry, Prashant K Jha, J Tinsley Oden, and Omar Ghattas. Residual-based error correction for neural operator accelerated infinite-dimensional bayesian inverse problems. Journal of Computational Physics, 486:112104, 2023.genzel2022solving Martin Genzel, Jan Macdonald, and Maximilian März. Solving inverse problems with deep neural networks–robustness included? IEEE transactions on pattern analysis and machine intelligence, 45(1):1119–1134, 2022.huang2022iterated Daniel Zhengyu Huang, Tapio Schneider, and Andrew M Stuart. Iterated kalman methodology for inverse problems. Journal of Computational Physics, 463:111262, 2022.stuart2010inverse Andrew M Stuart. Inverse problems: a bayesian perspective. Acta numerica, 19:451–559, 2010.Brooks2011 S. Brooks, A. Gelman, G. L. Jones, and X. L. Meng, editors. Handbook of Markov chain Monte Carlo. Chapman & Hall/CRC Handbooks of Modern Statistical Methods. CRC Press, Boca Raton, FL, 2011.Blei2017variational D. M. Blei, A. Kucukelbir, and J. D. McAuliffe. Variational inference: A review for statisticians. 
Journal of the American statistical Association, 112(518):859–877, 2017.lanthaler2022error Samuel Lanthaler, Siddhartha Mishra, and George E Karniadakis. Error estimates for deeponets: A deep learning framework in infinite dimensions. Transactions of Mathematics and Its Applications, 6(1):1–141, 2022.yan2019adaptive1 Liang Yan and Tao Zhou. Adaptive multi-fidelity polynomial chaos approach to bayesian inference in inverse problems. Journal of Computational Physics, 381:110–128, 2019.Chen2019projected P. Chen, K. Wu, J. Chen, T. O'Leary-Roseberry, and O. Ghattas. Projected stein variational newton: A fast and scalable bayesian inference method in high dimensions. In Advances in Neural Information Processing Systems, pages 15130–15139, 2019.Detommaso2018stein G. Detommaso, T. Cui, Y. Marzouk, A. Spantini, and R. Scheichl. A stein variational newton method. In Advances in Neural Information Processing Systems, pages 9169–9179, 2018.Garbuno2020interacting A. Garbuno-Inigo, F. Hoffmann, W. Li, and A. M. Stuart. Interacting langevin diffusions: Gradient structure and ensemble kalman sampler. SIAM Journal on Applied Dynamical Systems, 19(1):412–441, 2020.Liu2016stein Q. Liu and D. Wang. Stein variational gradient descent: A general purpose bayesian inference algorithm. In Advances in neural information processing systems, pages 2378–2386, 2016.yan2021stein Liang Yan and Tao Zhou. Stein variational gradient descent with local approximations. Computer Methods in Applied Mechanics and Engineering, 386:114087, 2021.Chada2021 N.K. Chada and X. T. Tong.Convergence acceleration of ensemble Kalman inversion in nonlinear settings. Mathematics of computation, 91:1247–1280, 2021.carrillo2022consensus José A Carrillo, Franca Hoffmann, Andrew M Stuart, and Urbain Vaes. Consensus-based sampling. Studies in Applied Mathematics, 148(3):1069–1140, 2022.ernst2015analysis O.G. Ernst, B. Sprungk, and H. Starkloff. Analysis of the ensemble and polynomial chaos Kalman filters in Bayesian inverse problems. SIAM/ASA Journal on Uncertainty Quantification, 3(1):823–851, 2015.huang2022efficient Daniel Zhengyu Huang, Jiaoyang Huang, Sebastian Reich, and Andrew M Stuart. Efficient derivative-free bayesian inference for large-scale inverse problems. Inverse Problems, 38(12):125006, 2022.iglesias2013ensemble M.A. Iglesias, K.J.H. Law, and A.M. Stuart. Ensemble Kalman methods for inverse problems. Inverse Problems, 29(4):045001, 2013.wang2023adaptive Yanyan Wang, Qian Li, and Liang Yan. Adaptive ensemble kalman inversion with statistical linearization. Communications in Computational Physics, 33(5):1357–1380, 2023.Weissmann_2022 S. Weissmann, N.K. Chada, C. Schillings, and X. T. Tong. Adaptive Tikhonov strategies for stochastic ensemble Kalman inversion. Inverse Problems, 38(4):045009, 2022.yan2019adaptive Liang Yan and Tao Zhou. An adaptive multifidelity PC-based ensemble kalman inversion for inverse problems. International Journal for Uncertainty Quantification, 9(3):205–220, 2019.wan2000unscented Eric A Wan and Rudolph Van Der Merwe. The unscented kalman filter for nonlinear estimation. In Proceedings of the IEEE 2000 Adaptive Systems for Signal Processing, Communications, and Control Symposium (Cat. No. 00EX373), pages 153–158. Ieee, 2000.
http://arxiv.org/abs/2310.17844v1
{ "authors": [ "Zhiwei Gao", "Liang Yan", "Tao Zhou" ], "categories": [ "math.NA", "cs.NA", "stat.CO", "stat.ML" ], "primary_category": "math.NA", "published": "20231027015033", "title": "Adaptive operator learning for infinite-dimensional Bayesian inverse problems" }
What You See Is What You Detect: Towards better Object Densification in 3D detection Tianran Liu, Zeping Zhang, Morteza Mousa Pasandi, Robert Laganiere This paper was supported, in part, by Synopsys under a partnership program of the Natural Sciences and Engineering Research Council of Canada (NSERC) 2023-10-25 =========================================================================================================== Recent works have demonstrated the importance of object completion in 3D perception from lidar signals. Several methods have been proposed in which modules are used to densify the point clouds produced by laser scanners, leading to better recall and more accurate results. Pursuing that direction, we present in this work a counter-intuitive perspective: the widely used full-shape completion approach actually leads to a higher error upper bound, especially for faraway objects and small objects such as pedestrians. Based on this observation, we introduce a visible part completion method that requires only 11.3% of the prediction points that previous methods generate. To recover the dense representation, we propose a mesh-deformation-based method to augment the point set associated with visible foreground objects. Considering that our approach focuses only on the visible part of foreground objects to achieve accurate 3D detection, we name our method What You See Is What You Detect (WYSIWYD). Our proposed method is a detector-independent model that consists of two parts: an Intra-Frustum Segmentation Transformer (IFST) and a Mesh Depth Completion Network (MDCNet) that predicts the foreground depth from mesh deformation. In this way, our model does not require the time-consuming full-depth completion task used by most pseudo-lidar-based methods. Our experimental evaluation shows that our approach can provide up to 12.2% performance improvement over most of the public baseline models on the KITTI and nuScenes datasets, bringing the state of the art to a new level. The code will be available at <https://github.com/Orbis36/WYSIWYD>
Cross modality 3D detection, Object completion, Mesh deformation

§ INTRODUCTION

For high-performance autonomous driving perception, lidar is probably the most critical sensor. Although we have recently witnessed an increasing amount of work based on lidar-image multimodal input, the lidar feature extractor still serves as the main branch in most network designs, since it provides accurate depth information. The main limitation of lidar is the sparsity of its representation: as depth increases, the density of the geometric features obtained from the point cloud decreases rapidly. For lidar-based methods, recent works <cit.> have shown that better performance can be obtained by performing shape completion of objects within the network. However, it is important to point out that most of these works rest on the premise that the first-stage 3D Region of Interest (ROI) proposals are accurate enough that objects inside the proposed boxes can be refined and completed reliably. Considering that the Region Proposal Network (RPN) module at this stage still has to cope with the sparsity of the lidar signal, especially for distant objects, this precondition can hardly be achieved.
At the same time, even if the first stage detection fulfills the requirements, the completion task itself is inherently difficult for sparse inputs when relying only on lidar signal.In early multimodal detection networks <cit.>, RGB features are used to decorate points or voxel features. With the research progressing, lifting 2D features to the pseudo lidar virtual points in 3D space has become increasingly popular. The success of recent cross-modality completion-based methods<cit.> confirms the appropriateness of this approach. Specifically, SFDNet<cit.> has pioneered an attempt to introduce external points to complement the objects. This augmented data from depth completion, is equivalent to interpolating the lidar signals of the objects surface. By fusing this information with real lidar signals, complete objects can thus be generated from sparse point clouds. This enriched 3D point representation has led to significant improvements in performance.However, it is important to note that these pseudo-lidar representations are still plagued with inherent artifacts introduced by boundary depth dispersion problems, as shown in Fig <ref>. Considering that the ground truth used to train the depth completion network is semi-dense and the inherent smoothing properties of convolutional layers used in these models, the depth estimation of boundary pixels always tends to leak toward the background. As a matter of fact, this issue seriously affects the accuracy of subsequent detections, as it will later be demonstrated in section <ref>.Although a recent work <cit.> has made it possible to reduce the inaccuracy of the depth estimation through a learnable discard module, models based on depth completion still suffer from two other issues. First, full frame depth completion itself is time-consuming and most of the generated background depth information is useless for the later process. A typical depth completion method like PENet<cit.> needs 161ms<cit.> per frame when CUDA synchronization is used. In consequence, a complete system based on such a pre-processing network will be far from real-time detection. Second, even if the outlier depth points can be dropped accurately, this discard-based strategy will actually further dilute the already limited amount of semantic information available for small objects.In this work, we revisit the fundamental issues related to the quality of foreground depth and our objective is to make the entire detector to get rid of time-consuming full frame depth completion networks. Overall, we only complete the foreground points instead of performing global depth completion. This process can be divided into two steps: foreground points segmentation and object densification. Specifically, we demonstrate that visible part completion leads, in fact, to equivalent or even superior results than full shape completion used by most of the previous methods. This good performance also explains why methods like SFDNet<cit.> or VirConv<cit.> perform better than more traditional completion-based methods. To estimate the foreground region which needs to be densified, rather than relying on 3D RPNs that are insensitive to sparse objects, we choose to use the well-developed 2D instance segmentation networks. 
Within the 3D frustum of a specific segmented 2D object, considering the depth distribution of the point cloud, we design a lightweight Transformer, named IFST, to filter out the noise points.Next, different from the existing models that use pseudo points from depth completion, here we chose to reconstruct the object directly from the lidar signal. The points from the IFST still contain some noise, so an ideal model should be able to control the shape and be invariant to noise as well. To fulfill such requirement, we integrated a mesh deformation approach to completion-based 3D detection for the first time. Through a lidar aggregation layer, we make the mesh learn the distribution of the lidar signal in a coarse to fine manner and restore the shape of the object progressively. Notably, the consistency of the resulting mesh is guaranteed and the vertex will not leak into the background benefiting from the specific guidance of the Laplacian loss.It is also worth noting that although there are several works that claim to produce accurate point cloud completion<cit.>, they have all been tested in a noiseless or indoor environment. Few studies have demonstrated that they can be adapted to the case of sparse inputs. In summary, our contributions are as follows: * By analyzing the previous depth pseudo point-based completion models, we propose to perform visible part completion, which has a higher detection upper bound from our verification experiments. * We introduce a novel lightweight Intra-Frustum Segmentation Transformer that utilizes the 2D location prior and 3D locations to extract foreground points. As the main component, a mesh-deformation-based completion module is proposed to learn the visible shape of an object from the lidar signal. * By combining our modules with the publicly available 3D detector baseline, extensive experiments have demonstrated that it is possible to provide up to 12.2% performance improvement and obtain SOTA performance, especially in small object detection (see Figure <ref>). § RELATED WORK Image perception guided 3D detection. In comparison to 3D detection, image-based detection, and segmentation tasks have reached a high level of performance in recent years. With the help of a 2D detection module, F-PointNet<cit.> introduced intra-frustum detection to filter out the background points. F-Convnet<cit.> then further developed the idea by proposing a sliding windows approach inside the frustum. A similar strategy was also adopted to obtain better fusion results in F-fusion<cit.>. Considering that distant objects can still be well-detected in RGB images, FarFrustum<cit.> further improved detection performance at different objects scale. The recent years have observed a rapid growth of works that utilize the well-developed 2D perception to guide 3D detection. FSF<cit.> uses points in instance segmentation masks to augment the quality of lidar foreground queries before sending them to the Transformer Layer. MVF<cit.> utilizes the masks from 2D segmentation to add virtual points in 3D space. Frustumformer <cit.> proposed an instance-aware resampling method to better utilize the more informative foreground pixels in BEV representation. Fusion in homogeneous space. From the accuracy point of view, the ROI (region of interest) level fusion performed in MV3D <cit.> and AVOD<cit.>, in which features are learned in separate spaces but fused by concatenation or other learning-based method directly, is undoubtedly not optimal. 
The Deep Continuous Fusion<cit.>, pioneered the exploration of interpolating RGB features and using them as subsidiary information for lidar voxels or points to participate in detection. This method can also be observed in several 3D detection pipelines <cit.>. However, the mentioned cross-modality methods still adopt lidar as the mainstream detector and do not allowpoint clouds to be complemented by the RGB features. The image information in the sparse part of the lidar remains underutilized. To address the inadequate information fusion due to point cloud sparsity, several recent approaches have realized that fusion performed in homogeneous space, normally 3D space, can significantly improve performance. Specifically, following the method in FCOS3D<cit.>, the HomoFusion<cit.> projects lidar points to FOV and constructs depth confidence intervals for the RGB features, These points are sent to 3D space in order to perform homogeneous fusion. Furthermore, VPFNet<cit.> constructs virtual points from the foreground pixels to improve the utilization of local point clouds and RGB information. Through their virtual points multi-depth unprojection, MSMDFusion<cit.> also reaches SOTA performance on nuScenes.Object Completion in 3D detection. Intuitively, a more complete shape can clearly improve detection accuracy. Considering that over half of hard samples in KITTI contain no more than 30 points, object completion in lidar representation has therefore a high potential to significantly improve the performance of a 3D detector. However, the estimation of shape and position requires strong prior knowledge, which is difficult to learn through detection networks. Current works in object completion-assisted detection can be broadly divided into two categories: using pseudo points obtained by pre-computed deep completion networks or through a sub-network densifying foreground points or voxels.For the former, SFDNet<cit.> introduced the first detection architecture that includes a depth completion module. VirConv<cit.> further proposed Noise-Resistant Submanifold Convolution to identify and exclude the points for which the depth is incorrectly estimated. PseudoLidar++<cit.> proposed a KNN-based traditional optimization algorithm, which corrects the position of pseudo points obtained by monocular 3D detection while maintaining the real-time nature of the network. For the latter, BtcDet <cit.> designed an occlusion prediction sub-network to recover the missing points by self or external occlusion in a cylindrical coordinate system. SPG<cit.> chose to densify the foreground points by an unsupervised expansion process.Sparse2Dense<cit.> expressed this process more implicitly. In this work, densification is considered as a distance optimization problem under the hidden space.PC-RGNN<cit.> designed a GNN-based adversarial network and SieNet<cit.> proposed a PointNet-based interpolation method to generate the possible foreground points. GDCompletion<cit.> and PCN<cit.> proposed graph-based and point net-based methods to densify foreground points for improved downstream tasks. <cit.> introduced a matching mechanism by calculating the voxel gradient to obtain a better location of missing points in foreground regions.§ PROBLEM FOMULATION As discussed, 3D detection approaches based on point completion have had great success. In this section, we will outline some of the core reasons leading to this success. 
Based on this, a better completion method and the steps to generate the ground truth for training will be introduced.§.§ Hypothesis A typical pseudo-points-based detector method needs an upstream depth completion module, in which the depth of every pixel is estimated. Here we use SFDNet<cit.> as the testbed to explore the relation between the accuracy of foreground points and 3D detection. The authors of SFDNet mentioned PENet<cit.> and TWISE<cit.> in their paper and implementation but only adopted the latter in the published code. Consequently, we first simply replace this completion module with different depth completion methods to verify their effectiveness inside the downstream 3D detection. As shown in Fig. <ref>, the highest 3D detection performance occurs when TWISE is used. But when applying RMSE, the criterion for evaluating the performance of depth completion, TWISE obtains the worst results (as shown by the pink line). Considering that the RMSE evaluates both foreground and background depth, while the foreground is more critical to 3D detection, we speculate that TWISE<cit.> obtain a better performance in foreground depth prediction. To verify this hypothesis, we need to generate the ground truth of foreground depth, since the lidar points cannot guaranteed that all pixels in the 2D foreground region can get a depth estimate. This generation process is introduced in section <ref> and will later be used to estimate the RMSE of the foreground objects in section <ref>. §.§ Dense depth generation of visible partWith the annotated 3D boundary boxes and 2D masks<cit.>, the points that can be projected to the specific mask of objects in the KITTI dataset can be extracted from the lidar file, denoted by P={ p_1, p_2,⋯, p_Q } p_i ∈ℝ^(n_i × 3), Q objects in total, n_i points for the object i. The masks of all objects are denoted as 𝒪={Ω_1, Ω_2,⋯, Ω_Q } Ω_i ∈ℕ^(m_i × 2), m_i pixels for the mask i. Considering the correspondence relationship, set P and 𝒪 are equinumerous. We now want to get a dense depth that can make ∀ i, n_i = m_i. To generate a groundtruth for the dense visible part depth, we first get the full shape of the object and then obtain the depth of its visible part. For the KITTI dataset, we first maintain a pool of objects that contain every object p_i extracted by 3D boxes, if n_i > 20 for cars and 10 for pedestrians. Secondly, for every object p_i in the pool, we mirror it to recover the backside of the object, and the output obtained is denoted by A. Then we basically follow the heuristic ℋ(A, B) mentioned in BtcDet<cit.> to find the best match B in the pool for the sample A. However, we noticed that this method can hardly reflect the shape of a real 2D mask when we project them to the image, especially when the object is very sparse. So here, the heuristics are modified as the follows: for every sample A, a best matching B will minimize the 𝒢(A, B) as shown in Equation <ref>. Here, we add a term to calculate the pixel-wise IOU between ground-truth and 2D mask obtained from the projection of completed objects. ϵ is an indicator function, when n_i > 10 , ϵ equal to zero otherwise 1. The Γ is a surjection between space location and 2D location: Γ : ℝ^(n_i × 3)→ℕ^(n_i × 2), determined by the intrinsic matrix. 𝒢(A, B) = ℋ(A, B) + ϵ IOU(c_A, Γ(p_B)) p^'_A=p_A + p_B∈ℝ^(n_A + n_B) × 3 After the completed p^'_A is obtained, a triangle mesh surface reconstruction denoted as ℱ, is adopted to get the hull of objects, denoted by ℳ_A. 
ℳ_A=ℱ(p^'_A)Placing the obtained hull in 3D space, with the known camera intrinsic matrix, extrinsic matrix, and the pixel coordinates of mask Ω_i, a ray casting model can be established. Note that since the rays from the pixel on the boundary sometimes misses the object, a volume expansion coefficient α is used here to make sure there is always a bijection from pixel set to depth set. For each pixel in the mask, when we project it to 3D space, there is a well-determined point c_o: (0, y_c, z_c) and its direction vector (x⃗_⃗c⃗, y⃗_⃗c⃗, z⃗_⃗c⃗). Then the depth of a specific pixel, denoted by d_c, can be obtained by:L_c : ([ x; y; z ])=([ 0; y_c; z_c ])+λ([ x⃗_⃗c⃗; y⃗_⃗c⃗; z⃗_⃗c⃗ ]) d_c = |L_c ∩αℳ_A -c_o | ·|x⃗_⃗c⃗| We can get the dense depth for the visible part after this process, as shown in Figure <ref>. With the ground truth obtained, the foreground RMSE for 3 different models have been calculated as shown in Figure <ref>. The plain green line here demonstrates that among all candidate models, TWISE can achieve the best performance in foreground depth prediction. We thus showed that the key reason TWISE benefit downstream 3D detection is its high quality foreground depth prediction. In other words, the higher the foreground pseudo points quality, the better our 3D model performance. Pursuing with this idea, next we will present the result of using the generated ground truth to train a model directly. We will explore the upper boundary performance of this completion method and compare it with that obtained from the full completion. §.§ Completion Method ComparasionIntuitively, a more complete object should lead to better results. However, from our experiments, the visible part of ground truth provides a counter-intuitive answer. Here we select 4 recent lidar detection models, which take the original lidar points and augmented points from different methods as input. As shown in Table 1, a wide performance increment can be observed especially in pedestrian detection. With only 11.3% (on average) of the points contained in full shape completion, in most cases, the improvement of our proposed completion approach can be up to 8.5%. We also noticed that worse results happen in the hard category, which can be explained by the small number of pixels available. This experiments proves that our proposed visual partial completion is in fact more suitable for 3D detection tasks. We believe this result can also be used to explain why the recent proposed depth completion model can outstand the vanilla shape completion-based methods: the former actually provides an overall higher upper boundary and needs less points to be predicted.After having demonstrated the higher potential of the proposed visable part completion approach, our next question is if visable part foreground points matter, then how can we learn from these densified points and complete the objects? In the following section, we will introduce the proposed network which uses the points in frustum as input and generate pixel-wise visible part depth in a mesh-deformation manner.§ PROPOSED METHOD§.§ OverviewIn order to identify the foreground points that need to be densified, we first project all lidar points onto a 2D mask which is obtained from an image instance segmentor. Although this approach successfully removes most irrelevant background points, it is worth mentioning that there may still be some noisy lidar points within the frustum due to inaccuracies in 2D segmentation and potential occlusions in 3D space. 
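For clarity, the first step of this pipeline, collecting the candidate foreground points of one instance by projecting the lidar sweep onto its 2D mask, can be sketched as follows. The calibration matrices are assumed to follow the usual KITTI convention (P2 of size 3x4, R0_rect of size 3x3, Tr_velo_to_cam of size 3x4), and all names are illustrative rather than taken from our implementation.

```python
import numpy as np

def collect_frustum_points(points_lidar, instance_mask, P2, Tr_velo_to_cam, R0_rect):
    """Keep the lidar points whose image projection falls inside one 2D instance mask.
    points_lidar: (N, 3) xyz in the lidar frame; instance_mask: (H, W) boolean array."""
    N = points_lidar.shape[0]
    pts_h = np.hstack([points_lidar, np.ones((N, 1))])        # homogeneous lidar coordinates
    pts_cam = R0_rect @ (Tr_velo_to_cam @ pts_h.T)            # (3, N) rectified camera coords
    in_front = pts_cam[2] > 0.1                               # drop points behind the camera
    proj = P2 @ np.vstack([pts_cam, np.ones((1, N))])         # (3, N) image-plane projection
    u = (proj[0] / proj[2]).astype(int)
    v = (proj[1] / proj[2]).astype(int)
    H, W = instance_mask.shape
    in_img = (u >= 0) & (u < W) & (v >= 0) & (v < H) & in_front
    keep = np.zeros(N, dtype=bool)
    keep[in_img] = instance_mask[v[in_img], u[in_img]]
    # returns the candidate foreground points and their pixel coordinates (u, v)
    return points_lidar[keep], np.stack([u[keep], v[keep]], axis=1)
```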
So we propose a lightweight transformer network to identify the foreground lidar points. In the subsequent step, by considering the depth of each pixel as a vertex, a mesh deformation-based network will densify the sparse lidar signal, allowing downstream detectors to benefit from this augmented pseudo-point representation. The overall structure of our model is illustrated in Figure <ref> and Figure <ref>. §.§ Intra-Frustum points segmentationLarge-scale point cloud segmentation tasks have been well developed in recent years<cit.>, however, few works focus on Intra-Frustum segmentation. From our experiments, we identify three essential characteristics that the performance of models can benefit from when conducting Intra-Frustum points segmentation. * Eliminate downsampling operation: Different from the traditional full scene point segmentation scenario, in a typical frustum produced by an image mask, each point has the potential to bring in useful semantic information. Considering that the overall number of points in frustums is only of the order of 100-1000, there is no need to adopt any downsampling strategy in the design of this network. * Guidance from 2D location: During the projection from 3D to 2D, the background points naturally have a higher probability of being located on pixels at the boundary of the mask. This is a priori assumption has to be considered in order to obtain a more accurate point segmentation. * Guidance from Perspective relationship and Points Density: Another geometric property that is often overlooked is that the objects in the image are naturally larger when close to the camera. This perspective phenomenon allows us to easily filter out some of the noise points. To optimize the utilization of the mentioned characteristics without compromising inference speed, we propose the Intra-Frustum Segmentation Transformer (IFST), as depicted in Figure <ref>. To use point density to guide the intra-frustum segmentation, we first divide the whole frustum by the density-adaptive splitting scheme shown in Algorithm 1. By combining Gaussian kernel density estimation and softmax function, we assign to each point a score proportional to the density of points around it. By accumulating this value in the order of depth, we can give bins a finer granularity when the points are dense and vice versa. After partitioning the frustum space, a pointnet-like structure will process points in different sub-frustum and concatenate the feature of the mask size later. These sub-frustum wise feature will predicts the 𝒦(x) which shows the probability of each sub-frustum being foreground. This probability distribution will later be concatenated with 3D location embedding, and then sent to the subsequent network.To get a better representation, we adopted two tricks from <cit.>, in the design of IFST. First, we transform the 2D/3D points into the frequency domain using sinusoidal functions as shown in equation 6. This projection allows similar inputs under Euclidean space to be clearly recognized by the network. σ_i here represent the feature in the i-th dims and ∑ represent feature stack operation. Specifically, both p_i and its projection (u_i, v_i) are processed by this function separately, and the computed 3D embedding γ(σ) will be concatenated with 𝒦(x) as shown in Fig 5. Secondly, to better describe the local features of the point cloud, an SA-Layer<cit.> is used to aggregate the features of local neighbors before the 3D features are processed by a transformer layer. 
γ(σ) = ∑_i=0^5 [sin(2^0 πσ_i), cos(2^0 πσ_i), ⋯, sin(2^L-1πσ_i), cos(2^L-1πσ_i)]

The 2D locations are processed by several stacks of Linear-LayerNorm-ReLU layers, denoted by LLR in Figure 5, to get F_2d. F_3d, the feature from the 3D stream, is then guided by F_2d in a cross-attention manner. More details on this attention pipeline are introduced in Section <ref>. The final output is the probability of each point being part of the foreground. The role of the IFST is therefore to filter out the noise/background points on the objects identified by the instance segmentation module.

§.§ Mesh deformation based foreground depth prediction

Existing depth completion models suffer from the boundary depth dispersion problem, which is actually due to the tendency of convolutional networks to smooth the signal. Here we regard the pixels whose depth needs to be estimated as the vertices of a deformable mesh in 3D space. The specific network structure of this module is described in Figure 4. Inspired by recently proposed depth completion networks<cit.>, the core of our Mesh Depth Completion Network (MDCNet) design is a geometric-position-based hierarchical Transformer that allows the model to learn from points at different locations while still maintaining a strong prior: to estimate the depth of a specific pixel, neighboring lidar points in 2D space are more informative. It is the role of the aggregation layer to make the network learn from the real lidar point distribution and densify the mesh in an iterative manner.

Following previous work<cit.>, we also adopt a coarse-to-fine strategy to obtain the final shape. Given the 2D locations of the mask region Ω∈ℕ^m × 2, we first downsample the dense pixels to 1/2 and then 1/5 of the original, using the reverse process of the graph up-sampling layer presented in Figure <ref>, to get Ω^' and Ω^'', with |Ω^''| = 0.2m. With the points p^'∈ℕ^t × 3 filtered by IFST, their pixel locations Ω_L ∈ℕ^t × 2 can be calculated from the camera intrinsic and extrinsic parameters. Ω_L should be a subset of Ω, with |Ω_L ∩ Ω^''| ≥ 0. Let the depths of these points be denoted by p^'_d ∈ (ℝ^+)^t × 1; in this (first) stage, the depths of the pixels Ω^'' - Ω_L need to be estimated from the obtained p^'_d and Ω_L.

Here, we propose an explicit local-to-global feature aggregation strategy to estimate the depth of the pixels on a mask. In general, given a specific pixel with Ω^''_i representing its 2D location, we first sort the t pixel locations Ω_L by ‖Ω^''_i - Ω_L‖_2, i.e., the Euclidean distance. We then estimate the KDE embedding μ for p^', which will later be used to guide the aggregation. Next, we calculate the distance matrix between pixels with and without lidar depth and process it with an MLP. This embedded 2D distance is multiplied by the lidar features to get the relative-location-weighted 3D features ℱ_b. This process can be described by Equation 7, where ℱ_b∈ℝ^m × t × c and c is the feature dimension. Further, using the distance matrix, we divide the features into η chunks, denoted by ℱ_bi∈ℝ^m × (t/η) × c, to explore the intra-cluster relationship.

ℱ_b = MLPs(Embed(p^'_d)) * MLPs(‖Ω^'' - Ω_L‖_2)

For the features in different chunks, we apply 2 layers of transformer encoder along the first dimension of ℱ_b, as shown in Equation 8, where the stacking operation is denoted by ∑. This process allows the network to find a better representation for lidar points in different distance bins.
ℱ_bi will then be used as a query for the attention matrix calculated from the KDE feature of the corresponding lidar points. ⊕ in Equation 8 stands for matrix multiplication.

ℱ_bi = ∑_j=0^s SelfAtt(ℱ_bij), ℱ_bij∈ℝ^1 × (t/η) × c

ℱ_bi = sigmoid(MLPs(μ_i) ⊕ MLPs(μ_i)) ⊕ ℱ_bi

At the end of the cluster-wise aggregation, we reduce the intra-cluster feature for each vertex by max pooling, ℱ_bi∈ℝ^m × (t/η) × c → ℱ^'_bi∈ℝ^m × 1 × c, so that for all η clusters the feature before global aggregation is ℱ^'_b∈ℝ^m × η × c.

For global aggregation, after flattening the last 2 dimensions of ℱ^'_b, the dimension of the representation for each vertex is η·c, which corresponds to the features learnt from all clusters. A final transformer block is added to allow the network to learn features from other vertices directly, instead of through multiple rounds of neighborhood propagation in the later GNNs. Note that the hop count in our graph is much larger than in classic GNN scenarios, e.g., social networks or recommender systems, where it is around 6-10<cit.>. In our scenario, the distance from vertex to vertex may require over 200 hops (proportional to the number of pixels in the mask), so this design provides a shortcut for vertices to exchange features.

The following GNNs aggregate and pass the features to the neighbors of every vertex. In our design, to maintain the stability of the gradient flow, we add a residual connection for every 2 GNN layers. We leverage spectrum-free graph convolutions following <cit.>. Given the feature 𝐟_i on vertex i and its neighborhood 𝒩(i), the specific design is shown in Equation 9.

𝐟^'_i = 1/1+|𝒩(i)|[𝐖_0 𝐟_i+𝐛_0+∑_j ∈𝒩(i)(𝐖_1 𝐟_j+𝐛_1)]

where 𝐖_0 and 𝐖_1 are learnable parameters for the vertex itself and its neighbours. After 6 GNN layers, a regression head is used to predict the depth of every vertex on the mesh, i.e., Ω^''. This process is repeated iteratively 3 times, with Ω^'', then Ω^', and finally Ω, to obtain a dense depth for the object.

§.§ Training Losses

The losses of the proposed modules can be divided into 2 parts: the segmentation loss and the mesh regression loss. Specifically, lidar segmentation is here a binary classification task, and a simple BCE loss is adopted to provide guidance, as shown in Equation 10, with y_i denoting the label of p_i.

L_seg= -1/n∑_i=0^n [y_i ·logσ(x_i)+(1-y_i) ·log(1-σ(x_i))]

The mesh regression loss is composed of the location loss and the mesh shape loss. We combine the MSE losses of all the different stages as the location loss, shown in the first term of L_mesh in Equation 11; the λ_i for the early stages are set higher. For the shape loss, N represents the number of vertices in the mesh. The second and third terms in L_mesh aim to control the length of the edges in the predicted mesh and to enforce consistency among the normals of adjacent faces. This approach effectively prevents the occurrence of a long-tail problem in the estimated points. Here loc denotes the predicted 3D depth of a specific vertex, and n_i represents the normal vector of a triangular face on the mesh. To balance the different loss terms, we introduce ω_1, ω_2, and λ_m. A minimal sketch of these loss terms is given after the equations.

L_mesh = ∑_i=1^3 λ_i MSE(loc, l̂ôĉ) + ω_1 L_edge + ω_2 L_con

L_con = 1/N∑_i=0^N [1 - cos(n_i, n_j)], j=Neighbour(i)

L_edge = 1/N∑_i=0^N ‖loc_i - loc_j‖_2, j=Neighbour(i)

L = L_seg + λ_m L_mesh
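The sketch below is our own illustration of the segmentation and mesh losses, not the released implementation; the array layouts, helper names, and the single-stage simplification (the full objective sums three refinement stages with weights λ_i) are assumptions.

```python
import numpy as np

def seg_loss(probs, labels):
    """Binary cross-entropy over per-point foreground probabilities (Eq. 10)."""
    eps = 1e-7
    probs = np.clip(probs, eps, 1.0 - eps)
    return -np.mean(labels * np.log(probs) + (1.0 - labels) * np.log(1.0 - probs))

def mesh_loss(pred_locs, gt_locs, edges, face_normals, adj_faces, w1=2.0, w2=2.0):
    """Location loss plus the edge-length and normal-consistency regularisers of Eq. 11.

    pred_locs, gt_locs : (N, 3) predicted / ground-truth vertex positions
    edges              : (E, 2) vertex-index pairs defining mesh edges
    face_normals       : (F, 3) unit normals of the mesh faces
    adj_faces          : (P, 2) index pairs of adjacent faces
    """
    loc_term = np.mean((pred_locs - gt_locs) ** 2)                        # MSE location term
    edge_term = np.mean(np.linalg.norm(pred_locs[edges[:, 0]] -
                                       pred_locs[edges[:, 1]], axis=1))   # L_edge
    cos = np.sum(face_normals[adj_faces[:, 0]] *
                 face_normals[adj_faces[:, 1]], axis=1)                    # cos(n_i, n_j)
    con_term = np.mean(1.0 - cos)                                          # L_con
    return loc_term + w1 * edge_term + w2 * con_term

# total objective for one stage, assuming lambda_m = 1.0:
#   L = seg_loss(seg_probs, seg_labels) + 1.0 * mesh_loss(...)
```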
§ EXPERIMENTS

In this section, the experimental setup and related details are first introduced. Then we give a comparison between the baseline models combined with WYSIWYD and previous SOTA solutions on both KITTI and nuScenes. Our code has been developed using the OpenPCDet toolbox<cit.>.

§.§ Experimental setup

Dataset and ground truth generation The KITTI<cit.> 3D object benchmark is one of the most famous datasets in autonomous driving perception. We follow the setting in previous works and split the training part into 3712 and 3768 samples as training and validation sets. In the following, most experiments are reported on the KITTI validation set. Compared to the former, nuScenes<cit.> is a benchmark dataset of larger scale, which provides ten times more training data than KITTI in the form of continuous frame labeling. The performance of 3D detectors augmented with WYSIWYD-generated pseudo points is tested on these 2 datasets. For the generation of the visible-part ground truth, note that nuScenes only provides 2D instance masks for nuImage, so we pretrained the segmentation model on nuImage and performed inference on nuScenes. Since these masks are not accurate enough, our MDCNet has only been trained on the KITTI visible-part ground truth. We then performed zero-shot inference to complete the objects in nuScenes. In addition to the above-mentioned scheme of using 2D mask labels to generate the visible-part ground truth, we also use masks from E2EC<cit.> predictions. The input of IFST is the points filtered by 2D masks; however, if we only use label masks in the training process, the noisy points will be very sparse. Consequently, when we run inference with the masks provided by E2EC in real cases, the noise points can hardly be identified. Note that in our model training, we partly use masks from E2EC as long as the IoU between E2EC's predicted mask and the true mask is greater than 0.7.

Evaluation metrics For the KITTI part we report results using average precision under 40 recall thresholds and 0.7 and 0.5 IoU thresholds for cars and pedestrians, respectively. The accuracy of lidar segmentation is measured by mIoU, which is calculated from TP (true positives), FP (false positives) and FN (false negatives) as shown in Equation <ref>, where C is the number of categories. Here C=2, since we only split foreground from background points.

meanIOU = 1/C∑_c=1^C TP_c/TP_c+FP_c+FN_c

For nuScenes, we follow the official evaluation protocol: the nuScenes detection score (NDS), which consists of the average translation error (ATE), average scale error (ASE), average orientation error (AOE), average velocity error (AVE), and average attribute error (AAE).

Implementation Detail In this paper, thanks to the lightweight design of the modules, all training is done on a single RTX 3090. For the training of the baseline models, we used a batch size of 8 for 80 epochs; other settings remain as in the available implementations. For the training of MDCNet, we used a batch size of 4 and an Adam optimizer with a learning rate of 3e-5 for the first 15 epochs and 1e-5 for the remaining 25 epochs. The α in Equation 3 is set to 1.2 if the number of pixels is less than 2000, and 1.05 otherwise. For the IFST, we trained it for 30 epochs before combining it with MDCNet, to obtain faster convergence in the early period of training and to prevent gradient explosions. For the 2D instance segmentation part, we used E2EC<cit.> for KITTI and HTC<cit.> for nuScenes to get the best balance between efficiency and accuracy.
Considering that the number of other objects is quite limited and some categories of objects in nuScenes are not available in nuImage, we only report the results of pedestrian and car detection.

When designing the network, we considered different settings of the self-attention layer. We found that in our network a sigmoid activation function outperforms the softmax of the original design<cit.>, and that using Conv-LayerNorm instead of a linear projection in the calculation of K, Q, V accelerates network convergence. So in IFST and MDCNet, the mentioned design is used to replace all self-attention operations. However, for the cross-attention, the softmax function is kept unchanged. Specific comparisons are shown in the ablation study section.

§.§ Main results

In Tables <ref> and <ref>, we combine the proposed method with most of the baseline models with available code and compare them with the SOTA solutions. Here we did not use any GT sampling when training the baseline models with WYSIWYD; however, for a more convincing comparison, we retain this step when retraining the baseline models themselves. On the KITTI side, compared with the baseline detectors, the proposed MDCNet and IFST provide improvements from 1.29% to 10.4% in 3D detection. For Voxel-RCNN, a 12.2% improvement in BEV detection is observed over the original performance. Furthermore, when combined with WYSIWYD, Voxel-RCNN becomes the SOTA pedestrian 3D detection model and surpasses all previous best models by 1.48%, 2.45%, and 2.97%. For all other models, our method also brings significant improvements in pedestrian detection. Under the 0.7 IoU threshold, we also observe SOTA performance in car BEV detection when testing Part-A^2. On the nuScenes side, a broad performance improvement is also observed. Specifically, when combining VoxelNext with the WYSIWYD-augmented points, we obtain 3.2% and 2% improvements in mAP and NDS, which makes this baseline model exceed the latest BEV perception methods.

We also compare the proposed solution with previous SOTA detector-independent lidar completion methods in Table <ref> to demonstrate further salient properties of our method. Specifically, we directly refer to the data in SPG<cit.> and UYI<cit.>, while for BTC<cit.> we used the completed point cloud output from the completion network as the input to the different baseline detector models. BtcDet only released the configuration for car detection training; therefore, the pedestrian entries are not reported in the table. Our proposed method also provides the highest performance improvement in all 3 pedestrian detection categories and most car detection categories when compared to the evaluated methods.

§ ABLATION STUDY

In this section, we first present an overall analysis of the proposed model and then verify the effectiveness of the different modules. Finally, a qualitative analysis is performed, including visualization of the predicted meshes in 3D space and comparisons of detection results.

§.§ Overall Analysis

Vehicle 3D Detection Analysis In Table <ref>, we note that WYSIWYD brings less significant gains in car detection than in pedestrian detection, and there is even a decrease in some cases. We attribute this to the detector independence of the proposed method. The added complementary point cloud is not perfect, as shown in Fig. <ref>, and from time to time the point cloud boundary exceeds the 3D GT box due to inaccuracies of the 2D detection.
Considering there is no specific design in the downstream detector to filter this noise, this seriously affects the performance of 3D detection under the 0.7/0.7/0.7 thresholds. However, if the thresholds are relaxed to 0.7/0.5/0.5, as shown in Table <ref>, the combination of Voxel-RCNN+WYSIWYD remains optimal in terms of performance. In this way, the proposed model can substantially mitigate the missed-detection and low-IoU detection problems.

As mentioned in the previous section, owing to the cross-modality constraint we disable GT sampling in the training process. However, this strategy in fact plays an important role in preventing overfitting. Here we compare the original baseline with the one trained without the GT sampling augmentation, to further illustrate the improvement brought by WYSIWYD. In Table <ref>, more obvious improvements can be observed in both pedestrian and car detection.

Inference Speed Comparison Another feature that deserves to be pointed out is the real-time nature of our algorithm. In Table <ref> we compare its inference time with the previous best-performing models on KITTI. In <cit.>, the reported time is not CUDA synchronized, which means the next frame can actually be processed when the GPU is available, as mentioned in <cit.>. When this is taken into account, the mentioned method needs more than 200 ms for single-frame inference. In our model, however, because MDCNet is designed to complete only foreground points, the proposed method brings a 34.4% efficiency improvement compared to VirConv. Specifically, the 145 ms reported in the table is composed of the forward propagation time of Voxel-RCNN (44 ms), the forward propagation time of E2EC (43 ms), and WYSIWYD (58 ms).

Conditional Analysis In addition, in order to explore in which scenarios our proposed method brings greater improvements, we analyzed the performance gain on Voxel-RCNN and PV-RCNN using distance and occlusion degree as indicators. As shown in Table <ref>, our approach yields a substantial improvement for the detection of distant objects: we obtain up to a 19.42% performance improvement for objects in the range of 20-40 meters. Even for targets more than 40 meters away from the camera, we achieve at least a 6.42% performance improvement.

§.§ Component-wise Analysis

To further explore the details of the performance of our method, we split the detection results into 3 bins by distance and by the occlusion levels marked by KITTI. The results are shown in Table <ref>. In addition, we also compared the inference time of the proposed completion method with the previous pseudo-point-based solutions.

IFST design verification Here we decompose the modules in IFST and verify their effectiveness in Table <ref>. By adding the 2D mask size feature, the lidar 2D location feature, the local embedding layer, and the sinusoidal embedding, we obtain 0.48%, 2.92%, 1.10%, and 0.65% improvements, respectively. In summary, compared to the vanilla PointNet++<cit.>, which is widely used in intra-frustum segmentation, IFST offers a 10.58% performance improvement in terms of mIoU.

Ablation of MDCNet design In Table <ref> we show the direct relationship between the generated mesh quality and the detection results. When adding the local-global aggregation and the KDE guide, we obtain 3.76% and 4.79% performance increases on PV-RCNN, for car moderate detection and pedestrian moderate detection respectively. In this process, the MSE mesh loss falls from 489.3 to 283.4.
Considering that our method requires both 2D instance segmentation and 3D point segmentation, we also report the performance of using the two sets of labels directly in the last 2 lines of Table <ref>. As can be seen, our method still has great potential: a better 2D segmentation is enough to improve the performance by another 4.3% to 6.4% in terms of moderate 3D detection.

Transformer Layer design In Figure <ref>, we show the role of the normalized projection layer and the sigmoid activation function in the convergence rate of IFST and MDCNet. As shown, these designs not only accelerate the training but also improve the final accuracy by 2% to 9%.

§.§ Qualitative Analysis

In order to illustrate the completion points produced by WYSIWYD more intuitively, we visualize some examples of the obvious performance improvements it brings in Figure <ref>. For Figures <ref>(a) and <ref>(b), we observe better bounding box estimation in terms of IoU and fewer missed detections. The proposed model recovers the missing details of the visible part in a mesh-deformation manner, which is especially important when the lidar data is extremely sparse, as shown in Figure <ref>(b). Figure <ref>(c) shows the impact of the WYSIWYD-added points on pedestrian detection. From left to right, in the first 3 sets of images, we observe an improvement in the estimation of the bearing angle. Furthermore, the last 3 images show that the missed-detection issue can also be eliminated by the added points.

§ CONCLUSION

In this work, we proposed a solution that improves foreground depth for 3D detection in a mesh-deformation manner. In this process, we discard the traditional, time-consuming global completion, and our final results achieve SOTA performance, especially in pedestrian 3D detection. Extensive experiments on baseline models demonstrate the effectiveness and robustness of our proposed model.
http://arxiv.org/abs/2310.17842v2
{ "authors": [ "Tianran Liu", "Zeping Zhang", "Morteza Mousa Pasandi", "Robert Laganiere" ], "categories": [ "cs.CV", "cs.RO" ], "primary_category": "cs.CV", "published": "20231027014637", "title": "What You See Is What You Detect: Towards better Object Densification in 3D detection" }
[email protected]
University of New South Wales, Sydney, NSW, Australia
[email protected]
University of New South Wales, Sydney, NSW, Australia

Energy load forecasting plays a crucial role in optimizing resource allocation and managing energy consumption in buildings and cities. In this paper, we propose a novel approach that leverages language models for energy load forecasting. We employ prompting techniques to convert energy consumption data into descriptive sentences, enabling fine-tuning of language models. By adopting an autoregressive generating approach, our proposed method enables predictions of various horizons of future energy load consumption. Through extensive experiments on real-world datasets, we demonstrate the effectiveness and accuracy of our proposed method. Our results indicate that utilizing language models for energy load forecasting holds promise for enhancing energy efficiency and facilitating intelligent decision-making in energy systems.

CCS Concepts: Applied computing → Forecasting; Computing methodologies → Artificial intelligence.

Utilizing Language Models for Energy Load Forecasting
Hao Xue, Flora D. Salim
January 14, 2024

§ INTRODUCTION

With the increasing need for energy efficiency and sustainable resource management, the forecasting of energy load has become a critical requirement in buildings, cities, and transportation systems. Accurate load forecasting enables proactive resource allocation, optimal demand response, and efficient energy management.

Traditional approaches to energy load forecasting typically rely on statistical models and recent deep learning-based time series analysis techniques. In recent years, language models based on deep learning, particularly Transformer-based models, have shown remarkable performance in various natural language processing tasks. These models have the ability to learn rich representations of textual data and capture intricate relationships between words and concepts. Typically, the forecasting process of these deep learning models involves an encoder that takes a sequence of numbers standing for the historical energy consumption values as input and a decoder that generates another sequence of numerical values as the predicted future energy data, as illustrated in Figure <ref> (a).

Motivated by the success of language models in natural language processing, we propose to leverage their power for energy load forecasting. As demonstrated in Figure <ref> (b), the core of our approach is converting energy consumption data into natural language sentences using prompting techniques. By describing the data as sentences, we aim to unlock the potential of language models to capture nuanced patterns and dependencies within the data. This representation allows us to fine-tune pre-trained language models, enabling them to learn from the specific characteristics of energy consumption sequences. Similar numerical prompting has been used for human mobility data <cit.> recently. However, those prompts only support forecasting the next time step, which is limiting for predicting future energy load.
To this end, we further introduce an autoregressive mechanism in the prediction generation process with the fine-tuned language models. This approach allows us to generate predictions for different horizons, ranging from short-term (e.g., the next time step) to long-term (e.g., the next 24 time steps) load forecasts.

Our method presents a novel “code-less” solution for energy load forecasting, which provides a new perspective rather than focusing on designing complicated deep learning forecasting models (e.g., Transformer-based methods). This makes it a relatively accessible and user-friendly method for non-AI users, compared to existing forecasting models that require tedious parameter searches and training processes. In summary, our main contributions in this work are twofold: (1) We present a study on the utilization of language models for energy load forecasting. We design a pipeline that converts the energy consumption data into sentences for fine-tuning the language models and leverages the autoregressive mechanism for predicting different horizons with the same fine-tuned model. (2) We provide a comprehensive evaluation of the proposed solution with real-world data from 6 buildings. We also conduct different evaluation settings, including a zero-shot performance evaluation and a varied prediction horizon evaluation.

§ FORECASTING WITH LANGUAGE MODELS

§.§ Problem Formulation and Method Overview

Assume that the energy consumption records of a building i are represented by a sequence of t continuous time steps 𝒳^i= {x_1^i ,x_2^i, ⋯, x_t^i}. The value indicates that the energy consumption of building i at time t is x_t^i. The energy load forecasting problem can then be formulated as predicting the future load consumption values y^i_t_1: t_m of the next m time steps given the history observation x^i_t_1: t_n. Here, n and m are the observation length and the prediction horizon.

Overall, as illustrated in Figure <ref> (b), the proposed method comprises three key enablers: (1) Prompting: to transform the raw consumption data into sentences that can be processed by language models; (2) Fine-tuning language models: to adapt them to the specifics of the energy forecasting task; (3) Autoregressive generation: to enable forecasting of multiple future steps.

§.§ Prompting and Fine-tuning

To utilize language models for energy load forecasting, we employ a prompting technique that translates the usage data. Generally, the raw energy data is provided in a tabular format and we translate each row into a descriptive sentence. The objective is to transform the raw numerical data into a natural language text format that is suitable for language models and captures the relevant information and context necessary for predictions. By converting the energy consumption data into sentences, language models are enabled to take the transformed energy data as input and capture nuanced patterns and dependencies within the data.

In the prompting process, we utilize a predefined template (e.g., “The electric load at {Time} is {Usage}.”) that serves as a backbone for constructing the sentences. The template consists of placeholders for the actual values from the data, resulting in sentences that convey the energy consumption information in a human-readable format. The template includes variables such as date, time, energy consumption, and any other metadata provided in the raw data that may be relevant for forecasting. A minimal sketch of this conversion is given below.
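The sketch assumes the hourly readings arrive as a pandas DataFrame with hypothetical column names timestamp and consumption; the template itself is the one quoted above.

```python
import pandas as pd

# template quoted in the text; {time} and {usage} are filled from each table row
TEMPLATE = "The electric load at {time} is {usage}."

def rows_to_sentences(df: pd.DataFrame) -> list:
    """Convert a tabular load record into descriptive sentences for fine-tuning."""
    return [
        TEMPLATE.format(time=row["timestamp"], usage=row["consumption"])
        for _, row in df.iterrows()
    ]

# toy example with two hourly readings
df = pd.DataFrame({
    "timestamp": ["2019-11-01 00:00", "2019-11-01 01:00"],
    "consumption": [31.2, 28.7],
})
print(rows_to_sentences(df))
# ['The electric load at 2019-11-01 00:00 is 31.2.',
#  'The electric load at 2019-11-01 01:00 is 28.7.']
```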
By replacing the placeholders in the template with the actual values, we obtain a sentence that represents the energy consumption data for a particular time step. This process is repeated for each row in the raw tabular data, resulting in a collection of descriptive sentences that are used for fine-tuning the language models. Through prompting, we bridge the gap between numerical energy consumption data and the language model's ability to comprehend and generate textual information.

After generating the sentences from the energy consumption data, we proceed to fine-tune a pre-trained language model. Fine-tuning allows the model to adapt to the specific characteristics of load forecasting and capture the dependencies within the data. Language models are often pre-trained on a large corpus of text data to learn general language representations and common knowledge. In this study, we leverage the pre-trained models provided by HuggingFace (https://huggingface.co/models); these models are pre-trained only on general English-language corpora, without any energy-usage-related numerical datasets. Fine-tuning involves training the language model on our generated load sentences to specialize it for load forecasting. During the fine-tuning process, we feed the generated sentences as input to the language models and optimize the parameters to minimize the difference between the predicted next sentence and the ground-truth sentence. This process makes the language models suitable for our load forecasting task.

§.§ Autoregressive Generation

Taking a set of historical sentences representing past energy load consumption as input, the fine-tuned language models generate the next sentence, which corresponds to the predicted energy load for the next time step. When longer prediction horizons are required, we adopt an autoregressive method and extend the prediction horizon by appending the generated sentence to the end of the input sentence sequence. This extended sequence becomes the input for predicting the subsequent time step. By iteratively generating sentences in an autoregressive manner, we can forecast energy load consumption for multiple future time steps. The autoregressive generation approach leverages the fine-tuned language model's ability to capture dependencies between historical consumption and future load patterns. By using the generated sentences as input, the model can adjust its predictions based on the evolving context, enabling dynamic forecasts for different horizons.

§ EXPERIMENTS

§.§ Dataset and Evaluation

This study uses data from a certain block within the Melbourne CBD area in Australia. We select aggregated and anonymised smart meter data of hourly energy consumption for 6 buildings (i.e., Buildings A-F). The data is collected from January 2018 to December 2019. For each building, the data of the first 22 months is used as the training set to fine-tune the language models. The data of the last month (Dec. 2019) is used as the testing set and the remaining month (Nov. 2019) is split off as the validation set. For evaluating the performance of each method, we use the Root Mean Squared Error (RMSE) and the Mean Absolute Error (MAE) as metrics. For both measures, a lower error means better performance. These errors are calculated based on the predicted ŷ_t_1:t_m^i and the ground truth y_t_1:t_m^i to measure the closeness of the predicted values. A minimal sketch of the autoregressive forecasting loop and these error metrics is given below.
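The following is a minimal sketch of the autoregressive generation and RMSE/MAE evaluation described above; the checkpoint name (an untuned bart-base stands in for the fine-tuned model), the regex-based parsing of the generated sentence, and the sliding-window handling of the context are our own assumptions rather than the authors' released implementation.

```python
import re
import numpy as np
import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/bart-base")      # assumed checkpoint;
model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-base")  # fine-tuned weights would be loaded here

def predict_horizon(history_sentences, m=24):
    """Autoregressively generate the next m load sentences and parse out the values."""
    context = list(history_sentences)
    values = []
    for _ in range(m):
        prompt = " ".join(context)
        inputs = tokenizer(prompt, return_tensors="pt", truncation=True)
        with torch.no_grad():
            out = model.generate(**inputs, max_new_tokens=32)
        sentence = tokenizer.decode(out[0], skip_special_tokens=True)
        match = re.search(r"is\s*([-+]?\d*\.?\d+)", sentence)         # recover the numeric load
        values.append(float(match.group(1)) if match else float("nan"))
        context = context[1:] + [sentence]                            # slide the window forward
    return np.array(values)

def rmse_mae(y_true, y_pred):
    """RMSE and MAE between ground-truth and predicted load values."""
    err = y_pred - y_true
    return float(np.sqrt(np.mean(err ** 2))), float(np.mean(np.abs(err)))
```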
For this study, we fix a forecast horizon of m=24 hours, to mimic a day-ahead forecasting experiment. To make the input observation size bigger than the output prediction horizon, we set the input observation length to n=30.

§.§ Performance

§.§.§ Comparing Against Numerical Forecasting Methods

To evaluate the performance of our approach, we compare it against typical numerical forecasting methods commonly used for time series forecasting. Specifically, we select the popular Transformer <cit.> as well as the more recent Informer <cit.>, Autoformer <cit.>, and FEDformer <cit.> as numerical baselines. For language models, a recent benchmark study <cit.> has shown that three language models (Bart <cit.>, Bigbird <cit.>, and Pegasus <cit.>) have better forecasting ability. These models also have a reasonable size and number of parameters, so they can run on a single GPU (e.g., we used an Nvidia V100 in our experiments). Thus, these three language models are selected in our evaluation, and our implementations are available at: <https://github.com/xuehaouwa/LM-Load-Forecasting>. The best performance under each column is shown in bold in Table <ref>. From the table, it is evident that the language models, with our proposed pipeline, outperform traditional numerical forecasting methods in the majority of cases. Specifically, the language models have superior performance over the baselines, with significant gains on Buildings A, B, C, and F. These findings highlight the language models' ability to capture patterns within energy consumption data, ultimately leading to more accurate predictions.

§.§.§ Zero-shot Performance

In addition to the above evaluation of the language models, we also assess their performance in a zero-shot setting. In our study, the zero-shot setting aims to evaluate the ability of a language model to generate reasonable predictions even without fine-tuning on the corresponding training data of a building. Specifically, as listed in Table <ref>, we fine-tuned the language models with the training set of one building. Instead of using the testing set of the same building, we directly use the fine-tuned model to generate predictions for the testing sets of the other buildings. In the table, we highlight the results that achieve similar or even better performance than the results reported in Table <ref>. As we can see, with our proposed pipeline, the language models can still yield plausible predictions for most of the buildings under this challenging zero-shot setting. This evaluation provides insights into the language models' ability to generalize and transfer knowledge across different buildings. It also suggests that our proposed approach can achieve reasonable forecasting results even for buildings without specific fine-tuning or buildings without enough recorded data to start fine-tuning (i.e., a cold-start situation), which highlights the versatility and potential of language models in energy load forecasting.

§.§.§ Different Prediction Horizons

Furthermore, we investigate the effectiveness of our approach for different prediction lengths, assessing its adaptability over varying time scales. Specifically, we set the prediction horizon m∈{1, 4, 12, 24}, and the performance of the three language models on the 6 buildings is visualized in Figure <ref>. For each building, the language model is fine-tuned only once (to generate the prediction of the next time step).
The same fine-tuned model is used to produce forecasts for different horizons via our autoregressive generation mechanism. From the figure, although the error increases as the prediction horizon grows (which is expected), the results show that once the model is fine-tuned, it can be applied to arbitrary forecasting horizons. The adaptability of language models in our forecasting pipeline across varying prediction horizons demonstrates the versatility and suitability of our method for real-world applications where dynamic, multi-step forecasting is essential.

§ CONCLUSION

We present a novel approach for energy load forecasting that leverages existing language models. Through prompting, fine-tuning of language models, and an autoregressive prediction mechanism, our method enables accurate and dynamic predictions of energy consumption. We have demonstrated the potential and the good performance of our proposed approach through evaluation on real-world data and comparisons against traditional numerical forecasting methods. The zero-shot evaluation also reveals the ability of language models to generate reasonable predictions even without specific fine-tuning on a particular building. By harnessing the power of language models, our method provides a promising direction to unlock valuable insights for energy forecasting. Future research can focus on exploring prompt optimization to further improve the accuracy and applicability of language models in load forecasting.

We would like to acknowledge the support of Cisco's National Industry Innovation Network (NIIN) Research Chair Program. We also highly appreciate the support from C4NET (https://c4net.com.au/) and CSIRO (www.csiro.au).
http://arxiv.org/abs/2310.17788v1
{ "authors": [ "Hao Xue", "Flora D. Salim" ], "categories": [ "cs.AI", "cs.CL" ], "primary_category": "cs.AI", "published": "20231026213606", "title": "Utilizing Language Models for Energy Load Forecasting" }
[email protected]
Department of Physical Sciences, Indian Institute of Science Education and Research (IISER) Mohali, Sector 81, SAS Nagar, Manauli PO 140306, Punjab, India
[email protected]
Department of Physical Sciences, Indian Institute of Science Education and Research (IISER) Mohali, Sector 81, SAS Nagar, Manauli PO 140306, Punjab, India; Punjabi University, Patiala, 147002, Punjab, India

The emergence of the objective classical world from the quantum behavior of microscopic constituents is not fully understood. Models based on decoherence and the principle of quantum Darwinism, which attempt to provide such an explanation, require system-bath interactions in a preferred basis. Thus, the generic emergence of objectivity in the position basis, as observed in the real world, remains unexplained. In this Letter, we present a no-go theorem based on the principle of no-faster-than-light communication, showing that interactions between internal degrees of freedom unavoidably cause system wave functions to branch in the position basis. We apply this result to a spin decoherence model to demonstrate that a generic thermal spin-1/2 bath redundantly records information about the position of a spin-1/2 particle. Notably, the model does not assume any preferred spin interaction. These findings represent a compelling demonstration of the generic emergence of objectivity in the position basis.

Quantum operations restricted by no faster-than-light communication principle and generic emergence of objectivity in position basis
Arvind
January 14, 2024

*Introduction.— The principles of quantum theory imply that superposition of distinct classical possibilities is feasible at the atomic level. Further, the quantum states of composite systems in the tensor product space involve superpositions of classical possibilities existing at arbitrarily large scales. Yet, we do not directly witness large objects exhibiting quantum effects in our everyday observation of physical reality. Despite the universality of quantum theory, there appears to be a quantum-to-classical transition taking place when systems become macroscopic or when they interact with macroscopic systems, and after this transition the system behaves in a classically objective manner. Physicists have long grappled with the challenge of comprehending the quantum-to-classical transition and the emergence of classical objectivity within the framework of quantum theory. In recent decades, substantial progress has been achieved in unraveling this mystery, and the concept of quantum Darwinism has played a crucial role in this process <cit.>. Quantum Darwinism (QD) employs quantum decoherence <cit.>, which is now a well-understood phenomenon, to explain the emergence of objectivity. The central idea of QD is that the environment not only decoheres the system but also actively records and proliferates information about the system's attributes <cit.>. The emergence of objectivity is identified with the simultaneous maximization of the quantum mutual information between an observable of the system and multiple environment fragments <cit.>.
Alternatively and equivalently, a specific post-decoherence classical-quantum (CQ) state of the system and environment, known as the spectrum broadcast structure, can indicate objectivity <cit.>. While the core predictions of QD are generic and thus independent of specific interaction dynamics <cit.>, there is a specific set of commuting observables, not always easy to obtain, whose information gets objectified by its redundant recording on environment fragments <cit.>. In this context, classical objectivity has been shown to emerge in spin-spin interaction models <cit.>, illuminated dielectric spheres <cit.>, and quantum Brownian motion <cit.> with ideal environments. More practical decoherence models, which relax various ideal conditions on the initial state of the environment and the interaction Hamiltonian, have also demonstrated QD <cit.>. The predictions of QD for certain models have also been experimentally verified <cit.>. The aforementioned models have several limitations: they exhibit preferred system observables in their interaction dynamics, they do not capture the realistic universality of objectivity emerging in the position basis, and they do not work with initial states and interaction dynamics that are entirely random and generic.

This Letter presents a no-go theorem that imposes restrictions on interactions between two quantum systems. More specifically, we demonstrate that the manipulation of the internal degrees of freedom of a system inevitably disrupts its spatial wave function. These results are proven using the no-faster-than-light communication principle (NFLCP), which makes the theorem fundamental and generic. Since this no-go result imposes particular constraints on the interactions between quantum systems, we utilize it to construct a generic decoherence model. We observe that in this scenario classical objectivity emerges in the position basis irrespective of the interaction dynamics of the internal degree of freedom. Further, we make minimal assumptions about the initial state of the environment and the interaction dynamics.

*No-go theorem.— Before stating the no-go result, let us define NFLCP in an information-theoretic framework. Suppose Alice generates a random bit string A within the space-time region E_A and inputs it into a black box 𝐀. Similarly, Bob generates a bit string B as the output from a black box 𝐁 within the space-time region E_B. If E_A and E_B are separated by a space-like interval, then ℐ(A:B)=0, where ℐ(A:B) represents the mutual information between the strings A and B.

In non-relativistic quantum theory, internal degrees of freedom of a particle are regarded as independent physical systems. As a consequence, while examining the dynamics of the spin of a particle, there is no necessity to explicitly account for the spatial wavefunction. Nonetheless, as we shall see, our demonstration reveals the impossibility of consistently describing the dynamics without taking the spatial degree of freedom into consideration.

If ℋ_S and ℋ_I are the Hilbert spaces associated with the spatial and internal degrees of freedom of a quantum particle, respectively, then unitaries U=1_S⊗ U_I and measurements of the form 𝕄≡{1_S ⊗Π_I, 1_S⊗Π̃_I} on the state space ℋ_S⊗ℋ_I are not allowed by NFLCP. Here, Π_I and Π̃_I are projection operators, and their sum satisfies Π_I+Π̃_I=1_I.

We prove the theorem by contradiction. Let us consider a spin-half particle whose internal degree of freedom is a two-level quantum system.
Using a thought experiment, we demonstrate that the operations mentioned in the theorem violate the NFLCP. Let us consider that the spin-half particle is prepared in a state where the spatial wavefunction extends over a large distance and is a superposition of being located in the labs of observers Alice and Bob, positioned at x=-α and x=α, respectively (see Fig. <ref>). Additionally, let us assume that the spin is prepared in the state |0⟩. Specifically, the composite state of the particle, describing its spatial and spin degrees of freedom, is given by |Ψ⟩_SI=|ψ⟩_S⊗|0⟩_I∈ℋ_S⊗ℋ_I such that

⟨x|ψ⟩= N[exp(-(x-α)^2/4σ^2)+ exp(-(x+α)^2/4σ^2)]

where N is the normalization factor and σ≪α. In this case, we make the assumption that the distance between the two labs is large in the sense that, if T is the typical operation time for any process in the labs of Alice and Bob, then T ≪α/c (here c is the velocity of light). Given that the particle is spread across these two labs, it effectively functions as a long black box that can be accessed by observers in both labs. It is pertinent to note that quantum mechanics does not attribute a notion of `physical space' to the spin degree of freedom. Consequently, it is reasonable to assume that the particle's spin is accessible wherever its wavefunction is nonzero. We use this scenario to prove the theorem.

Consider a situation where Alice wants to send a classical bit of information a to Bob. Imagine a specific protocol where she applies an a-dependent operation U_a, a∈{0,1}, on the internal degree of freedom of the particle in the initial state |Ψ⟩_SI. Let us assume that U_0=1⊗1 and U_1=1⊗σ_x. Bob measures σ_z and records the outcome as a bit b∈{0,1}. Now assume that τ, the time difference between Alice's bit generation (call it event E_A) and Bob's act of recording his measurement outcome (event E_B), satisfies cτ≪ 2α. However, we observe that ℐ(A:B)=1. This contradicts NFLCP.

Similarly, consider another protocol for sending information about the classical bit a from Alice to Bob where Alice performs a-dependent measurements on the particle. Alice performs the measurement 𝕄_a, a∈{0,1}, with 𝕄_0≡{1⊗1}, meaning she does not disturb the particle state, and 𝕄_1≡{1⊗|+⟩⟨+|, 1⊗|-⟩⟨-|}. As before, Bob measures the complementary observable σ_z and records the outcome as a bit b∈{0,1}. In this case, we find that ℐ(A:B)=0.19. Assuming space-like separation of the measurement events of Alice and Bob, this contradicts the NFLCP. In this analysis, we have examined specific instances of U_I and Π_I. Nevertheless, it is apparent that the obtained results can be extended to encompass general operations.

A direct implication of Theorem <ref> is that quantum maps consistent with NFLCP must incorporate the space-time region where the interactions take place. Let us consider Alice's action of applying the unitary U or the measurement 𝕄 as an event (-α,t_A) occurring in her lab. In order to ensure the consistency of these operations with NFLCP, we can express the unitary U as U=|-α⟩⟨-α|_S⊗ U_I+(1_S-|-α⟩⟨-α|_S)⊗1_I. Similarly, the measurement operation 𝕄, occurring at her location, can be represented as 𝕄≡{|-α⟩⟨-α|_S⊗Π_I, |-α⟩⟨-α|_S ⊗Π̃_I, (1_S-|-α⟩⟨-α|_S)⊗1_I}. In both cases, the rationale is that these operations are performed in Alice's lab alone, and no information about them is available outside her lab. The implications of the above for Bell scenarios are also worth noting.
Given two particles in a Bell state, one held by Alice and the other by Bob, the local spin measurements and local unitary operations act on spatially distinct particles; our no-go theorem therefore does not restrict the measurement of spin components or the application of local unitaries on the individual particles of such a pair.

The above scenario has an intriguing explanation in the many-worlds interpretation <cit.>: the ontology of the world always branches in the position basis, and transformations on internal degrees of freedom happen accordingly. In the case considered above, for instance, the unitary U_I or the measurement {Π,Π̃} on ℋ_I takes place in the world where the particle is present at -α. In all other worlds, the internal degree remains unchanged. Moreover, when arbitrary and repeated interactions occur with an internal degree of freedom across different locations, the result is a continuous branching in the position basis. This branching process effectively resolves the preferred basis problem <cit.>. Additionally, as we will explore further, within an arbitrary spin environment model this leads to the emergence of classical objectivity in the position basis.

*The model.— A spin-1/2 quantum particle denoted by 𝒮 is assumed to be in a spatial superposition of being at d possible locations {x⃗_1, x⃗_2, …, x⃗_d}≡𝕏. Let the initial state of 𝒮 be ϱ_𝒮 = |Ψ⟩⟨Ψ|_𝒮⊗ρ_𝒮∈ℋ_𝒮^x ⊗ℋ_𝒮, where |Ψ⟩_𝒮 = ∑_i=1^dα_i |x⃗_i⟩ with ∑_i=1^d‖α_i‖^2=1 represents the spatial state and ρ_𝒮 represents the spin state of 𝒮. Here, ℋ_𝒮^x and ℋ_𝒮 are the Hilbert spaces associated with the spatial and spin degrees of freedom of the system.

The spin of the system 𝒮 interacts with an environment denoted as ℰ, which is composed of point-like spin-1/2 particles fixed in position. Interactions among the environmental subsystems (en-subs) are assumed to be absent. Let ℋ^x_ℰ_i and ℋ_ℰ_i denote the spatial and spin state spaces of the i-th en-sub ℰ_i. Initially, each en-sub is in a random spin state and at a random location: ϱ_ℰ_i = |x⃗⟩⟨x⃗|_ℰ_i⊗ρ_ℰ_i∈ℋ_ℰ_i^x ⊗ℋ_ℰ_i, where x⃗∈𝕏 and ρ_ℰ_i is an arbitrary spin state. Furthermore, we assume that the spin-spin interaction between 𝒮 and ℰ_i is arbitrary and generic. The spin-spin interaction Hamiltonian in this model, when the system and all en-subs are localized at x⃗∈𝕏, is given as:

H_𝒮:ℰ(x)= -∑_i=1^N g_i(x,t)σ_𝒮^i(x)⊗σ_ℰ_i(x) ⊗_j≠ i1_ℰ_j,

where N is the number of en-subs, σ_𝒮^i(x) and σ_ℰ_i(x) are random spin observables of 𝒮 and ℰ_i, respectively, when they interact at position x⃗, and g_i(x,t) is a function of time that quantifies the interaction strength. It is important to note that, unlike all previous spin models, we do not assume any preferred spin observable for the system. In fact, we allow an en-sub to couple to different spin observables of the system at different locations. A small numerical illustration of the resulting position-conditioned branching is sketched below.
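The following is a minimal numerical sketch (our own illustration, not part of the Letter) of how position-conditioned, randomly oriented spin-spin interactions suppress the spatial coherence of the system; a two-location superposition, a handful of en-subs all sitting at x⃗_1, and pure random environment spins are simplifying assumptions.

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(7)
I2 = np.eye(2, dtype=complex)
SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]], dtype=complex)
SZ = np.array([[1, 0], [0, -1]], dtype=complex)
kron = lambda ops: reduce(np.kron, ops)

def rand_obs():
    """Random spin observable n.sigma with |n| = 1 (no preferred axis)."""
    n = rng.normal(size=3); n /= np.linalg.norm(n)
    return n[0] * SX + n[1] * SY + n[2] * SZ

def rand_spin():
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

N = 8  # en-subs, all fixed at location x1

def branch_unitary(j, theta):
    """exp(i theta sigma_S ⊗ sigma_Ej) embedded in the spin sector (A squares to 1)."""
    ops = [rand_obs()] + [I2] * N
    ops[1 + j] = rand_obs()
    A = kron(ops)
    return np.cos(theta) * np.eye(2 ** (N + 1)) + 1j * np.sin(theta) * A

# position-conditioned unitaries: the spins interact only on the |x1><x1| branch
P1 = np.diag([1.0, 0.0]).astype(complex)
P2 = np.diag([0.0, 1.0]).astype(complex)
U = np.eye(2 * 2 ** (N + 1), dtype=complex)
for j in range(N):
    theta = rng.uniform(0.2, np.pi / 2)
    Uj = np.kron(P1, branch_unitary(j, theta)) + np.kron(P2, np.eye(2 ** (N + 1)))
    U = Uj @ U  # en-subs interact with the system one by one

# initial state: (|x1> + |x2>)/sqrt(2) ⊗ |0>_S ⊗ random environment spins
psi = kron([np.array([1.0, 1.0]) / np.sqrt(2),
            np.array([1.0, 0.0], dtype=complex)] + [rand_spin() for _ in range(N)])
psi = U @ psi

rho = np.outer(psi, psi.conj()).reshape(2, 2 ** (N + 1), 2, 2 ** (N + 1))
rho_spatial = np.trace(rho, axis1=1, axis2=3)  # trace out all spins
print(abs(rho_spatial[0, 1]))  # spatial coherence: 0.5 before the interactions, generically much smaller after
```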
*Emergence of objectivity.— We demonstrate that the dynamics governed by H_𝒮:ℰ leads to the formation of a spectrum broadcast structure in the system's position basis. The initial state of the composite system is ϱ_𝒮:ℰ = ϱ_𝒮⊗_i=1^N ϱ_ℰ_i. Let us divide ℰ into d macro-fractions {ℰ^mac_k}_k∈{1,2,⋯,d} based on the locations of the constituent en-subs {ℰ_i}_i∈{1,2,⋯,N}: all en-subs located at x⃗_k form the macro-fraction ℰ^mac_k. Furthermore, let us re-index the en-subs accordingly: the l-th en-sub in the k-th macro-fraction ℰ^mac_k is denoted by ℰ_kl, where l=1,2,⋯,m_k. Here, m_k represents the total number of en-subs in the k-th macro-fraction. See Fig. <ref> for details.

Incorporating the spatial degree of freedom, the unitary operator representing the interaction between 𝒮 and ℰ_kl, in accordance with the no-go result, can be formulated as:

U̅_𝒮:ℰ_kl = ∑_i=j |x⃗_i⟩⟨x⃗_i|_𝒮⊗|x⃗_j⟩⟨x⃗_j|_ℰ_kl⊗ U_𝒮:ℰ_kl(x_i) + ∑_i≠ j |x⃗_i⟩⟨x⃗_i|_𝒮⊗|x⃗_j⟩⟨x⃗_j|_ℰ_kl⊗1_𝒮:ℰ_kl.

Here, the spin-spin interaction of 𝒮:ℰ_kl at x⃗_i is calculated using the interaction Hamiltonian described in Eq. (<ref>) and is represented by the unitary operator:

U_𝒮:ℰ_kl(x_i)= exp[ιθ_kl(x_i) σ_𝒮^kl(x_i)⊗σ_ℰ_kl(x_i)],

where θ_kl(x_i)=∫ g_kl(x_i,t)dt is the interaction strength. We can assume, without compromising generality, that the en-subs interact with the system one by one. The complete interaction between 𝒮 and ℰ is given by U̅_𝒮:ℰ= ∏_i,jU̅_𝒮:ℰ^mac_ij⊗_k≠ i,l≠ j1_ℰ^mac_kl.

Let ℰ_∖ℰ̃ represent the environment after tracing out a subenvironment ℰ̃ and, similarly, let ℰ^mac_i∖ℰ̃_i be the i-th macro-fraction after discarding a portion of it denoted by ℰ̃_i. Tracing out ℰ̃_i, for all i, decoheres the composite system. After tracing out the spatial degrees of freedom of the en-subs, the post-interaction state, ϱ^'_𝒮:ℰ_∖ℰ̃= Tr_ℰ̃(U̅_𝒮:ℰ ϱ_𝒮:ℰ U̅_𝒮:ℰ^†), for sufficiently large ℰ̃_i ∀ i∈{1,2,⋯,d}, is obtained as (see supp. mat.):

ϱ^'_𝒮:ℰ_∖ℰ̃≈∑_i=1^d‖α_i‖^2 |x⃗_i⟩⟨x⃗_i|_𝒮⊗ρ^'_𝒮:ℰ^mac_i∖ℰ̃_i⊗_j≠ iρ_ℰ^mac_j∖ℰ̃_j.

Here, the portions {ℰ̃_i} constitute the whole of ℰ̃, ρ_ℰ^mac_j∖ℰ̃_j for each j is the initial state of the j-th macro-fraction minus ℰ̃_j, and

ρ^'_𝒮:ℰ^mac_i∖ℰ̃_i≈1/21_𝒮⊗(⊗_j:ℰ_ij∈ℰ^mac_i∖ℰ̃_i ρ^'_ℰ_ij) + Ω_𝒮:ℰ^mac_i∖ℰ̃_i

is the composite spin state of the system and ℰ^mac_i after decoherence, where ρ^'_ℰ_ij=cos^2(θ_ij)ρ_ℰ_ij +sin^2(θ_ij)ρ̃_ℰ_ij with ρ̃_ℰ_ij=σ_ℰ_ijρ_ℰ_ijσ^†_ℰ_ij, and the form of Ω_𝒮:ℰ^mac_i∖ℰ̃_i is such that Tr_𝒮(Ω_𝒮:ℰ^mac_i∖ℰ̃_i)=0. For clarity, the notation x_i has been omitted from the above expression for obvious reasons. Therefore, after discarding the system's spin, the spin state of the i-th macro-fraction becomes ρ^'_ℰ^mac_i=⊗_jρ^'_ℰ_ij, where the index j runs over the remaining en-subs of ℰ^mac_i, that is, ℰ_ij∈ℰ^mac_i∖ℰ̃_i. Each en-sub in ℰ^mac_i∖ℰ̃_i contains a fraction of the information regarding the system's presence (or absence) at x⃗_i. However, it is worth noting that the macro-fraction state ρ^'_ℰ^mac_i∖ℰ̃_i, when it is of sufficiently large size, can contain redundant information about the position. Next we proceed to fragment the environment in a manner that allows each fragment to carry the complete classical information about the position of the system. Suppose there are n observers, denoted as 𝒪_1, 𝒪_2, …, 𝒪_n, who have access to different portions, in equal proportions, of each macro-fraction {ℰ^mac_i}_i∈{1,2,…,d} within the environment ℰ. We can represent these fragments as ℱ_1, ℱ_2, …, ℱ_n (see Fig. <ref>). The revised form of Eq. (<ref>), after tracing out the system's spin, is as follows (see supp. mat.):
ϱ^'_𝒮:ℰ_∖ℰ̃= ∑_i=1^d‖α_i‖^2 |x⃗_i⟩⟨x⃗_i|_𝒮⊗ξ^ℱ_1_i⊗ξ^ℱ_2_i⊗ξ^ℱ_3_i⊗⋯⊗ξ^ℱ_n_i.

Here, ξ^ℱ_k_i represents the spin state of the k-th fragment when the system is localized at x⃗_i. This state, ξ^ℱ_k_i, is a product state in which the en-subs present at x⃗_i have the post-interaction spin state ρ^'_ℰ_ij, while the remaining en-subs are in their initial states ρ_ℰ_i^' j (here i^'≠ i). The state ϱ^'_𝒮:ℰ of Eq. (<ref>) is a spectrum broadcast structure if ξ^ℱ_k_i and ξ^ℱ_k_i^' are perfectly distinguishable for i≠ i^' across all k∈{1,2,⋯, n}.

We use the quantum fidelity to quantify this distinguishability <cit.>. Since fidelity is multiplicative under tensor products <cit.>, we obtain:

F(ξ^ℱ_k_i,ξ^ℱ_k_i^')= ∏_j∈{i,i^'}, j^':ℰ_jj^'∈ℱ_k F(ρ_ℰ_jj^', ρ^'_ℰ_jj^').

Here, F(ρ,σ)=Tr√(ρ^1/2σρ^1/2) represents the fidelity between the states ρ and σ. To clarify, the index j^' spans all en-subs that constitute the fragment ℱ_k, and recall that i,i^'∈{1,2,⋯,d}. Since the interactions are non-zero, meaning θ_jj^'≠ 0, the fidelity satisfies F(ρ_ℰ_jj^', ρ^'_ℰ_jj^')<1. Consequently, the product effectively approaches zero for large fragments: F(ξ^ℱ_k_i,ξ^ℱ_k_i^')≈ 0 for i≠ i^' across all k∈{1,2,⋯, n}. Thus, the states ξ^ℱ_k_i and ξ^ℱ_k_i^' become perfectly distinguishable.

The structure described in Eq. (<ref>) represents a complete and redundant encoding of information about the position of the system 𝒮 on the environment spins. Observers gain complete information about the position of the system when they access the spins of randomly sampled portions of the environment. However, it is necessary that a significant portion of the environment, which decoheres the system's state, remains inaccessible. One intriguing aspect of Eq. (<ref>) is its inherent integration of the Born probabilities {‖α_i‖^2}, meaning that all observers simultaneously observe the system at x⃗_i with probability ‖α_i‖^2. The environment naturally selects, records, and collapses the system's wavefunction in the position basis according to the Born rule, in a quantum Darwinian manner.

*Discussion.— In this Letter, we have presented an intriguing no-go result and explored its implications for the generic emergence of classical objectivity in the position basis. The significance of the no-go result lies in its relevance to quantum foundations: it demonstrates that interactions in internal degrees of freedom are always mediated by the spatial degree of freedom. Moreover, these interactions couple the involved systems in the position basis, effectively preventing faster-than-light communication. Our no-go theorem directly challenges the widely held belief that internal degrees of freedom can act as isolated physical systems that can be entirely dissociated from the spatial wavefunctions of the involved particles <cit.>. This result bears implications for the preferred basis problem in models of the emergence of classical objectivity based on decoherence and QD. Interactions among internal degrees of freedom, which are pervasive at the atomic and sub-atomic levels, continually lead to the unavoidable branching of the universe's wavefunction in the position basis, in the spirit of the many-worlds interpretation of QM. As a consequence, the position basis naturally emerges as the preferred basis. In our analysis, we successfully integrated the constraints imposed by the no-go result into the dynamics of a spin thermalization model, leading to the objectivity of the system's spatial degree of freedom in the position basis.
A key strength of our model lies in its genuine generality, as it avoids any reliance on a preferred spin observable and preferred interaction Hamiltonian. Another notable feature is the flexibility regarding the environment's initial state, which can be arbitrary. However, we do make certain assumptions: Firstly, we consider non-interacting environmental spins, and secondly, we set the self-Hamiltonian to zero. We acknowledge that relaxing these assumptions requires further investigation and should be a subject of future work.Despite these advancements, there are important limitations that need to be addressed. In our theorem and decoherence model, we made simplifying assumptions of point-like particles and interactions with vanishing range. To create a more comprehensive and practical representation, Eq. (<ref>) should be extended to incorporate interactions of non-vanishing range, while also considering the speed limit of propagation for interaction influences. Lastly, the no-go theorem where we have used NFLCP within non-relativistic QM should be re-interpreted and generalized within relativistic quantum mechanics. Apart from its fundamental implications, such a result can have potential importance in many-body dynamics and energetics of quantum measurements.Authors acknowledge the financial support from DST/ICPS/QuST/Theme-1/2019/General Project number Q-68.59 fxundefined [1] ifx#1 fnum [1] #1firstoftwo secondoftwo fx [1] #1firstoftwo secondoftwo noop [0]secondoftwo ref[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0] rl [1]href #1 @bib@innerbibempty [Zurek(2022)]e_24111520 author author W. H. Zurek, title title Quantum theory of the classical: Einselection, envariance, quantum darwinism and extantons, journal journal Entropy volume 24, https://doi.org/10.3390/e24111520 10.3390/e24111520 (year 2022)NoStop [Zurek(2009)]Zurek_2009 author author W. H. Zurek, title title Quantum darwinism, https://doi.org/10.1038/nphys1202 journal journal Nature Physics volume 5, pages 181 (year 2009)NoStop [Zurek(2014)]10.1063/PT.3.2550 author author W. H. Zurek, title title Quantum Darwinism, classical reality, and the randomness of quantum jumps, https://doi.org/10.1063/PT.3.2550 journal journal Physics Today volume 67, pages 44 (year 2014)NoStop [Zurek(2003a)]RevModPhys_75.715 author author W. H. Zurek, title title Decoherence, einselection, and the quantum origins of the classical, https://doi.org/10.1103/RevModPhys.75.715 journal journal Rev. Mod. Phys. volume 75, pages 715 (year 2003a)NoStop [Zurek(2003b)]PhysRevLett_90.120404 author author W. H. Zurek, title title Environment-assisted invariance, entanglement, and probabilities in quantum physics, https://doi.org/10.1103/PhysRevLett.90.120404 journal journal Phys. Rev. Lett. volume 90, pages 120404 (year 2003b)NoStop [Ollivier et al.(2004)Ollivier, Poulin, and Zurek]PhysRevLett_93.220401 author author H. Ollivier, author D. Poulin, and author W. H. Zurek, title title Objective properties from subjective quantum states: Environment as a witness, https://doi.org/10.1103/PhysRevLett.93.220401 journal journal Phys. Rev. Lett. volume 93, pages 220401 (year 2004)NoStop [Ollivier et al.(2005)Ollivier, Poulin, and Zurek]PhysRevA_72.042113 author author H. Ollivier, author D. Poulin, and author W. H. Zurek, title title Environment as a witness: Selective proliferation of information and emergence of objectivity in a quantum universe, https://doi.org/10.1103/PhysRevA.72.042113 journal journal Phys. 
[8] R. Blume-Kohout and W. H. Zurek, Quantum Darwinism: Entanglement, branches, and the emergent classicality of redundantly stored quantum information, Phys. Rev. A 73, 062310 (2006).
[9] C. J. Riedel, W. H. Zurek, and M. Zwolak, Objective past of a quantum universe: Redundant records of consistent histories, Phys. Rev. A 93, 032126 (2016).
[10] R. Horodecki, J. K. Korbicz, and P. Horodecki, Quantum origins of objectivity, Phys. Rev. A 91, 032122 (2015).
[11] C. J. Riedel, Classical branch structure from spatial redundancy in a many-body wave function, Phys. Rev. Lett. 118, 120402 (2017).
[12] J. K. Korbicz, Roads to objectivity: Quantum Darwinism, Spectrum Broadcast Structures, and Strong quantum Darwinism – a review, Quantum 5, 571 (2021).
[13] T. P. Le and A. Olaya-Castro, Strong quantum Darwinism and strong independence are equivalent to spectrum broadcast structure, Phys. Rev. Lett. 122, 010403 (2019).
[14] F. G. S. L. Brandão, M. Piani, and P. Horodecki, Generic emergence of classical features in quantum Darwinism, Nature Communications 6, 7908 (2015).
[15] P. A. Knott, T. Tufarelli, M. Piani, and G. Adesso, Generic emergence of objectivity of observables in infinite dimensions, Phys. Rev. Lett. 121, 160401 (2018).
[16] X.-L. Qi and D. Ranard, Emergent classicality in general multipartite states and channels, Quantum 5, 555 (2021).
[17] H.-F. Fu, Uniqueness of the observable leaving redundant imprints in the environment in the context of quantum Darwinism, Phys. Rev. A 103, 042210 (2021).
[18] R. Blume-Kohout and W. H. Zurek, A simple example of "quantum Darwinism": Redundant information storage in many-spin environments, Foundations of Physics 35, 1857 (2005).
[19] A. Touil, B. Yan, D. Girolami, S. Deffner, and W. H. Zurek, Eavesdropping on the decohering environment: Quantum Darwinism, amplification, and the origin of objective classical reality, Phys. Rev. Lett. 128, 010401 (2022).
[20] M. Zwolak, C. J. Riedel, and W. H. Zurek, Amplification, decoherence and the acquisition of information by spin environments, Scientific Reports 6, 25277 (2016).
[21] M. Zwolak, C. J. Riedel, and W. H. Zurek, Amplification, redundancy, and quantum Chernoff information, Phys. Rev. Lett. 112, 140406 (2014).
[22] C. J. Riedel and W. H. Zurek, Quantum Darwinism in an everyday environment: Huge redundancy in scattered photons, Phys. Rev. Lett. 105, 020404 (2010).
[23] J. K. Korbicz, P. Horodecki, and R. Horodecki, Objectivity in a noisy photonic environment through quantum state information broadcasting, Phys. Rev. Lett. 112, 120402 (2014).
[24] C. J. Riedel and W. H. Zurek, Redundant information from thermal illumination: quantum Darwinism in scattered photons, New Journal of Physics 13, 073038 (2011).
[25] J. Tuziemski and J. K. Korbicz, Dynamical objectivity in quantum Brownian motion, Europhysics Letters 112, 40008 (2015).
[26] R. Blume-Kohout and W. H. Zurek, Quantum Darwinism in quantum Brownian motion, Phys. Rev. Lett. 101, 240405 (2008).
[27] J. P. Paz and A. J. Roncaglia, Redundancy of classical and quantum correlations during decoherence, Phys. Rev. A 80, 042111 (2009).
[28] J. Tuziemski and J. K. Korbicz, Analytical studies of spectrum broadcast structures in quantum Brownian motion, Journal of Physics A: Mathematical and Theoretical 49, 445301 (2016).
[29] J. Tuziemski and J. K. Korbicz, Objectivisation in simplified quantum Brownian motion models, Photonics 2, 228 (2015).
[30] M. Zwolak, H. T. Quan, and W. H. Zurek, Quantum Darwinism in a mixed environment, Phys. Rev. Lett. 103, 110402 (2009).
[31] M. Zwolak, H. T. Quan, and W. H. Zurek, Redundant imprinting of information in nonideal environments: Objective reality via a noisy channel, Phys. Rev. A 81, 062110 (2010).
[32] M. Zwolak, C. J. Riedel, and W. H. Zurek, Amplification, decoherence and the acquisition of information by spin environments, Scientific Reports 6, 25277 (2016).
[33] N. Mirkin and D. A. Wisniacki, Many-body localization and the emergence of quantum Darwinism, Entropy 23, 10.3390/e23111377 (2021).
[34] C. J. Riedel, W. H. Zurek, and M. Zwolak, The rise and fall of redundancy in decoherence and quantum Darwinism, New Journal of Physics 14, 083010 (2012).
[35] S. Campbell, B. Çakmak, O. E. Müstecaplıoğlu, M. Paternostro, and B. Vacchini, Collisional unfolding of quantum Darwinism, Phys. Rev. A 99, 042103 (2019).
[36] P. Mironowicz, P. Należyty, P. Horodecki, and J. K. Korbicz, System information propagation for composite structures, Phys. Rev. A 98, 022124 (2018).
[37] E. Ryan, M. Paternostro, and S. Campbell, Quantum Darwinism in a structured spin environment, Physics Letters A 416, 127675 (2021).
[38] G. García-Pérez, D. A. Chisholm, M. A. C. Rossi, G. M. Palma, and S. Maniscalco, Decoherence without entanglement and quantum Darwinism, Phys. Rev. Research 2, 012061 (2020).
[39] S. Lorenzo, M. Paternostro, and G. M. Palma, Anti-Zeno-based dynamical control of the unfolding of quantum Darwinism, Phys. Rev. Research 2, 013164 (2020).
[40] M. Kiciński and J. K. Korbicz, Decoherence and objectivity in higher spin environments, Phys. Rev. A 104, 042216 (2021).
[41] A. Lampo, J. Tuziemski, M. Lewenstein, and J. K. Korbicz, Objectivity in the non-Markovian spin-boson model, Phys. Rev. A 96, 012120 (2017).
[42] N. Balanesković, Random unitary evolution model of quantum Darwinism with pure decoherence, The European Physical Journal D 69, 232 (2015).
[43] N. Balanesković and M. Mendler, Dissipation, dephasing and quantum Darwinism in qubit systems with random unitary interactions, The European Physical Journal D 70, 177 (2016).
[44] G. García-Pérez, M. A. C. Rossi, and S. Maniscalco, IBM Q Experience as a versatile experimental testbed for simulating open quantum systems, npj Quantum Information 6, 1 (2020).
[45] T. K. Unden, D. Louzon, M. Zwolak, W. H. Zurek, and F. Jelezko, Revealing the emergence of classicality using nitrogen-vacancy centers, Phys. Rev. Lett. 123, 140402 (2019).
[46] M.-C. Chen, H.-S. Zhong, Y. Li, D. Wu, X.-L. Wang, L. Li, N.-L. Liu, C.-Y. Lu, and J.-W. Pan, Emergence of classical objectivity of quantum Darwinism in a photonic quantum simulator, Science Bulletin 64, 580 (2019).
[47] M. A. Ciampini, G. Pinna, P. Mataloni, and M. Paternostro, Experimental signature of quantum Darwinism in photonic cluster states, Phys. Rev. A 98, 020101 (2018).
[48] R. Brunner, R. Akis, D. K. Ferry, F. Kuchar, and R. Meisels, Coupling-induced bipartite pointer states in arrays of electron billiards: Quantum Darwinism in action?, Phys. Rev. Lett. 101, 024102 (2008).
[49] A. M. Burke, R. Akis, T. E. Day, G. Speyer, D. K. Ferry, and B. R. Bennett, Periodic scarred states in open quantum dots as evidence of quantum Darwinism, Phys. Rev. Lett. 104, 176801 (2010).
[50] H. Everett, "Relative state" formulation of quantum mechanics, Rev. Mod. Phys. 29, 454 (1957).
[51] B. S. DeWitt, Quantum mechanics and reality, Physics Today 23, 30 (1970).
[52] J. A. Barrett, The preferred-basis problem and the quantum mechanics of everything, British Journal for the Philosophy of Science 56, 199 (2005).
[53] W. H. Zurek, Pointer basis of quantum apparatus: Into what mixture does the wave packet collapse?, Phys. Rev. D 24, 1516 (1981).
[54] H. Inamori, No quantum process can explain the existence of the preferred basis: Decoherence is not universal, Journal of Quantum Information Science 6, 214 (2016).
[55] A. Uhlmann, The "transition probability" in the state space of a *-algebra, Reports on Mathematical Physics 9, 273 (1976).
[56] Y.-C. Liang, Y.-H. Yeh, P. E. M. F. Mendonça, R. Y. Teh, M. D. Reid, and P. D. Drummond, Quantum fidelity measures for mixed states, Reports on Progress in Physics 82, 076001 (2019).
[57] R. Jozsa, Fidelity for mixed quantum states, Journal of Modern Optics 41, 2315 (1994).
[58] Y. Aharonov, S. Popescu, D. Rohrlich, and P. Skrzypczyk, Quantum Cheshire cats, New Journal of Physics 15, 113015 (2013).
[59] T. Denkmayr, H. Geppert, S. Sponar, H. Lemmel, A. Matzkin, J. Tollaksen, and Y. Hasegawa, Observation of a quantum Cheshire cat in a matter-wave interferometer experiment, Nature Communications 5, 4492 (2014).
Supplemental Materials: Quantum operations restricted by no faster-than-light communication principle and generic emergence of objectivity in the position basis

Rajendra Singh Bhati and Arvind
Department of Physical Sciences, Indian Institute of Science Education and Research (IISER) Mohali, Sector 81 SAS Nagar, Manauli PO 140306 Punjab India

§ APPENDIX–A: DETAILED STEPS OF THE PROOF OF NO-GO THEOREM

Here, we evaluate the mutual information ℐ(A:B) in the thought experiment considered in the proof of the no-go theorem in the main text. We consider both cases one by one.

(a) Alice generates a uniformly random bit a as a message to send to Bob. The quantum state representing the bit can be written as ρ_A=1/2∑_a∈{0,1}|a⟩⟨a|_A. The classical-quantum (cq) state of Alice's bit and the spin-1/2 particle can be expressed as ρ_ASI=1/2∑_a∈{0,1}|a⟩⟨a|_A⊗|ψ⟩⟨ψ|_S⊗|0⟩⟨0|_I. If Alice wants to send a bit a∈{0,1} to Bob, she applies an operation U_a on the particle's state |Ψ⟩_SI. U_a is specified as U_a=1⊗1 for a=0 and U_a=1⊗σ_x for a=1. The state after Alice's operation becomes ρ^'_ASI=1/2∑_a∈{0,1}|a⟩⟨a|_A⊗|ψ⟩⟨ψ|_S⊗|a⟩⟨a|_I. Bob measures the operator σ_z on the electron and records the outcome as a bit b∈{0,1}. After tracing out the particle's wavefunction and spin state, we obtain the classical-classical (cc) state of Alice's and Bob's bits as ρ_AB=1/2∑_a∈{0,1}|a⟩⟨a|_A⊗|a⟩⟨a|_B. The mutual information for ρ_AB is ℐ(A:B)=1.

(b) Similar to the previous case, Alice generates a bit a∈{0,1}, the state of which is given by Eq. (<ref>), and the corresponding cq-state of Alice's bit and the particle is given by Eq. (<ref>). To send the bit a=0, Alice performs 𝕄_0≡1_S⊗1_I on the particle, meaning she does not disturb its state. If the bit is a=1, she performs the following measurement: 𝕄_1={1⊗|+⟩⟨+|, 1⊗|-⟩⟨-|}. Notably, 𝕄_1 is a measurement on the internal degree of freedom that does not disturb the spatial wavefunction. The state after Alice's operation becomes ρ^'_ASI =1/2|0⟩⟨0|_A⊗|ψ⟩⟨ψ|_S⊗|0⟩⟨0|_I +1/4|1⟩⟨1|_A⊗|ψ⟩⟨ψ|_S⊗∑_k∈{+,-}|k⟩⟨k|_I. Bob measures σ_z on the electron and records the outcome as a bit b∈{0,1}. After tracing out the particle's wavefunction and spin state, we obtain the classical-classical (cc) state of Alice's and Bob's bits as ρ_AB =1/2|0⟩⟨0|_A⊗|0⟩⟨0|_B+1/4|1⟩⟨1|_A⊗|1⟩⟨1|_B +1/4|1⟩⟨1|_A⊗|0⟩⟨0|_B. Using Eq. (<ref>), the mutual information between Alice and Bob is ℐ(A:B) =1-h(1/4) =1+1/4log_2 1/4+3/4log_2 3/4≈0.19, where h(·) is the binary Shannon entropy.

Our main argument is based on the assumption that the spin of the electron (or the internal degree of freedom of any quantum particle) is accessible at all locations where the wavefunction is non-zero. Moreover, we implicitly assume that the internal degree of freedom has no association with the spatial degree of freedom and, thus, any manipulation at any point in space updates the spin state at all points in space; this is how Alice and Bob are able to signal. In order to make all operations spatially local, we need to include the notion of spatially localized quantum operations such as U^'=|-α⟩⟨-α|⊗σ_x+(1-|-α⟩⟨-α|)⊗1, or measurements of the form 𝕄^'≡{|-α⟩⟨-α|⊗|+⟩⟨+|, |-α⟩⟨-α|⊗|-⟩⟨-|, (1-|-α⟩⟨-α|)⊗1}. Eqs.
(<ref>) and (<ref>) incorporate the fact that manipulations on spin that take place inside Alice's lab do not disturb the spin in Bob's lab. It is easy to follow that such operations do not violate the no faster-than-light communication principle. At first glance, Theorem <ref> and its proof appear very trivial. However, our proof using no faster-than-light communication principle highlights a deeper aspect of the connection between the spatial wavefunction and the internal degree of freedom. Moreover, our theorem has established that no manipulations (unitary or measurements) on the internal degree can be performed without disturbing the spatial wavefunction. If operations of the form U or 𝕄 (as specified in Theorem <ref>) are not permitted, it may be questioned what types of operations are permissible under the no faster-than-light communication principle. An accurate answer to this question may not be plausible here. However, we propose a possible solution which can be used in a crude way in certain physical scenarios to get interesting results.Here, we have assumed that the observer is sharply localized at a position x. This is an unrealistic scenario. In a more practical situation, we can assume the effects of observer's action are reachable in a spatial region x±δ. In that case, we can replace the projection operator |x⟩⟨$| by∫_x-δ^x+δ|x^'⟩⟨d|x^'. So far, we have only considered the simple case of one dimensional spatial degree of freedom. However, the generalization to three dimensional space is straightforward and more realistic.§ APPENDIX–B: POST-INTERACTION STATE The post-interaction spin state of system and environment,ϱ^'_𝒮:ℰ, is evaluated as:ϱ^'_𝒮:ℰ=_⊗_ijℋ^x_ℰ_ij(U̅_𝒮:ℰϱ_𝒮:ℰU̅^†_𝒮:ℰ). Here,_⊗_ijℋ^x_ℰ_ij(·)represents tracing out spatial degrees of freedom of all environmental-subsystems (en-subs){ℰ_ij:i=1,2,⋯,d; ∀j}. The unitaryU̅_𝒮:ℰis given byU̅_𝒮:ℰ=∏_k,lU̅_𝒮:ℰ_kl⊗_ij≠ kl1_ℰ_ij,whereU̅_𝒮:ℰ_kl =∑_i=j|x⃗_i⟩⟨_|𝒮⊗|x⃗_j⟩⟨_|ℰ_kl⊗U_𝒮:ℰ_kl(x_i) +∑_i≠ j|x⃗_i⟩⟨_|𝒮⊗|x⃗_j⟩⟨_|ℰ_kl⊗1_𝒮:ℰ_kl.The spin-spin interaction between the system𝒮and en-subℰ_kl ∀k,latx⃗_iisU_𝒮:ℰ_kl(x_i)=expιθ_kl(x_i)σ_𝒮^kl(x_i)⊗σ_ℰ_kl(x_i). Notably,σ_𝒮^kl(x_i)andσ_ℰ_kl(x_i)are arbitrary spin observables. The variableθ_kl(x_i)represents the interaction strength. Let us now evaluate the post-interaction stateϱ^'_𝒮:ℰ. The initial state of the system and environment is expressed as: ϱ_𝒮:ℰ =(∑_i,jα_iα^∗_j |x⃗_i⟩⟨x⃗_j|_𝒮⊗ρ_𝒮)⊗ϱ_ℰ^mac_1⊗ϱ_ℰ^mac_2⊗⋯⊗ϱ_ℰ^mac_d.Here,ϱ_ℰ^mac_i=⊗_jϱ_ℰ_ij. Let us assume without loss of generality that the system interacts with macroscopic fractions one by one in the orderℰ^mac_1,ℰ^mac_2,⋯,ℰ^mac_d. Furthermore, we can assume that en-subs interact with the system one by one within a macroscopic fraction. Suppose the order of interactions within the macro-fractionℰ^mac_iisℰ_i1,ℰ_i2,ℰ_i3,⋯,ℰ_im_i. The corresponding unitary operation can be decomposed as:U̅_𝒮:ℰ^mac_i=U̅_𝒮:ℰ_i m_i⋯U̅_𝒮:ℰ_i3U̅_𝒮:ℰ_i2U̅_𝒮:ℰ_i1.The interaction unitaryU̅_𝒮:ℰ_11transforms the stateϱ_𝒮:ℰintoϱ^(11)_𝒮:ℰ, thenU̅_𝒮:ℰ_12transformsϱ^(11)_𝒮:ℰintoϱ^(12)_𝒮:ℰand so on. Letρ_𝒮:ℰ^mac_idenote the composite spin state of system andi-th macro-fractionρ_𝒮:ℰ^mac_i=ρ_𝒮⊗_jρ_ℰ_ij. Hereafter, similar uses of this notation are understood. 
We obtainϱ_𝒮:ℰ ϱ^(11)_𝒮:ℰ= ‖α_1‖^2|x⃗_1⟩⟨_|𝒮⊗_i^'|x⃗_1⟩⟨_|ℰ_1i^'⊗(U_𝒮:ℰ_11(x_1))ρ_𝒮:ℰ^mac_1(U_𝒮:ℰ_11(x_1))^†⊗_j^'≠ 1ϱ_ℰ^mac_j^' +(∑_l≠ 1α_lα_1^∗|x⃗_l⟩⟨x⃗_1|_𝒮)⊗_i^'|x⃗_1⟩⟨_|ℰ_1i^'⊗ρ_𝒮:ℰ^mac_1(U_𝒮:ℰ_11(x_1))^†⊗_j^'≠ 1ϱ_ℰ^mac_j^' +(∑_m≠ 1α_1α_m^∗|x⃗_1⟩⟨x⃗_m|_𝒮)⊗_i^'|x⃗_1⟩⟨_|ℰ_1i^'⊗ U_𝒮:ℰ_11(x_1)ρ_𝒮:ℰ^mac_1⊗_j^'≠ 1ϱ_ℰ^mac_j^' +(∑_l≠ 1,m≠ 1α_1α_m^∗|x⃗_l⟩⟨x⃗_m|_𝒮)⊗_i^'|x⃗_1⟩⟨_|ℰ_1i^'⊗ρ_𝒮:ℰ^mac_1⊗_j^'≠ 1ϱ_ℰ^mac_j^'.Similarly,ϱ^(11)_𝒮:ℰ ϱ^(12)_𝒮:ℰ= ‖α_1‖^2|x⃗_1⟩⟨_|𝒮⊗_i^'|x⃗_1⟩⟨_|ℰ_1i^'⊗(U_𝒮:ℰ_12ℰ_11(x_1))ρ_𝒮:ℰ^mac_1(U_𝒮:ℰ_12ℰ_11(x_1))^†⊗_j^'≠ 1ϱ_ℰ^mac_j^' +(∑_l≠ 1α_lα_1^∗|x⃗_l⟩⟨x⃗_1|_𝒮)⊗_i^'|x⃗_1⟩⟨_|ℰ_1i^'⊗ρ_𝒮:ℰ^mac_1(U_𝒮:ℰ_12ℰ_11(x_1))^†⊗_j^'≠ 1ϱ_ℰ^mac_j^' +(∑_m≠ 1α_1α_m^∗|x⃗_1⟩⟨x⃗_m|_𝒮)⊗_i^'|x⃗_1⟩⟨_|ℰ_1i^'⊗ U_𝒮:ℰ_12ℰ_11(x_1)ρ_𝒮:ℰ^mac_1⊗_j^'≠ 1ϱ_ℰ^mac_j^' +(∑_l≠ 1,m≠ 1α_1α_m^∗|x⃗_l⟩⟨x⃗_m|_𝒮)⊗_i^'|x⃗_1⟩⟨_|ℰ_1i^'⊗ρ_𝒮:ℰ^mac_1⊗_j^'≠ 1ϱ_ℰ^mac_j^',whereU_𝒮:ℰ_12ℰ_11(x_1)=U_𝒮:ℰ_12(x_1)U_𝒮:ℰ_11(x_1). Continuing the above process of derivation, we obtain:ϱ_𝒮:ℰ ϱ^(1m_1)_𝒮:ℰ= ‖α_1‖^2|x⃗_1⟩⟨_|𝒮⊗_i^'|x⃗_1⟩⟨_|ℰ_1i^'⊗ρ^'_𝒮:ℰ^mac_1⊗_j^'≠ 1ϱ_ℰ^mac_j^' + (∑_l≠ 1α_lα_1^∗|x⃗_l⟩⟨x⃗_1|_𝒮)⊗_i^'|x⃗_1⟩⟨_|ℰ_1i^'⊗Ξ^†_𝒮:ℰ_1^mac⊗_j^'≠ 1ϱ_ℰ^mac_j^' + (∑_m≠ 1α_1α_m^∗|x⃗_1⟩⟨x⃗_m|_𝒮)⊗_i^'|x⃗_1⟩⟨_|ℰ_1i^'⊗Ξ_𝒮:ℰ_1^mac⊗_j^'≠ 1ϱ_ℰ^mac_j^' + (∑_l≠ 1,m≠ 1α_lα_m^∗|x⃗_l⟩⟨x⃗_m|_𝒮)⊗_j^'ϱ_ℰ^mac_j^'.Here, we denoteρ^'_𝒮:ℰ^mac_i =(U_𝒮:ℰ_im_i⋯ℰ_i3ℰ_i2ℰ_i1(x_i))ρ_𝒮:ℰ^mac_i(U_𝒮:ℰ_im_i⋯ℰ_i3ℰ_i2ℰ_i1(x_i))^† Ξ_𝒮:ℰ^mac_i = U_𝒮:ℰ_im_i⋯ℰ_i3ℰ_i2ℰ_i1(x_i)ρ_𝒮:ℰ^mac_i.The stateϱ^(2m_2)_𝒮:ℰ=(U̅_𝒮:ℰ^mac_2)ϱ^(1m_1)_𝒮:ℰ(U̅_𝒮:ℰ^mac_2)^†is evaluated in the similar manner:ϱ^(1m_1)_𝒮:ℰ ϱ^(2m_2)_𝒮:ℰ= ‖α_1‖^2|x⃗_1⟩⟨_|𝒮⊗_i^'|x⃗_1⟩⟨_|ℰ_1i^'⊗ρ^'_𝒮:ℰ^mac_1⊗_j^'≠ 1ϱ_ℰ^mac_j^' + ‖α_2‖^2|x⃗_2⟩⟨_|𝒮⊗_i^'|x⃗_2⟩⟨_|ℰ_2i^'⊗ρ^'_𝒮:ℰ^mac_2⊗_j^'≠ 2ϱ_ℰ^mac_j^' + α_1α_2^∗|x⃗_1⟩⟨x⃗_2|_𝒮⊗_i^'|x⃗_1⟩⟨_|ℰ_1i^'⊗_i^'|x⃗_2⟩⟨_|ℰ_2i^'⊗Ω_𝒮:ℰ^mac_1ℰ^mac_2⊗_j^'≠ 1, 2ϱ_ℰ^mac_j^' + α_1^∗α_2|x⃗_2⟩⟨x⃗_1|_𝒮⊗_i^'|x⃗_1⟩⟨_|ℰ_1i^'⊗_i^'|x⃗_2⟩⟨_|ℰ_2i^'⊗Ω_𝒮:ℰ^mac_2ℰ^mac_1⊗_j^'≠ 1, 2ϱ_ℰ^mac_j^' + (∑_l≠ 1,2α_lα_1^∗|x⃗_l⟩⟨x⃗_1|_𝒮)⊗_i^'|x⃗_1⟩⟨_|ℰ_1i^'⊗Ξ^†_𝒮:ℰ_1^mac⊗_j^'≠ 1ϱ_ℰ^mac_j^' + (∑_l≠ 1,2α_lα_2^∗|x⃗_l⟩⟨x⃗_2|_𝒮)⊗_i^'|x⃗_2⟩⟨_|ℰ_2i^'⊗Ξ^†_𝒮:ℰ_2^mac⊗_j^'≠ 2ϱ_ℰ^mac_j^' + (∑_m≠ 1,2α_1α_m^∗|x⃗_1⟩⟨x⃗_m|_𝒮)⊗_i^'|x⃗_1⟩⟨_|ℰ_1i^'⊗Ξ_𝒮:ℰ_1^mac⊗_j^'≠ 1ϱ_ℰ^mac_j^' + (∑_m≠ 1,2α_2α_m^∗|x⃗_2⟩⟨x⃗_m|_𝒮)⊗_i^'|x⃗_2⟩⟨_|ℰ_2i^'⊗Ξ_𝒮:ℰ_2^mac⊗_j^'≠ 2ϱ_ℰ^mac_j^' + (∑_l≠ 1,2m≠ 1,2α_lα_m^∗|x⃗_l⟩⟨x⃗_m|_𝒮)⊗_j^'ϱ_ℰ^mac_j^',where we have denotedΩ_𝒮:ℰ^mac_iℰ^mac_j =(U_𝒮:ℰ_im_i⋯ℰ_i3ℰ_i2ℰ_i1(x_i))ρ_𝒮:ℰ^mac_iℰ^mac_j(U_𝒮:ℰ_jm_j⋯ℰ_j3ℰ_j2ℰ_j1(x_j))^†. It can be easily verified that the stateϱ_𝒮:ℰ^(dm_d)=U̅_𝒮:ℰϱ_𝒮:ℰU̅_𝒮:ℰ^†is thus given byϱ_𝒮:ℰ^(dm_d)= ∑_i=1^d‖α_i‖^2|x⃗_i⟩⟨_|𝒮⊗_i^'|x⃗_i⟩⟨_|ℰ_ii^'⊗ρ_𝒮:ℰ_i^mac^'⊗_j≠ iϱ_ℰ_j^mac + ∑_k≠ l^d α_k α_l^∗|x⃗_k⟩⟨x⃗_l|_𝒮⊗_k^'|x⃗_k⟩⟨_|ℰ_kk^'⊗_l^'|x⃗_l⟩⟨_|ℰ_ll^'⊗Ω_𝒮:ℰ_k^macℰ_l^mac⊗_j≠ k,lϱ_ℰ_j^macAfter tracing out the spatial degree of en-subs, see Eq. (<ref>), we obtain the composite spin state along with the spatial degree of the system,ϱ_𝒮:ℰ^'= ∑_i=1^d‖α_i‖^2|x⃗_i⟩⟨_|𝒮⊗ρ_𝒮:ℰ_i^mac^'⊗_j≠ iρ_ℰ_j^mac + ∑_k≠ l^d α_k α_l^∗|x⃗_k⟩⟨x⃗_l|_𝒮⊗Ω_𝒮:ℰ_k^macℰ_l^mac⊗_j≠ k,lρ_ℰ_j^mac§ APPENDIX–C: SPIN STATE OF THE SYSTEM AND MACRO-FRACTIONS AFTER DECOHERENCE Eq. (<ref>),ϱ_𝒮:ℰ^', represents a multipartite entangled state. Tracing out en-subs' spins decoheresϱ_𝒮:ℰ^'. Here, we evaluate the state after tracing out significantly large portions of macro-fractions. Without loosing the generality, we assume that the firstn_ien-subs ini-th macro-fraction that interact with the system are discarded or cannot be accessed by observers. 
Since interactions among en-subs are assumed to be absent, we can evaluate effect of interaction with each en-sub separately.As we will show, every discarded en-sub strictly increases the mixness in the system spin. Let ρ_𝒮=(1+r⃗·σ)/2, where 0≤‖r⃗‖≤ 1, and ρ_ℰ_kl represent the density matrices corresponding to the spins of the system 𝒮 and an en-sub ℰ_kl, respectively. Furthermore, let Ω=[ c_11 c_12; c_21 c_22 ] be an arbitrary matrix (operator). Consider a unitary operator U_𝒮:ℰ_kl=expιθ_klσ_𝒮^kl⊗σ_ℰ_kl acting on the composite space ℋ_𝒮⊗ℋ_ℰ_kl. Let ρ_𝒮^'=(1+r⃗^⃗'⃗·σ)/2, where 0≤‖r⃗^⃗'⃗‖≤ 1, and Ω^'=[ c_11^' c_12^'; c_21^' c_22^' ] be given as ρ_𝒮^'=_ℰ_kl(U_𝒮:ℰ_klρ_𝒮⊗ρ_ℰ_kl U^†_𝒮:ℰ_kl) and Ω^'=_ℰ_kl(U_𝒮:ℰ_klΩ⊗ρ_ℰ_kl), respectively. Then the followings are true: (i)‖r⃗^⃗'⃗‖^2 = ‖r⃗‖^2(1-δsin[2](θ_kl)sin[2](ϕ)), (ii)∑_i,j∈{1,2}‖ c^'_ij‖^2 = ∑_ij‖ c^'_ij‖^2(1-δsin[2](θ_kl)), where ϕ is the angle between σ^kl_𝒮=ŝ·σ and r⃗=‖r⃗‖r̂ cos(ϕ)=ŝ·r̂, and δ=1-(σ_ℰ_klρ_ℰ_kl)^2.(i) It is straightforward that ρ^'_𝒮=cos[2](θ_kl)ρ_𝒮+sin[2](θ_kl)ρ̃_𝒮-r/2sin(2θ_kl)⟨σ_ℰ_kl⟩(ŝ·r̂)·σ, where ρ̃_𝒮=σ^kl_𝒮ρ_𝒮(σ^kl_𝒮)^†, r=‖r⃗‖, and ⟨σ_ℰ_kl⟩=σ_ℰ_klρ_ℰ_kl. Furthermore, we have ρ̃_𝒮= 1/2+r/2σ^kl_𝒮(r̂·σ)(σ^kl_𝒮)^† =1/2 + r/2(ŝ·σ)(r̂·σ)(ŝ·σ)=1/2 + r/2(ŝ·r̂(ŝ·σ)+i((ŝ×r̂)·σ)(ŝ·σ))=1/2 + r/2(ŝ·r̂(ŝ·σ)-(ŝ×r̂)×ŝ·σ)=1/2 + r/2k̂·σ, where k̂=cos(ϕ)ŝ-sin(ϕ)n̂, ŝ·r̂=cos(ϕ), ŝ×r̂=sin(ϕ)n̂^' and n̂^'×ŝ=n̂. Note that n̂ is a unit vector and so is k̂. We can re-write Eq. (<ref>) as ρ^'_𝒮 =cos[2](θ_kl)(1/2 + r/2r̂·σ)+sin[2](θ_kl)(1/2 + r/2k̂·σ)-r/2sin(2θ_kl)sin(ϕ)⟨σ_ℰ_kl⟩n̂^'·σ = 1/2(1+ r⃗^⃗'⃗·σ), where r⃗^⃗'⃗=r(cos[2](θ_kl)r̂+sin[2](θ_kl)k̂-sin(2θ_kl)sin(ϕ)⟨σ_ℰ_kl⟩n̂^'). Since r̂=cos(ϕ)ŝ+sin(ϕ)n̂, thus ‖r⃗^⃗'⃗‖^2 = ‖r⃗‖^2(1-δsin[2](θ_kl)sin[2](ϕ))(ii) The operator Ω^' can be evaluated asΩ^' =_ℰ_kl(U_𝒮:ℰ_klΩ⊗ρ_ℰ_kl) =cos(θ_kl)Ω+ι⟨σ_ℰ_kl⟩sin(θ_kl)σ^kl_𝒮Ω.Let us now assume the following general form for the observable σ^kl_𝒮:σ^kl_𝒮=[ αβ e^ιγ; β e^-ιγ-α; ],where α,β and γ are real, and α^2+β^2=1. Thus,Ω^'=cos(θ_kl)[ c_11 c_12; c_21 c_22;]+ι⟨σ_ℰ_kl⟩sin(θ_kl)[ αβ e^ιγ; β e^-ιγ-α; ][ c_11 c_12; c_21 c_22 ]=[ cos(θ_kl)c_11+ι⟨σ_ℰ_kl⟩sin(θ_kl)(α c_11+β e^ιγc_21) cos(θ_kl)c_12+ι⟨σ_ℰ_kl⟩sin(θ_kl)(α c_12+β e^ιγc_22); cos(θ_kl)c_21+ι⟨σ_ℰ_kl⟩sin(θ_kl)(β e^-ιγ c_11-α c_21) cos(θ_kl)c_22+ι⟨σ_ℰ_kl⟩sin(θ_kl)(β e^-ιγ c_12-α c_22) ]≡[ c_11^' c_12^'; c_21^' c_22^';].Furthermore, we have‖ c^'_11‖^2 =(cos[2](θ_kl)+α^2⟨σ_ℰ_kl⟩^2sin[2](θ_kl))‖ c_11‖^2-β⟨σ_ℰ_kl⟩sin(2θ_kl)e^ιγc_21c_11^∗+2αβ⟨σ_ℰ_kl⟩^2sin[2](θ_kl)e^ιγc_21c_11^∗ + β^2⟨σ_ℰ_kl⟩^2sin[2](θ_kl)‖ c_21‖^2,‖ c^'_12‖^2 =(cos[2](θ_kl)+α^2⟨σ_ℰ_kl⟩^2sin[2](θ_kl))‖ c_12‖^2-β⟨σ_ℰ_kl⟩sin(2θ_kl)e^ιγc_22c_12^∗+2αβ⟨σ_ℰ_kl⟩^2sin[2](θ_kl)e^ιγc_22c_12^∗ + β^2⟨σ_ℰ_kl⟩^2sin[2](θ_kl)‖ c_22‖^2,‖ c^'_21‖^2 =(cos[2](θ_kl)+α^2⟨σ_ℰ_kl⟩^2sin[2](θ_kl))‖ c_21‖^2-β⟨σ_ℰ_kl⟩sin(2θ_kl)e^-ιγc_11c_21^∗-2αβ⟨σ_ℰ_kl⟩^2sin[2](θ_kl)e^-ιγc_11c_21^∗ + β^2⟨σ_ℰ_kl⟩^2sin[2](θ_kl)‖ c_11‖^2,‖ c^'_22‖^2 =(cos[2](θ_kl)+α^2⟨σ_ℰ_kl⟩^2sin[2](θ_kl))‖ c_22‖^2-β⟨σ_ℰ_kl⟩sin(2θ_kl)e^-ιγc_12c_22^∗-2αβ⟨σ_ℰ_kl⟩^2sin[2](θ_kl)e^-ιγc_12c_22^∗ + β^2⟨σ_ℰ_kl⟩^2sin[2](θ_kl)‖ c_12‖^2.Therefore,∑_i,j∈{1,2}‖ c^'_ij‖^2 = ∑_i,j∈{1,2}‖ c_ij‖^2(cos[2](θ_kl)+α^2⟨σ_ℰ_kl⟩^2sin[2](θ_kl)+β^2⟨σ_ℰ_kl⟩^2sin[2](θ_kl))= ∑_i,j∈{1,2}‖ c_ij‖^2(cos[2](θ_kl)+⟨σ_ℰ_kl⟩^2sin[2](θ_kl))= ∑_i,j∈{1,2}‖ c_ij‖^2(1-δsin[2](θ_kl)) Following inferences can be drawn from the above proposition: (i) For δsin[2](θ_kl)sin[2](ϕ)≠ 0, the Bloch vector of system's spin strictly decreases after the interaction with corresponding en-sub. 
The former requires simultaneous fulfillment of three conditions: (i) δ≠ 0, or equivalently ⟨σ_ℰ_kl⟩^2≠ 1,  the initial state of the interacting en-sub is not an eigen state of the coupling observable σ_ℰ_kl. (ii) The interaction between the system and the en-sub is non-zero  sin(θ_kl)≠ 0. (iii) The system observable σ^kl_𝒮 is not aligned with the system's Bloch vector r⃗ sin(ϕ)≠ 0. Given that these conditions are fulfilled, the state of system's spin gets more and more mixed as it interacts with the environment. Notably, for a sufficiently large portion ℰ̃_i of the macro-fraction ℰ_i, one can have ρ^'_𝒮=_ℰ̃_i(ρ^'_𝒮:ℰ̃_i)=1/2. Here we have borrowed notations from Eq. (<ref>). (ii) For repeated interactions satisfying the condition δsin[2](θ_kl)≠ 0 ⟨σ_ℰ_kl⟩^2≠ 1 and sin(θ_kl)≠ 0, matrix Ω approaches to zero. Consequently, Ξ^'_𝒮=_ℰ̃_i(Ξ_𝒮:ℰ̃_i)=0 Here we have borrowed notations from Eq. (<ref>). After tracing out a significantly large portion ℰ̃ of ρ^'_𝒮:ℰ, as specified in Eq. (<ref>), the state of the system and the remaining environment is given by, ϱ_𝒮:ℰ^'= ∑_i=1^d‖α_i‖^2|x⃗_i⟩⟨_|𝒮⊗1_𝒮/2⊗ρ_ℰ_i∖ℰ̃_i^mac⊗_j≠ iρ_ℰ_j^mac.Here, ρ_ℰ^mac_i∖ℰ̃_̃ĩ is the composite spin state of the en-subs in macro-fraction ℰ_i^mac which are accessible. The proof trivially follows from the above remark. Suppose the spin of the system is initially in maximally mixed stateρ_𝒮=1_𝒮/2. Letρ^(k)_𝒮:ℰ^mac_idenote the state of system plusi-th macro-environment after interaction withken-subs. It is straightforward to derive thatρ^(1)_𝒮:ℰ^mac_i =(U_𝒮:ℰ_i1)ρ_𝒮:ℰ^mac_i(U_𝒮:ℰ_i1)^†=(1/2⊗ρ^'_ℰ_i1+Ω_1)_𝒮:ℰ_i1⊗_j≠ 1ρ_ℰ_ijwhereρ^'_ℰ_i1 =(cos^2(θ_i1)ρ_ℰ_i1+sin^2(θ_i1)σ_ℰ_i1ρ_ℰ_i1σ_ℰ_i1)≡(cos^2(θ_i1)ρ_ℰ_i1+sin^2(θ_i1)ρ̃_ℰ_i1), Ω_1 =ιsin(2θ_i1)σ^i1_𝒮/2⊗σ_ℰ_i1ρ_ℰ_i1-ρ_ℰ_i1σ_ℰ_i1/2.Note thatρ^'_ℰ_i1=1and_𝒮((Ω_1)_𝒮:ℰ_i1)≡0. Similarly,ρ^(2)_𝒮:ℰ^mac_i =(U_𝒮:ℰ_i2)ρ^(1)_𝒮:ℰ^mac_i(U_𝒮:ℰ_i2)^†=(1/2⊗ρ^'_ℰ_i1⊗ρ^'_ℰ_i2+Ω_2)_𝒮:ℰ_i1ℰ_i2⊗_j≠ 1,2ρ_ℰ_ijwhereρ^'_ℰ_i2 =(cos^2(θ_i2)ρ_ℰ_i2+sin^2(θ_i2)σ_ℰ_i2ρ_ℰ_i2σ_ℰ_i2)≡(cos^2(θ_i2)ρ_ℰ_i2+sin^2(θ_i2)ρ̃_ℰ_i2), and(Ω_2)_𝒮:ℰ_i1ℰ_i2is a traceless operator. More specifically, we have_𝒮((Ω_2)_𝒮:ℰ_i1ℰ_i2)≡ 0.Supposem^'_iis the number of accessible en-subs in the macro-fractionℰ_i^mac. It is straightforward thatρ^'_𝒮:ℰ^mac_i∖ℰ̃_i≈1/21_𝒮⊗_j:ℰ_ij∈ℰ^mac_i∖ℰ̃_iρ^'_ℰ_ij+Ω_𝒮:ℰ^mac_i∖ℰ̃_i   _𝒮(Ω_𝒮:ℰ^mac_i∖ℰ̃_i)≡ 0.andρ^'_ℰ_ij =cos^2(θ_ij)ρ_ℰ_ij+sin^2(θ_ij)ρ̃_ℰ_ij,whereρ̃_ℰ_ij=σ_ℰ_ijρ_ℰ_ijσ_ℰ_ij.After tracing out system's spin, we obtainρ^'_ℰ^mac_i∖ℰ̃_i=⊗_j:ℰ_ij∈ℰ^mac_i∖ℰ̃_iρ^'_ℰ_ij.Therefore, using Eqs. (<ref>) and (<ref>), the post-interactionstate of the system's spatial degree of freedom and the spins of the accessible environment (the spin of the system is traced out) is given by:ϱ^'_𝒮:ℰ_∖ℰ̃≈∑_i=1^d‖α_i‖^2|x⃗_i⟩⟨_|𝒮⊗ρ^'_ℰ^mac_i∖ℰ̃_i⊗_j≠ iρ_ℰ^mac_j∖ℰ̃_j. § APPENDIX–D: FORMATION OF THE SPECTRAL BROADCAST STRUCTURERemember that the spatial degrees of freedom of en-subs and spin of the system are traced out. Additionally, a fraction of the environmentℰ̃which is inaccessible by the observers is also traced out. As we will see, Eq. (<ref>) is a spectrum broadcast structure where the information about the system's position is redundantly imprinted on multiple fragments of environment-spins. Since we have discarded the spatial degree of freedom of all en-subs, our environmentℰconsists of only en-sub spins hereafter. Let us now divideℰinto fragmentsℱ_1,ℱ_2,⋯,ℱ_nin such a way thatℱ_kfor allk∈{1,2,⋯,n}has randomly chosen en-subs from all macro-fractions{ℰ^mac_j}_j∈{1,2,⋯,d}. 
This can be achieved by applying a random permutation to all en-subs in Eq. (<ref>) and then dividing them into n fragments of equal size. Let us now denote the post-interaction state of the environment corresponding to the system's position x⃗_i by χ_i=ρ^'_ℰ^mac_i∖ℰ̃_i⊗_j≠ iρ_ℰ^mac_j∖ℰ̃_j. With re-indexing, the state χ_i can be rewritten as: χ_i=ρ_ℰ_11⊗ρ_ℰ_12⊗⋯⊗(ρ^'_ℰ_i1⊗ρ^'_ℰ_i2⊗ρ^'_ℰ_i3⋯⊗ρ^'_ℰ_im^'_i)_ρ^'_ℰ^mac_i⊗ρ_ℰ_(i+1)1⊗ρ_ℰ_(i+1)2⊗⋯, where ρ_ℰ_kl and ρ^'_ℰ_kl are the initial and post-interaction states of the kl-th en-sub, respectively. Remember that the states {ρ^'_ℰ_kl} are given by Eq. (<ref>). After a random shuffling (permutation) and re-indexing of the en-subs, we obtain χ_i= (ρ_ℰ_1⊗ρ^'_ℰ_2⊗ρ_ℰ_3⋯)_ℱ_1⊗(ρ^'_ℰ_r_1+1⊗ρ^'_ℰ_r_1+2⊗ρ_ℰ_r_1+3⋯)_ℱ_2⊗(ρ^'_ℰ_r_1+r_2+1⊗ρ_ℰ_r_1+r_2+2⊗ρ^'_ℰ_r_1+r_2+3⋯)_ℱ_3⊗⋯⊗(ρ^'_ℰ_∑_k^n-1 r_k+1⊗ρ_ℰ_∑_k^n-1 r_k+2⊗ρ_ℰ_∑_k^n-1 r_k+3⋯)_ℱ_n ≡ ξ^ℱ_1_i⊗ξ^ℱ_2_i⊗ξ^ℱ_3_i⊗⋯⊗ξ^ℱ_n_i, where ξ^ℱ_k_i is the state of the k-th fragment corresponding to the system's position x⃗_i and r_k denotes the number of en-subs in fragment ℱ_k. Here, {ρ_ℰ_k} and {ρ^'_ℰ_l} are randomly sampled from ⊗_j≠ iρ_ℰ^mac_j∖ℰ̃_j and ρ^'_ℰ^mac_i∖ℰ̃_i, respectively. Note that ξ^ℱ_k_i contains multiple perturbed (post-interaction) and unperturbed (initial) en-subs in a product state. Eq. (<ref>) can now be re-expressed as: ϱ^'_𝒮:ℰ=∑_i=1^d‖α_i‖^2|x⃗_i⟩⟨x⃗_i|_𝒮⊗ξ^ℱ_1_i⊗ξ^ℱ_2_i⊗ξ^ℱ_3_i⊗⋯⊗ξ^ℱ_n_i. Let us now prove that the states ξ^ℱ_k_i are perfectly distinguishable, i.e. ξ^ℱ_k_i ξ^ℱ_k_j=0 ∀ i≠ j, k=1,2,3,⋯,n, or equivalently, that the fidelity of the states ξ^ℱ_k_i and ξ^ℱ_k_j for i≠ j is zero: F(ξ^ℱ_k_i,ξ^ℱ_k_j)=0. The fidelity of two density matrices ρ and σ is defined as F(ρ,σ)=Tr√(ρ^1/2σρ^1/2). Further, the fidelity is multiplicative under tensor products, F(ρ_1⊗ρ_2,σ_1⊗σ_2)=F(ρ_1,σ_1)F(ρ_2,σ_2). Since the fidelity of identical states is one, F(ρ,ρ)=1, we obtain F(ξ^ℱ_k_i,ξ^ℱ_k_i^')= ∏_j∈{i,i^'}, j^':ℰ_jj^'∈ℱ_k F(ρ_ℰ_jj^', ρ^'_ℰ_jj^'). The fidelity corresponding to unperturbed en-subs within the fragment ℱ_k is one. However, for perturbed en-subs the fidelity is strictly less than one, given that sin(θ_jj^')≠0. Therefore, in the asymptotic case where the size of the environment is infinitely large, we have F(ξ^ℱ_k_i,ξ^ℱ_k_i^')≈0 ∀ i≠ i^', k=1,2,3,⋯,n. This proves our main claim.
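As an aside, the exponential suppression of the fidelity product derived above can be illustrated with a short numerical sketch (not part of the original derivation; the coupling angle, the randomly sampled en-sub states, and the modelling of each perturbation as a simple single-qubit rotation are illustrative assumptions). Each perturbed en-sub contributes a factor F(ρ_ℰ, ρ'_ℰ)<1, so the fidelity between fragment states associated with distinct positions decays with the number of perturbed en-subs contained in the fragment:

import numpy as np

def fidelity(rho, sigma):
    # Uhlmann fidelity F(rho, sigma) = Tr sqrt( sqrt(rho) sigma sqrt(rho) )
    vals, vecs = np.linalg.eigh(rho)
    sqrt_rho = vecs @ np.diag(np.sqrt(np.clip(vals, 0, None))) @ vecs.conj().T
    w = np.linalg.eigvalsh(sqrt_rho @ sigma @ sqrt_rho)
    return float(np.sum(np.sqrt(np.clip(w, 0, None))))

def random_en_sub_state(rng):
    # generic (slightly mixed) qubit state standing in for an initial en-sub spin
    psi = rng.normal(size=2) + 1j * rng.normal(size=2)
    psi /= np.linalg.norm(psi)
    return 0.9 * np.outer(psi, psi.conj()) + 0.1 * np.eye(2) / 2

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
theta = 0.3                                                    # illustrative coupling angle
U = np.cos(theta) * np.eye(2) + 1j * np.sin(theta) * sigma_x   # exp(i theta sigma_x)

rng = np.random.default_rng(0)
product = 1.0
for n_perturbed in range(1, 21):
    rho = random_en_sub_state(rng)
    rho_prime = U @ rho @ U.conj().T          # post-interaction en-sub spin state
    product *= fidelity(rho, rho_prime)       # multiplicativity under tensor products
    print(n_perturbed, product)

The printed product falls off roughly geometrically, so fragment states labelled by different positions x⃗_i become (near) perfectly distinguishable already for modest fragment sizes, which is the content of the spectrum-broadcast-structure condition above.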
http://arxiv.org/abs/2310.18133v1
{ "authors": [ "Rajendra Singh Bhati", "Arvind" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20231027132416", "title": "Quantum operations restricted by no faster-than-light communication principle and generic emergence of objectivity in position basis" }
[email protected] Dipartimento di Fisica G. Occhialini, Università degli Studi di Milano Bicocca, Piazza della Scienza 3, I-20126 Milano, Italy
[email protected] CEICO, Institute of Physics of the Czech Academy of Sciences, Na Slovance 2, 182 21 Praha 8, Czechia

An interesting mechanism for the formation of hairy black holes occurs when a vector field, non-minimally coupled to a source term, grows from a perturbation of the vacuum black hole, aka vectorization. Its study has, however, been lacking, in part due to the constant threat of ghost instabilities that have plagued vector fields. In this work, we show evidence that, in a generic family of extended-vector-tensor theories where the vector field is non-minimally coupled to the model's invariant (source term), a spherically symmetric, vectorized black hole always suffers from ghost instabilities. These ultimately render the process of vectorization astrophysically unviable.

The spooky ghost of vectorization
Alexandre M. Pombo
January 14, 2024
=================================

§ INTRODUCTION

Never in the more than 100 years of general relativity (GR) has there been a better time to study compact objects. The gravitational wave emission from the binary coalescence of black holes (BHs) detected by the LIGO-VIRGO collaboration (e.g. <cit.>) and the direct imaging by the Event Horizon Telescope <cit.> have led to one of the most significant advances in BH history, allowing the study of gravity in its strong-field regime, where deviations from GR may arise.

One of the simplest and most attractive alternatives to GR, which has been extensively studied both at the astrophysical (e.g. <cit.>) and cosmological (e.g. <cit.>) level, consists in the addition of a scalar field to GR. Of particular interest are theories where the scalar field non-minimally couples to an invariant of the theory. These are known as extended-scalar-tensor theories (eST) <cit.>, in which BH scalarization may arise (see <cit.> for a review). Scalarization occurs when perturbations of the vacuum solution push the BH to transfer part of its energy density to a surrounding scalar hair, giving rise to new BH solutions with significant deviations from the vacuum GR counterpart. Two exemplary models for the source term able to scalarize are the Gauss-Bonnet <cit.> and Maxwell <cit.> invariants. While the former is more astrophysically interesting <cit.>, the latter is relatively simple and easier to compute[In a dynamical astrophysical environment, the presence of plasmas around the BH leads to prompt discharge. Alternatively, the neutralization can occur through Hawking charge evaporation <cit.>.], serving as a toy model for a wide variety of coupling functions and dynamical studies. In both cases, perturbatively and entropically stable solutions can be formed dynamically by either a linear (aka spontaneous/normal scalarization <cit.>) or a non-linear <cit.> perturbation of the vacuum solution[Similar studies were performed for magnetized BHs <cit.>, spinning BHs <cit.> and spinning and charged BHs <cit.>.]. It then seems reasonable to further generalize scalarization to higher particle spins, with a vector field as the most natural first candidate. In fact, spontaneous vectorization <cit.> has already been studied for both an Einstein-Maxwell-vector (EMv) model <cit.> and a vector-Gauss-Bonnet (vGB) model <cit.>[While vectorized solutions exist in both models, only the EMv model has entropically preferable solutions when compared with vacuum GR BHs.].
In fact, spontaneous vectorization <cit.> has already been studied for both an Einstein-Maxwell-vector (EMv) model <cit.> and a vector-Gauss-Bonnet (vGB) model <cit.>[While vectorized solutions exist in both models, only the EMv has entropically preferable solutions when compared with vacuum GR BHs.].Recent works have, however, cast doubt on the viability of vector fields around astrophysical objects <cit.> due to the presence of ghost instabilities <cit.>. In fact, models with vector fields seem to be plagued with ghost instabilities (see e.g. <cit.> and references therein), with massive vector fields being especially sensitive. Self-interactions have also been shown to originate ghosts in otherwise seemingly ghost-free objects like Proca stars <cit.>. One major difference from scalar fields is in the number of additional degrees of freedom. Scalar fields contribute a single new degree of freedom irrespective of whether they are part of scalarization or not – provided that the scalar field equation is second order in derivatives. However, this is not the case for vectorization. In general, vector-tensor theories break the gauge freedom found in massless vector fields. As a result, the vector field of vectorization models carries three degrees of freedom instead of the two found in electromagnetism.This is not immediately a problem, since minimally coupled massive vectors (aka Proca), also break the gauge symmetry and still provide a well-behaved classical field theory. However, the extra degree of freedom appears to be problematic with vectorization. It seems then reasonable to ask if the same also occurs for vectorized BHs. In this work, we present analytical evidence that, in a generic family of extended-vector-tensor theories where the vector field is non-minimally coupled to the model's invariant (source term), a spherically symmetric, vectorized BH always suffers from ghost instabilities. For this, we follow the approach of <cit.>, which consists of re-writing the field equations in a wave-like fashion, and we study the behaviour of the effective metric arising from the computation. Throughout the paper, 4π G=1=4πϵ_0. The signature of the spacetime is (-,+,+,+). In this work, one is solely interested in spherical symmetry and the metric matter functions are only radially dependent. For notation simplicity, after being first introduced, the functions' radial dependence is omitted, e.g. X(r)≡ X, andX' ≡ dX/dr, and we consider the notation X̂≡ dX /d B^2 for the derivative with respect to the vector field.The paper is organized as follows. In Sec. <ref> we review the basic concept of ghost instabilities, while the framework for vectorization is presented in Sec. <ref>. We then show the occurrence of ghost instabilities in extended-vector-tensor (eVT) theory with a generic source term in Sec. <ref> and we derive our main conclusions in Sec. <ref>. Appendix <ref> and <ref> are devoted to the application of the previous result to the EMv and a vGB model, respectively.§ BASICS OF INSTABILITIES In this section, we briefly review the general aspects of ghosts and gradient instabilities in a simplified scenario. For this, let us follow <cit.> and consider the linearized scalar field equation in the 1+1 dimensional spacetime g^tt ∂ _t^2 δϕ +g^xx ∂_x ^2 δϕ = μ ^2 ϕ , where, for simplicity, assume a diagonal metric with constant components. 
The absence of instabilities requires g^tt < 0, g^xx > 0, μ^2 > 0. If the field is described by a plane wave mode δϕ(t,x) ∝ e^i(ω t-k x), the resulting dispersion relation is ω(k) = √((μ^2 + g^xx k^2)/(-g^tt)). Three types of instability can then arise. For μ^2 < 0, the mode behaves as a tachyon, and ω(k) becomes imaginary for sufficiently small |k|, leading to exponential growth. The fastest growing mode behaves as ∼ e^√(μ^2/g^tt) t, implying an upper limit on the growth rate. When g^tt > 0, a ghost instability sets in. The field again grows exponentially; however, this time the rate of growth diverges with increasing wave number, as ∼ e^|k|√(g^xx/g^tt) t. If g^xx < 0, the same asymptotic behaviour as for the ghost occurs, creating an instability known as a gradient instability. Observe that both ghost and gradient instabilities are qualitatively different from the bounded growth of the tachyonic instability.

§ VECTORIZATION

As already mentioned in the introduction (Sec. <ref>), while spontaneously vectorized solutions have been studied, a huge gap in the literature still exists when compared with the scalarization phenomenon. In particular, concerning the latter, three types of solutions have been observed to exist <cit.>: dilatonic, connected scalarization (aka linear or spontaneous scalarization) and disconnected scalarization (aka non-linear scalarization); while for the former only connected vectorized solutions have been studied[The dilatonic solution seems to be incompatible with vector fields due to the vector nature of the field; however, a term of the kind √(|B_μ B^μ|) could solve the problem. This is, however, not the topic of this work.].

In this work, we are interested in a class of eVT theories which can be generically described by the action 𝒮_vℐ=1/16π G∫ d^4 x √(-g)[ R-G^μν G_μν+f(B^2) ℐ], where G_μν=∇_μ B_ν -∇_ν B_μ is the field strength and f(B^2) is a coupling function that non-minimally couples the real vector field to the source term ℐ, which is an invariant of the theory (two examples of ℐ will be given in Sec. <ref>). Let us also define the derivative of the coupling function with respect to the vector field, f̂≡ d f(B^2)/dB^2, with B^2 = B_μ B^μ.

Variation of the action (<ref>) with respect to the vector and metric fields gives the corresponding field equations ∇_μ G^μα = -1/2 f̂ ℐ B^α, R_μν - 1/2 g_μν R = 2T_μν, with the stress-energy tensor T_μν = f(|B|^2)(F_μ^α F_να - 1/4 g_μν F^αβ F_αβ) + 1/2(G_μ^α G^*_να + G^*α_μ G_να - 1/2 g_μν G^αβ G^*_αβ) + 1/4 (df/d|B|^2) F^αβ F_αβ (B_μ B^*_ν + B^*_μ B_ν).

Observe that B_t(r) = 0 solves the field equations, and thus the vacuum BH is a solution. This requires that f̂(0) ≡ df(B^2)/dB^2|_B_t=0 = 0, which is easily implemented if one requires a Z_2-invariance: B_t → -B_t. The vectorized solutions are, however, in general not unique. To draw a parallel with the scalarized case, let us keep the same notation and consider vectorization as class II, which is further separated into two subclasses:

* Subclass IIA or linearly/normal vectorized type: In this subclass of v-ℐ models, the vectorized BHs bifurcate from the vacuum BHs and reduce to the latter for B_t = 0. This bifurcation, moreover, may be associated with a tachyonic instability against vector (linear) perturbations of the vacuum BH. Let us consider a small-B_t expansion of the coupling function, f(B^2) = f(0) + f̂(0) B^2 + ⋯.
The linearized Proca equation (<ref>) for small B_t reads: ∇ _μ G^μν=-f̂/2 ℐB^ν , with an effective mass μ_eff^2=-f̂(0)/2 ℐ . The instability arises if μ _eff^2<0. * Subclass IIB or non-linearly vectorized type:In this class of vector-ℐ models, the vectorized BHs do not bifurcate from the vacuum BH and do not reduce to the later for B_t = 0. This is the case if there is no tachyonic instability but there is a non-linear instability. A sufficient (but not necessary) condition is that f̂(0) =0 , A non-linear instability implies ℐ d^2 f(B^2)/d(B^2)^2≡ℐ f̂̂̂⩽ 0 , with the difference in the sign associated with the B^2 that comes from the first derivative (see <ref> and <ref> for an example). A mixed vectorization with both mechanisms: tachyonic and nonlinear vectorization is also possible [Please see <cit.> for an example of nonlinear and mixed scalarization in Gauss-Bonnet.]. § VECTOR-ℐ GHOST INSTABILITY To show the presence of the ghost instability, it is important to remember that, in spherical symmetry, ℐ≡ℐ(r) and that B^2 (r) is negative everywhere outside the event horizon[The vector field B_μ of the vectorized BH can be shown to only contain the time component in the static limit: B_μ≡ B_t dt. This means that, due to our metric signature and assuming B_t ⩾ 0 for all the spacetime, B^2=B_tg^ttB_t<0.]. Let us also introduce the scalar quantity z = -1/2f̂ ℐ≡ z(r,B^2), which is a function of both the field and the spacetime coordinates.In order to identify the condition on f(B^2) (or on z) for which ghost instabilities arise, one must write the Proca eq. (<ref>) as a wave equation for the vector field B^μ with an effective "mass" matrix ℳ_αβ. In this regard, one needs to expand the field equation in terms of the vector field B^μ. The resulting Proca equation (<ref>) is 0 =∇ ^μ G_μν -z B_ν=∇ ^μ( ∇_μ B_ν -∇ _ν B_μ) -zB_ν = ∇ ^μ∇_μ B_ν-∇ ^μ∇_ν B_μ - z B_ν . By using the definition of the Riemann and Ricci tensors R^d _ cabB^c = ∇_a ∇_b B^d - ∇ _b ∇_a B^d ,R_μν = R^c _ μ c ν , the second term in the last equality of (<ref>) can be replaced. This leads to 0= ∇ ^μ∇ _μ B_ν -∇ _ν∇ _μ B^μ -R_μν B^μ-z B_ ν . Where the first term is the wave operator acting on the vector field, while the second term ∇ _ν∇ _μ B^μ, needs to be rewritten to render the whole equation manifestly hyperbolic. For this, consider the modified Lorentz condition: ∇_μ∇ _ν G^μν = 0 = ∇ ^μ (z B_μ) , which comes from the antisymmetry of the G^μν tensor. This can be rewritten as ∇_μ( zB^μ)= 0 ⇒∇ _μ B^μ = -1/zB^μ∇ _μ z , which we further insert in (<ref>). Note that, first-order derivatives of ∇ _μ B^ν do not contribute to the dynamics when the vector field is expanded in B^μ = B_0 ^μ + ϵδ B^μ around a constant B_μ ^0, and hence, only the no-derivative and second-order derivative terms matter. Equation (<ref>) becomes 0 = ∇ ^μ∇ _μ B_ν +∇ _ν( 1/zB^μ∇_μ z) -R_μν B^μ -z B_ν . Expanding the derivatives in the second term, we get 0 = ∇ ^μ∇ _μ B_ν +∇ _ν B^μ∇ _μ z/z- B^μ∇ _ν z ∇ _μ z/z^2+B^μ∇ _ν∇ _μ z/z-R_μν B^μ -zB_ν . As shown in <cit.>, if the radial dependence of z is solely through B_μ (r), then (<ref>) can be rephrased as: ∇ ^μ∇ _μ B_ν +( ∇ _μln |z| ) ∇ _ν B^μ = ℳ_μν B^μ , with the effective mass matrix ℳ: ℳ_μν = -∇ _μ∇ _νln |z| + R_μν+z g_μν . However, recall that in the v-ℐ model the situation is more complicated since z depends both implicitly – from B_t(r) – and explicitly – from ℐ – on r. Due to the shape of z, we can divide it into z(B^2 , r)≡ z_B (B^2) · z_r (r). 
With this ansatz, the covariant derivative of z is given by∇ _μ z = z_r ∇ _μz _B+z_B∇ _μ z_r= 2z_r ẑ_BB^ν∇ _μ B_ν+z_Bz_r ' δ _r ^μ , where d X(B^2)/dB_μ =2X̂ B^ν. The second-order derivative is ∇ _ν∇ _μ z = 2z_r ẑ_BB^α∇_ν∇ _μ B_α + z_r ”z_B δ _μ^r δ_ν ^r+𝒳_0 , with all the first-order contributions included into the 𝒳_0 term. Observe that the only term containing second-order derivatives of the coupling function f (i.e. ẑ̂_B) is included in the 𝒳_0. The other term we need to compute is the product of two derivatives of z. This is given by∇_ν z ∇_μ z =(z_Bz_r' δ_ν^r+2z_r ẑ_BB^ρ∇_ν B_ρ)(z_Bz_r' δ_μ^r+2 z_r ẑ_BB^α∇_μ B_α) = z_B^2(z_r')^2 δ_μ^r δ_ν^r+ 𝒳_1 , where again, 𝒳_1 contains all the terms with first-order derivatives. Introducing the results of (<ref>) and (<ref>) into (<ref>), and keeping only the second-order derivative and no-derivative terms, we obtain: 0=∇ ^μ∇ _μ B_ν +[( z_Bz_r”/z-z_B^2(z_r')^2/z^2)δ_μ^r δ_ν^r-R_μν-zg_μν] B^μ+2 z_r ẑ_B/zB^α∇_ν∇_μ B_α B^μ . The above equation can be rewritten in a more handful form by reorganizing the various terms as∇ ^μ∇ _μ B_ν +2 ẑ_B/z_BB^α∇_ν∇_μ B_α B^μ= B^μ[( (z_r')^2/z_r^2-z_r”/z_r)δ_μ^r δ_ν^r+R_μν + zg_μν]g^μα∇ _μ∇ _α B_ν + 2B^μ B^α(ẑ_B/z_B) ∇ _μ∇ _α B_ν+ 2B^μ B^α(ẑ_B/z_B) ∇ _μ G_να= B^μ[( (z_r')^2/z_r^2-z_r”/z_r)δ_μ^r δ_ν^r+R_μν + zg_μν] , where we have used the definition of the field strength tensor in the second term of the lhs. This can be further rewritten as a no-derivative term by means of the Proca equation (<ref>). The resulting equation is z_r[z_Bg^μα+2 ẑ_B B^μ B^α]∇ _μ∇ _α B_ν = B^μℳ_νμ , where ℳ_αβ = zg_αβ(z-ẑ_Bz_r/2 B^2 ) +z[R_αβ+((z_r')^2/z_r^2-z_r”/z_r)δ_α^rδ_β^r] , g̃_μν =z_r[z_Bg_μα+2 ẑ_B B_μ B_α] . The ghost instability appears if the effective metric satisfies g̃ ^tt >0. The condition for the existence of the ghost instability can be obtained by contracting the g̃_μν metric with the time-like normal vector n^μ. Decomposing the vector field into the scalar potential ψ and a purely spatial vector X^μ as B^μ = X^μ + n^μψ results ing̃_nn = g̃_μνn^μ n^ ν = z_r[2 ψ ^2ẑ_B -z_B] . Imposing the ghost condition g̃_nn⩾ 0 z_r[2 ψ ^2 ẑ_B - z_B] ⩾ 0 Observe that, we can recover the results obtained in <cit.> with z_B =V(r) and z_r ≡ -ℐ = 1, with V(r) the self-interacting potential of the vector field. Note that the ghost instability condition (<ref>) can be re-expressed in terms of the z function as 2ψ^2 ẑ-z⩾ 0 , which can be further expressed in terms of the coupling function and source term ℐ as -ψ ^2ℐf̂̂̂+ℐ/2f̂⩾ 0 . As shown in Sec. <ref>, a tachyonic instability arises when ℐ/2f̂ >0, which means that, in the absence of higher order terms, a tachyonic instability of a v-ℐ model is always followed by a ghost instability.On the other hand, in the absence of a tachyonic instability, f̂ =0, non-linear instabilities occur when ℐ/2f̂̂̂< 0, making them also prone to ghost instabilities. It seems then that, no vectorization is able to endow vectorized solutions that are free of ghosts in a v-ℐ model. We show two exemplary cases of models that can generate vectorized solutions and for which ghost instabilities seem to exist in the appendix (EMv <ref> and GBv <ref>).§ CONCLUSION In this work, we have provided evidence that spherically symmetric BHs with vector hair coming from a vectorization process in an eVT theory are always prone to ghost instabilities independently of the functional form of the coupling function between the real vector field and the theory's invariant. 
We performed analytical calculations suggesting that vectorized BH solutions of extended-vector-tensor theories with a non-minimal coupling between the field and a model's invariant ℐ are always prone to ghost instabilities. The computation is based on the approach presented in previous studies (e.g. <cit.>), where the ghost is identified by looking at the effective metric which arises when re-writing the Proca equation in a wave-like form. The method does not require any assumption on the specific value of the vector field, which further indicates that the ghost could appear for all possible vectorized configurations. Observe that, in this work, we have only dealt with ghost instabilities associated with the time-time component of the effective metric; one could then assume that a change in the metric signature and/or vector field ansatz could avoid such instabilities. However, the same procedure can also be performed for the spatial components, for which one expects a gradient instability to emerge, leaving the model again unstable. This appears to indicate a physical origin of the instabilities.

In addition, our analysis considered only spherically symmetric solutions. With the addition of the dimensionless spin J, the process of detecting a ghost instability seems to be simpler due to the change of sign of z for J>0.5 in some regions of the spacetime. For all other spins, a result similar to the one presented here is expected. One may argue that, just like for a tachyonic instability, non-linearities of the model may be able to tame the exponential growth of the vector hair and end up with a dynamically viable solution. Nevertheless, while a tachyonic instability has an upper bound on its growth, a ghost/gradient instability does not, making it harder to quench. In order to provide a definitive statement about the overall stability and viability of the solutions, a full numerical time-evolution study should be performed. This is, however, beyond the scope of this paper and is left for future work.

Finally, we would like to comment on the possible generalization of the current results. The vectorization mechanism can be seen as a special case of a wider class of phenomena called tensorization <cit.>. However, all such theories seem to be plagued with ghost instabilities, and hence one could assume that the current result may be extended to general tensor fields. It is worth pointing out that models with scalar fields, due to the absence of additional degrees of freedom, are less prone to instabilities and are perhaps the most relevant from the astrophysical point of view.

§ ACKNOWLEDGMENTS

We would like to thank Daniela Doneva, Nuno M. Santos and João M. S. Oliveira for valuable discussions and comments. A. M. Pombo is supported by the Czech Grant Agency (GAČR) under grant number 21-16583M.

§ EINSTEIN-MAXWELL-VECTOR

Let us now apply the main result of the paper to two eVT models which are known to generate spontaneously vectorized solutions: EMv and vGB (Appendix <ref>). Consider first the Einstein-Maxwell-vector case, where the source term is a "matter" source: ℐ≡ F_μν F^μν, with A_μ the 4-vector potential and F_μν = ∂_μ A_ν - ∂_ν A_μ the Maxwell tensor. The resulting z components are z_B = df(B^2)/dB^2 and z_r = -ℐ/2 = -F_μν F^μν/2 = Q^2/2r^4. The onset of instability occurs when a vector field perturbs a vacuum Reissner-Nordström BH.
The metric line element is ds^2 = -(1-r_H/r)dt^2+dr^2/(1-r_H/r)+r^2(dθ ^2+sin ^2 dφ ^2) , with r⩾ r_H= M^2+√(M^2-Q^2) the horizon radious of the BH and Q the electric charge. Observe that z_r⩾ 0 for any r⩾ r_H, however, the sign of z_B will depend on its functional form. In the literature, an exponential coupling was considered <cit.>. Let us consider the simplest, but generic, polynomial case (all the other functions reduce to the polynomial form for small vector field values) f=1+ 𝒞_k B^2k , with k=1,2,... an integer. For the lowest order(s) (k=1,2), a tachyonic instability is settled when 𝒞_1 < 0, while a non-linear instability is settled when 𝒞_2> 0 – the difference in sign comes from the negative sign associated with B^2. The statement on the sign of the 𝒞_k coefficient can be extended to higher powers, such that an instability leading to a growth of the vector field occurs whenever 𝒞_k <0(>0) if k is odd (even).The ghost instability appears whenz_r[ 2 ψ ^2 ẑ_B-z_B] ⩾ 0 , Q^2/2 r^4[2 ψ ^2 𝒞_kk(k-1) B^2(k-2) -𝒞_kkB^2(k-1)] ⩾ 0 , 𝒞_kk B^2(k-2)[2 ψ ^2 (k-1) - B^2] ⩾ 0 . Since B^2 is always negative, the above equation can be rewritten as𝒞_kk B^2(k-2)[2 ψ ^2 (k-1) + |B^2| ] ⩾ 0 , 𝒞_kk B^2(k-2) ⩾ 0 , where the second inequality comes from the fact that the terms inside square brackets are always positive for non-trivial solutions. When k is even, B^2(k-2) > 0,and a ghost instability is settled for 𝒞_k ⩾ 0. On the other hand, when k is odd B^2(k-2) < 0 and 𝒞_k ⩽ 0. As a consequence, there is no vectorized solution to EMv black holes free of ghosts.A set of exemplary solutions of non-linear vectorized BHs in EMv models have been computed. It was observed that solutions do exist and are entropically preferable when compared with vacuum solutions, however, the study and analysis of such solutions is not the point of the current work. We leave such an exercise for a future paper.§ EINSTEIN-GAUSS-BONNET-VECTOR In the vector-Gauss-Bonnet model case, the source term is a geometric source: ℐ=R_GB ^2 ≡ R^2 -4R_μνR^μν+R_μνρδR^μνρδ the Gauss-Bonnet scalar. ℐ≡𝒢, z_B =df(B^2)/dB^2 , and z_r= -𝒢/2=-24 M^2/r^6 , where we have assumed a Schwarzschild background with r⩾ r_H= 2M the horizon radius of the BH in the line element (<ref>). Observe that z_r⩽ 0 for any r⩾ r_H, however, the sign of z_B will depend on its functional form. In the literature, three terms have been considered <cit.>. Let us use the same polynomial expansion as before (the constant term is absent in agreement with the GBv theory)f=𝒞_k B^2k , The condition for the coefficients𝒞_k is now reversed, i.e. an instability leading to a growth of the vector field occurs whenever 𝒞_k >0(<0) if k is odd (even). This is due to the opposite sign of the z_r, which is reflected in the behaviour of the field through the Proca equation (<ref>).The condition for ghost instabilities becomes -24 M^2/r^6[2 ψ ^2 𝒞_kk(k-1) B^2(k-2) -𝒞_kkB^2(k-1)] ⩾ 0 , [2 ψ ^2 𝒞_kk(k-1) B^2(k-2) -𝒞_kkB^2(k-1)] ⩽ 0 .Applying the same reasoning as before, 𝒞_kk B^2(k-2)[2 ψ ^2 (k-1) + |B^2| ] ⩽ 0 , 𝒞_kk B^2(k-2) ⩽ 0 , So, a ghost instability is settled for 𝒞_k ⩾ 0(⩽ 0) for k odd (even). Thus, also in this case, all the fully-vectorized solutions of a Schwarzschild BH with a Gauss-Bonnet invariant are affected by ghost instabilities. ieeetr
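A quick way to sanity-check the sign bookkeeping of the two appendices is to evaluate g̃_nn = z_r[2ψ²ẑ_B − z_B] numerically for the polynomial couplings, with the coupling sign fixed to the value that triggers vectorization in each case. The sketch below is ours, not the authors' code; the background numbers (B² < 0, representative positive/negative z_r) are illustrative assumptions rather than computed solutions.

```python
import sympy as sp

B2, psi, Ck = sp.symbols('B2 psi C_k', real=True)

def g_nn(f, z_r):
    zB = sp.diff(f, B2)            # z_B    = df/dB^2
    zhat = sp.diff(zB, B2)         # zhat_B = d^2 f/(dB^2)^2
    return z_r * (2 * psi**2 * zhat - zB)

background = {B2: -0.3, psi: 0.5}  # timelike ansatz => B^2 < 0 (illustrative values)

for k in (1, 2, 3):
    # EMv: f = 1 + C_k (B^2)^k, z_r = Q^2/(2 r^4) > 0; vectorization needs C_k < 0 (odd k), > 0 (even k)
    emv = g_nn(1 + Ck * B2**k, 0.5).subs({Ck: -1.0 if k % 2 else 1.0, **background})
    # GBv: f = C_k (B^2)^k, z_r = -G/2 < 0; vectorization needs C_k > 0 (odd k), < 0 (even k)
    gbv = g_nn(Ck * B2**k, -0.5).subs({Ck: 1.0 if k % 2 else -1.0, **background})
    print(f"k={k}:  EMv g_nn = {float(emv):+.3f},  GBv g_nn = {float(gbv):+.3f}   (>= 0 => ghost)")
```

For k = 1, 2, 3 every printed value is positive, in line with the statement that vectorized EMv and GBv solutions cannot avoid the ghost.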
http://arxiv.org/abs/2310.18399v1
{ "authors": [ "Lorenzo Pizzuti", "Alexandre M. Pombo" ], "categories": [ "gr-qc", "hep-th", "math-ph", "math.MP" ], "primary_category": "gr-qc", "published": "20231027180004", "title": "The spooky ghost of vectorization" }
An Energy-Efficient Near-Data Processing Accelerator for DNNs that Optimizes Data Accesses Bahareh Khabbazan, Marc Riera, Antonio Gonzálezdept. of Computer ArchitectureUniversitat Politècnica de Catalunya (UPC)Barcelona, Spain{bahareh.khabbazan, marc.riera.villanueva, antonio.gonzalez}@upc.eduJanuary 14, 2024 ==========================================================================================================================================================================================================================The constant growth of DNNs makes them challenging to implement and run efficiently on traditional compute-centric architectures. Some accelerators have attempted to add more compute units and on-chip buffers to solve the memory wall problem without much success, and sometimes even worsening the issue since more compute units also require higher memory bandwidth. Prior works have proposed the design of memory-centric architectures based on the Near-Data Processing (NDP) paradigm. NDP seeks to break the memory wall by moving the computations closer to the memory hierarchy, reducing the data movements and their cost as much as possible. The 3D-stacked memory is especially appealing for DNN accelerators due to its high-density/low-energy storage and near-memory computation capabilities to perform the DNN operations massively in parallel. However, memory accesses remain as the main bottleneck for running modern DNNs efficiently.To improve the efficiency of DNN inference we present QeiHaN, a hardware accelerator that implements a 3D-stacked memory-centric weight storage scheme to take advantage of a logarithmic quantization of activations. In particular, since activations of FC and CONV layers of modern DNNs are commonly represented as powers of two with negative exponents, QeiHaN performs an implicit in-memory bit-shifting of the DNN weights to reduce memory activity. Only the meaningful bits of the weights required for the bit-shift operation are accessed. Overall, QeiHaN reduces memory accesses by 25% compared to a standard memory organization. We evaluate QeiHaN on a popular set of DNNs. On average, QeiHaN provides 4.3x speedup and 3.5x energy savings over a Neurocube-like accelerator. DNN, NDP, Accelerators, 3D-Stacked Memory, Quantization, Exponential, Transformer§ INTRODUCTIONDeep Neural Networks (DNNs) represent the state-of-the-art solution to a broad range of machine learning applications such as natural language processing (NLP) and image classification. Modern DNNs can outperform human-level accuracy in many of these applications at the expense of high computational cost, memory requirements, and energy consumption. Complex DNN models are composed of hundreds of layers of artificial neurons with billions of model parameters and operations. The constant growth of DNNs makes them challenging to implement and run efficiently, even in the most recent accelerators <cit.> based on traditional computing architectures due to the memory wall problem. On the other hand, some recent research has focused on a new paradigm named Near-Data Processing (NDP) <cit.>, which seeks to break the memory wall by moving the computations closer to the memory hierarchy.Conventional DNN accelerators tend to dedicate a significant part of their area to the processing elements (PEs) that are responsible to speed-up the frequent dot-product operations of DNN layers. 
Due to the intrinsic parallel nature of DNN computations, many previous works aimed to exploit data and thread-level parallelism by having large PE arrays, what further stresses the memory bandwidth demands, which may become a main bottleneck that is unable to provide enough data to all PEs. The memory wall remains a major problem despite several attempts to tackle it by minimizing the off-chip memory accesses, maximizing the on-chip memory reuse factor <cit.>, and increasing the on-chip buffer sizes in each PE. As a result, DNNs are heavily constrained by compute-centric architectures due to the high memory storage and memory bandwidth demands.In addition, data movements normally represent the major cause of energy consumption. For instance, a recent study on Google workloads <cit.> shows that the data movements between memory and compute units contribute to 62.7% of the total energy consumption. Consequently, the energy cost of the data transfers is orders of magnitude higher than that of the computations. Another observation in this study indicates that most of the data movements in consumer workloads are generated by simple functions and primitives that can be implemented in hardware at low cost. These observations, together with the effects of the memory wall and the dramatic increase in the size of DNNs, motivate the transition from conventional compute-centric to data-centric architectures for data-intensive applications.Over the last few years, researchers have been exploring novel memory-centric architectures based on the so-called Near-Data Processing (NDP) paradigm to accelerate neural network algorithms by moving most of the computations ”in/near-memory” and, hence, reducing the data movements and their cost as much as possible. NDP has gained a lot of attention with the introduction of the 3D stack memory technology, which allows the integration of logic and memory in the same chip by stacking multiple dies vertically, providing high-speed connections between a high-density memory and a logic die. Micron’s Hybrid Memory Cube (HMC) <cit.>, High Bandwidth Memory (HBM) <cit.> from AMD/Hynix, and Samsung’s Wide I/O <cit.> are popular examples implementing this trending technology.NDP architectures based on 3D-stacked memory attack the memory wall by increasing storage capacity, memory bandwidth, and reducing power consumption <cit.>. Compared with the conventional 2D DRAM, 3D memory provides an order of magnitude higher bandwidth (160 to 250 GBps) with up to 5x better energy efficiency and, hence, 3D memory is an excellent option for meeting the high throughput, low energy requirements of scalable DNN accelerators <cit.>. All 3D-stacked memory system implementations provide highly parallel access to memory which is well suited to the highly parallel architecture of the DNN accelerators <cit.>.Neurocube <cit.> and TETRIS <cit.> are popular NDP 3D-stacked memory architectures that offer promising performance and energy consumption for accelerating DNNs. However, there is still large room for improvement, since these architectures and memory technology present multiple challenges to extend their adoption. First, architectures based on 3D-memory require to rethink of the design of on-chip buffers in the logic die as well as the location where the computations are executed. For example, performing simple operations on the DRAM dies can drastically reduce the amount of memory movements and the need for big on-chip buffers. 
Second, new approaches for dataflow scheduling and partitioning of the DNN computations are also required to reduce the memory pressure. Thus, changing the memory organization and data placement can fully exploit the features of 3D-stacked architectures. In addition, the area of the logic die is constrained by the package, and there are tight thermal constraints that limit the power dissipation of the system. Consequently, it is critical to propose solutions that improve in these aspects.In this paper, we show how to efficiently exploit a logarithmic base-2 quantization (LOG2) of activations on FC and CONV layers of typical DNN models. First, we perform an analysis of the exponents obtained after the LOG2 quantization, and observed that a huge percentage of the activations are represented with negative exponents, that is, their original value is in the range of [-1, 1]. LOG2 quantization has been proposed in previous works to reduce the numerical precision of either activations/weights and exploited to substitute multiplications by a bit-shifting of the other operand. Based on these observations we propose an implicit in-memory bit-shifting of the DNN weights to reduce the memory movements. Weights are uniformly quantized and stored at the bit-level granularity into different memory regions, that is, each bit of a set of weights is stored in a different memory bank to exploit the inherent parallelism of 3D-stacked architectures. Next, we propose a mechanism to avoid accessing the bits of the weights that are not useful due to the right bit-shifting of the negative exponents of the logarithmically quantized activations.Then, we present QeiHaN, a novel NDP accelerator that implements the above LOG2 quantization-shifting engine and efficient weight storage scheme for high-performance low-energy DNN inference. QeiHaN is implemented on top of a Neurocube-like architecture, but extended with an enhanced input stationary dataflow. The extra hardware required for our technique is modest since most of the components are already available in the baseline. QeiHaN only requires a small set of additional comparators and integer adders to perform the LOG2 quantization. Then, we also replace the multipliers by simple bit-shift logic, reducing the computational cost and the overall area of the PEs. Our experimental results show that the overheads are minimal compared to the savings in memory accesses and multiplications.To summarize, this paper focuses on efficient DNN inference leveraging logarithmic quantization in NDP 3D-stacked DRAM-based accelerators. The main contributions are: * We analyze the distribution of exponents of the logarithmically (i.e. LOG2) quantized activations in multiple layers of modern DNNs including CNNs, RNNs, and Transformers. We observe that a huge percentage of the exponents are negative, leading to potential memory savings as a result of reducing the accesses to only the useful bits of the weights. * We propose a novel data layout and an optimized data flow to exploit the bank-level parallelism of 3D-stacked memory together with the LOG2 quantization of activations. Each memory bank stores a different subset of the bits of the uniformly quantized weights to allow for parallel accesses to the required bits of the bit-shifting operations. On average, we reduce the memory accesses due to the weights by 25% compared to a standard memory organization. 
* We present QeiHaN, a 3D-stacked DRAM-based hardware accelerator that implements our data layout and dataflow for efficient DNN inference. We evaluate QeiHaN for several DNNs. QeiHaN improves performance by 1.4x and reduces energy consumption by 1.3x on average over NaHiD, a baseline accelerator implementing the same dataflow and quantization as QeiHaN but with a standard memory organization for weights. Compared to Neurocube <cit.>, QeiHaN achieves 4.3x speedup and 3.5x energy savings on average.The rest of the paper is organized as follows. Section <ref> introduces some preliminaries for QeiHaN and provides a summary of works related to 3D memory DNN accelerators. Section <ref> discusses the observations on the logarithmic quantization of activations for a modern set of DNNs. Section <ref> describes the architecture of QeiHaN including the implementation details of the main hardware components. Section <ref> presents the evaluation methodology and Section <ref> discusses the experimental results of QeiHaN on different networks. Finally, Section <ref> concludes the paper by summarizing the key insights of this design alongside the overall performance.§ BACKGROUND & RELATED WORKIn the following subsections we review some terminology and concepts that may be helpful throughout this paper. First, we give a general description of DNNs, including the main categories and different types of layers. Next, we review DNN quantization and common dataflows of DNN accelerators. Finally, we discuss 3D memory architectures, which offer more opportunities to implement a highly efficient DNN accelerator in terms of both performance and energy consumption. §.§ Modern DNNsDeep Neural Networks (DNNs) are classified into three main categories. First, Multi-Layer Perceptrons (MLP) consist of multiple Fully-Connected (FC) layers in which every input neuron is connected, via synapses with particular weights, to every output neuron. Second, Convolutional Neural Networks (CNN) are composed of multiple convolutional layers to extract features, usually followed by one or several FC layers to perform the final classification. CNNs, such as AlexNet <cit.>, have demonstrated to be particularly efficient for image and video processing. Finally, Recurrent Neural Networks (RNN) <cit.> are made of multiple layers of cells with feedback connections, stacked on top of each other. RNN cells store information from past executions to improve the accuracy of future predictions. The most popular RNN architecture is the Long–Short Term Memory (LSTM) cell, which consists of multiple single-layer FC networks commonly referred as gates. PTBLM <cit.>, an example of LSTM-based RNN, is used for various applications like language modeling, speech recognition, and machine translation.Attention-based DNNs, such as the Transformer <cit.> and all the BERT <cit.> variants, have become the state-of-art solution for important machine learning tasks such as natural language processing <cit.>, computer vision <cit.>, and video analysis <cit.>. These models have recently received special attention from the machine learning community for being extremely efficient in terms of both accuracy and performance. Transformers use attention mechanisms to gather information about the relevant context of a given input (i.e., a word of a sentence), and then encode that context in a vector. 
The attention mechanism allows to grab context information from distant parts of an input sequence to help understand its meaning, and it is implemented in the form of multiple feed-forward FC layers. However, the benefits of these networks come at the cost of long execution time due to the large memory footprint and low computation-to-memory access ratio. FC layers exhibit different characteristics with respect to CONV layers: weights are not reused by different neurons and the computation-to-memory access ratio is significantly smaller, i.e., FC layers are more memory intensive.Each type of DNN is effective for a specific subset of cognitive applications. Moreover, for each application, each DNN has a different arrangement of layers with specific operations. FC and CONV layers take up the bulk of computations in most DNNs. Other types of layers performing pooling, normalization, or activation functions are also common. However, these other layers have no synaptic weights and represent a very low percentage of the DNN execution time. In this paper, we focus on optimizing the performance and energy efficiency of hardware accelerators for the inference of FC and CONV layers in modern MLPs, CNNs, RNNs, and Transformers. §.§ DNN QuantizationQuantization is a highly popular technique to map values from a continuous range to a discrete set. The main purpose of quantization is to compress the original DNN models to reduce the memory footprint and the computational cost with a minor impact on accuracy. Equation <ref> shows an example of a function that quantizes real values (in floating-point, FP, precision) and maps them to an integer range. Q(r) = INT(r/s) - z where Q(r) is the quantized value, r is a FP value, s is a scaling factor, and z is an integer offset. The INT function is a rounding to the nearest value. This method is also referred to as linear uniform quantization since the resulting quantized values (a.k.a. quantization levels) are uniformly spaced.Recently, non-uniform quantization schemes have been proposed to further reduce the memory pressure. These methods have been designed for DNN models with tensors that have a bell-shaped long-tailed distribution of weights and activations <cit.>. Logarithm quantization is en example of a non-uniform scheme, where the quantization levels increase exponentially instead of linearly <cit.>. The Logarithmic Quantization (LQ) <cit.> offers smaller numerical precision (i.e. bitwidth) with lower accuracy loss compared to the linear quantization by exploiting the non-uniform distribution of tensors. QeiHaN employs uniform quantization for the weights, and a logarithmic base-2 (LOG2) quantization for the activations of all the FC/CONV layers. Section <ref> provides more details on the LOG2 quantization and its main benefits. §.§ Dataflows in DNN AcceleratorsThe dataflow of a DNN accelerator is defined as the mapping and scheduling of the computations as well as the data partitioning across compute units. The dataflow that is most effective to reduce the memory accesses and data movements to optimize performance and energy efficiency depends on the target cognitive computing task and hardware architecture <cit.>. The dataflow determines the storage requirements and communication patterns among main memory, local on-chip buffers inside PEs, and compute units.In previous works <cit.>, the election of the dataflow is based on minimizing the data movement of the inputs, outputs or weights. 
Therefore, DNN accelerators tend to follow one of these dataflows: Weight Stationary (WS), Output Stationary (OS), and Input Stationery (IS). In OS, each PE computes an output neuron at a time <cit.>. In the WS/IS dataflows, each PE pre-loads a set of weights/inputs from memory to local buffers, and those are used to perform all associated computations <cit.>.QeiHaN uses an input stationary dataflow, which means that each input of a given layer is read and reused, until all the related computations are done, before reading the next input. The IS dataflow is the most suitable for our logarithmic quantization of DNN activations and efficient weight storage scheme. We compare QeiHaN with two baseline accelerators, one with OS dataflow and the other with IS dataflow. §.§ 3D-Stacked MemoryHigh-density 3D memory is a promising technology for the memory system of DNN and other domain-specific accelerators <cit.>. It consists of stacking multiple memory dies on top of each other, which increases the memory capacity and bandwidth compared to 2D memory, and also reduces the access latency due to the shorter on-chip wiring interconnection. These aspects lead to an overall improvement in both energy efficiency and performance. The 3D memory dies are commonly based on DRAM, but the integration of other memory technologies is being actively researched with very promising results. On the other hand, recent advances in low-capacitance through-silicon vias (TSVs) technology have enabled 3D memory that includes a few DRAM dies on top of a logic chip, within a single package <cit.>. Although there are numerous implementations of 3D-stacked memory technologies, until now, the Hybrid Memory Cube (HMC) <cit.> by Micron and the High Bandwidth Memory (HBM) <cit.> from AMD/Hynix are the preferred choices for most DNN accelerator proposals <cit.>.HBM and HMC are designed for high performance data-centric applications. Both are composed of vertically stacked DRAM dies with a single logic layer at the bottom. These memory technologies take advantage of Through-Silicon Vias (TSVs) to enable high-bandwidth and low-latency communication between the stacked memory layers. In HBM, each DRAM die is partitioned horizontally, and different partitions on different dies are treated as independent memory channels. On the other hand, in HMC, each DRAM die is divided into multiple partitions in a 2D grid where the corresponding partitions in the vertical direction form a single vault. Both HBM and HMC can exploit memory-level parallelism by organizing the large number of TSVs into multiple independently-operated channels. This allows multiple partitions in the DRAM die to be accessed simultaneously, further enhancing memory bandwidth and overall system performance.NDP systems employing HBM or HMC associate the PEs of the logic die with each channel or vault to efficiently utilize the memory-level parallelism and achieve high data processing throughput. The choice between HBM and HMC would depend on the specific requirements of the NDP system, and the desired trade-offs between memory bandwidth, energy efficiency, and integration with the host processor. §.§ 3D-stacked DRAM-based DNN AcceleratorsNeurocube <cit.> is a programmable DNN accelerator integrated into the logic layer of a 3D stack DRAM-based HMC. The Neurocube architecture consists of clusters of processing engines (PE) connected by a 2D mesh NoC in the processing layer. 
Each PE of the logic layer is associated to a single memory vault, and can operate independently, and communicate through the TSVs and a vault controller (VC). The organization of each PE includes multiple memory buffers to store weights and inputs as well as some units to perform MAC operations. In addition, each vault controller includes a Programmable Neurosequence Generator (PNG) unit that generates the commands to orchestrate the corresponding operations of the DNN layers. The PNGs employ a simple finite state machine (FSM) with counters that are initialized depending on the number of MAC units in each PE and the DNN layer topology. Figure <ref> shows a general overview of the Neurocube architecture and a PE. In this work, we implement a Neurocube-like baseline accelerator to assess the performance improvement and energy savings of QeiHaN.In the same line of research, TETRIS <cit.> is another popular DNN accelerator based on HMC. Like Neurocube, TETRIS presents an optimized hardware architecture coupled with software scheduling and partitioning techniques that exploit the inherent characteristics of 3D memory. First, the authors show that the high throughput and low energy characteristics of 3D memory allow to rebalance the NN accelerator design, using more area for processing elements and less area for SRAM buffers. Second, they move some portions of the NN computations close to the DRAM banks to decrease the bandwidth pressure and increase performance and energy efficiency. Finally, they develop an optimized dataflow scheduler and hybrid partitioning scheme that parallelizes the DNN computations within and across multiple vaults and stacks.§ LOG2 QUANTIZATION ANALYSISDNN quantization allows to reduce the numerical precision of activations and weights, which in turn favors the memory footprint and the computational cost of hardware accelerated DNN architectures. Therefore, quantization techniques have been widely explored in previous studies as described in Section <ref>. In particular, logarithmic quantization takes advantage of the non-uniform distribution of tensors to significantly reduce the numerical precision of input activations and/or weights with a minor impact in accuracy. This section analyzes the effects of the LOG2 quantization of activations on multiple DNN models and layers. First, we explore the benefit of the logarithmic encoding of activations to simplify the dot-product operations. Then, we provide some hints on reducing the number of accesses to the main memory by exploiting the characteristics of the 3D memory and bit-shifting operation.Some prior works have used linear uniform quantization to compress the DNN parameters. However, we observe that activations and weights of most DNNs do not follow a uniform distribution, which causes a huge impact in terms of accuracy loss when the precision is further reduced to very low bitwidths (i.e. <8b). Specially in recent DNNs that are extremely deep and can have hundreds of layers, the error is propagated and expanded among layers.On the other hand, logarithmic base-2 (LOG2) quantization <cit.> leverages the usually non-uniform distribution of activations and weights in a pre-trained DNN. The study in <cit.> compared the impact of linear and LOG2 quantization on activations and weights of VGG16 and AlexNet. Their analysis shows an exponential distribution of activation values around 0. They also concluded that activations are more robust to LOG2 quantization than weights for several reasons. 
First, CONV layers reuse the weights multiple times when computing the dot-products, propagating the error across the inputs/outputs of all layers. Second, the range of the weights is not as wide as the activations <cit.>, and their density is often higher than that of the activations, that is, the amount of weights is huge, and their range is narrow. We performed an experiment applying LOG2 quantization to the activations and weights of modern DNNs, together and individually, and reached similar conclusions regarding weights being more sensitive to the LOG2 quantization error than activations. This suggests that the base-2 may not be the best fitting exponential base for quantizing the weights.In this paper, we apply logarithmic base-2 (LOG2) quantization to the input activations of all the FC and CONV layers of a set of DNNs. On the other hand, we apply INT8 uniformly distributed linear quantization to the weights of these layers based on Equation <ref>. These layers represent close to 100% of the total execution time for typical neural networks. Next, we analyze the distribution of exponents of the quantized activations, and the accuracy loss due to the LOG2 quantization. This scheme also allows us to efficiently re-organize offline the weights in-memory without additional expensive hardware, and exploit some of the intrinsic characteristics of the 3D-stacked memory, as described below. For each input x and each layer l, the LOG2 quantization is applied according to the following equations: LogQuant(x) =0 x= 02^x̃ otherwise.x̃ = Clip(Round(log_2(|x|))), min , max), where Clip(x, min, max)= min x⩽ minmax x⩾ maxxotherwise. The exponent x̃ is computed based on Equation <ref>. The Round function is defined as rounding to the nearest integer, and the clipping function in Equation <ref> forces the exponent values to be in the range of [min, max], where min = -(2^n-1) and max = (2^n-1-1). Assuming an n-bit exponential quantization (e.g. n=4), the number of unique intervals is 2^n-1. We store an extra bit for the sign of the value, but in most layers it is not necessary since the activations are all positive. The min exponent is also used as a special case to represent the exactly zero activation value, so all small activations are effectively pruned due to the clipping.The main benefit of the LOG2 quantization is that it not only reduces the numerical precision but also eliminates the bulky digital multipliers by using simple shift and ADD operations. The approximated activation values x̃ are stored as exponents to reduce the memory pressure and the computational complexity. Equation <ref> shows the transformed dot-product operation with the bit-shifting of w_i weights by the x̃_i exponents of the base-2 powers representing the activations, where x_i is quantized to an integer exponent using Equation <ref>. Note that the positive exponents will lead to a shift to the left, while negative exponents result in a shift to the right. w^Tx = ∑_i=1^n w_i× x_i≃∑_i=1^n w_i× 2 ^ x̃_i = ∑_i=1^n Bitshift(w_i, x_i) In order to further exploit the LOG2 quantization of the input activations, a key observation is that, if a given activation is represented with a base-2 power of a negative exponent, the bit-shifting to the right will discard the least significant bits (LSB) of the weights that are multiplied by the corresponding activation. 
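To make these equations concrete, the following Python sketch (our illustration; the helper names and sample values are not from the QeiHaN implementation) applies the 4-bit LOG2 quantization to a few activations and evaluates the dot product with INT8 weights using only shifts and adds; a negative exponent becomes a right shift, so the low-order bits of the weight never contribute to the sum.

```python
import math

N_BITS = 4
EXP_MIN, EXP_MAX = -(2 ** (N_BITS - 1)), 2 ** (N_BITS - 1) - 1    # exponent range [-8, 7]

def log2_quant(x):
    """Return (sign, exponent) with |x| ~= 2**exponent, or None if the activation is pruned."""
    if x == 0.0:
        return None
    e = max(EXP_MIN, min(EXP_MAX, round(math.log2(abs(x)))))      # Clip(Round(log2|x|), min, max)
    if e == EXP_MIN:                                              # reserved code: treated as zero
        return None
    return (1 if x > 0 else -1), e

def shift_add_dot(activations, weights_int8):
    """Approximate w^T x with Bitshift(w_i, x_i) instead of multiplications."""
    acc = 0
    for x, w in zip(activations, weights_int8):
        q = log2_quant(x)
        if q is None:                    # zero / clipped activation: skip all related work
            continue
        sign, e = q
        acc += sign * ((w << e) if e >= 0 else (w >> -e))         # right shift drops |e| LSBs of w
    return acc

acts = [0.12, -0.55, 0.0, 0.031, 1.7]
w8   = [97, -42, 15, 120, -7]            # INT8 weights
print("exact:", sum(x * w for x, w in zip(acts, w8)), " shift-add:", shift_add_dot(acts, w8))
```

The two printed values differ only by the quantization error of the base-2 rounding.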
In other words, during the right bit-shift operation, and assuming that weights are uniformly quantized to 8 bits, only 1 ⩽ 8-|x̃| ⩽ 7 bits of the weights are required while the rest can be avoided, reducing the memory accesses at a fine granularity.To demonstrate the potential of this idea, we perform an analysis of the exponents resulting from a LOG2 4-bit quantization of the activations in all FC/CONV layers of a popular set of DNNs from different domains. All the evaluated networks have been re-trained, reducing the accuracy loss after quantization to less than 1% in all cases. Figure <ref> shows the distribution of the non-zero, quantized activations. On average, more than 71% of the activations have negative exponents. PTBLM (98%), BERT-Base (82%), and BERT-Large (85%) have a similar distribution of exponents with a high concentration of negative values centered around -3, while the Transformer (57%) and AlexNet (36%) have the most symmetric distribution resulting in the lowest amount of negative exponents. QeiHaN, our proposed solution for efficient DNN inference, is based on exploiting this observation.We define the estimated memory savings as the percentage of bits from the weights that can be ignored because the negative exponents of the base-2 activations render those bits useless when performing the bit-shifting operation. Figure <ref> shows that the memory savings are directly related to the histograms of the quantized activations. On average, 25% of the memory accesses can be avoided. In addition, zero-activations are pruned in both, the baseline and our proposal, further reducing memory accesses. However, the conventional storage of weights in-DRAM is not suitable to exploit this optimization. The following section describes how to re-organize the weights in-memory to take full advantage of the LOG2 quantization.§ QEIHAN ACCELERATORThis section describes the hardware support required to implement QeiHaN. First, we present the main hardware components of the QeiHaN accelerator. Next, we describe the memory organization of weights and activations. Finally, we show how FC and CONV layers are executed in the accelerator using QeiHaN with an enhanced input stationary dataflow. §.§ ArchitectureThe goal of QeiHaN is to optimize the memory pressure by performing an implicit in-memory bit-shifting of the weights in the FC and CONV layers of different DNNs. QeiHaN leverages a large number of negative exponents after the LOG2 quantization of activations, and an efficient weight storage scheme, to save memory accesses. Similar to Neurocube <cit.> and TETRIS <cit.>, QeiHaN is based on NDP architectures <cit.> that leverage 3D stacked memory for high-performance, low-energy DNN inference. As described in Section <ref>, the 3D memory consists of multiple DRAM dies connected via TSVs to a logic die. DRAM dies are divided into vertical partitions named vaults that resemble conventional DDRx channels, which can operate independently. In addition, each vault is connected to a tile in the logic die to perform arithmetic computations on the stored data.Figure <ref> shows a high-level schematic of the QeiHaN architecture. Each tile in the logic die consists of a single PE, a Vault Controller (VC), a Router (R), and a PE Controller (PEC). The VC manages all the memory operations within the corresponding vault. The router provides local access between a given PE and its related vault, as well as remote access to the other vaults/PEs through a 2D mesh network. 
In addition, the PEC orchestrates the communication between the PE and the router by generating the addresses of the required data in each PE. Finally, the PE is the core of the tile, and is responsible for accelerating the DNN operations. The main components of a PE include the blocks of SRAM used for storing the inputs (IB), outputs (OB), and weights (WB), the LOG2 Quantization (LOG2-Quant) unit, the Weight Decoder and Shifter (D&S) unit, the ADD array, and the Special Function Unit (SFU). Below is a detailed description of each component:Memory Buffers: Each PE in the logic die has three individual on-chip SRAM buffers to store and reuse the data fetched from the main memory according to the dataflow of the accelerator. First, a small Input Buffer (IB) stores blocks of input FP16 activations until filling the whole buffer space. Second, an Output Buffer (OB) stores the partial and final results that are produced during the execution of a DNN layer. Third, a Weights Buffer (WB) keeps the required bits of the weights for the bit-shifting operations. All the SRAM memories are double buffered to load data from main memory while performing computations, avoiding stalls in the pipeline, and highly multi-banked to achieve the bandwidth required to feed a large number of functional ADD units. In addition, all these buffers are sized considering the worst case scenarios, that is, the biggest layer for the I/O buffer, and all the 8 bits of M weights for the WB, where M is the bus size of a vault in the 3D-stacked memory.LOG2-Quant Unit: This unit is in charge of the LOG2 quantization of input activations from FP16 to base-2 exponential values according to Equation <ref>. Figure <ref> shows the hardware required to compute the Round(log_2(|x|)) function of Equation <ref>. Unlike previous works that use relatively complex hardware <cit.>, we implement this function with a very simple scheme. In particular, we perform a comparison between the fractional part of the value |x| and the √(2) using a simple comparator. The standard half precision (FP16) format of a value x is encoded with a sign bit, mantissa m, and exponent e. The exponent e is already expressed as an integer in base-2 format, so the LOG2 function of |x| can be implemented by applying the logarithm on the mantissa m as shown in Equation <ref>. Taking into account the hidden bit of the mantissa, m is always a value between [1, 2). Therefore, the term Round(log_2m) can be further simplified by Equation <ref>. In the next step, each quantized value x̃ in QeiHaN is represented by a 4-bit exponent through a clipping function (i.e. Equation <ref>), resulting in a range of [-8, 7]. An extra bit may be used for the actual sign of the activations, except for when all are known to be positive due to the ReLU activation function. In addition, all zero activations will skip the quantization and all the related computations and memory accesses. Similarly, all the small activations clipped to -8 will be effectively pruned (rounded to zero). Finally, each quantized activation is sent to the D&S unit for further processing. Round(log_2 |x|) = e + Round(log_2 m)1 ⩽ m < 20 ⩽ log_2 m < 1 ⇒ Round(log_2m) =0 m < √(2)1 m ⩾√(2) Weight Decoder & Shifter Unit (D&S): The weights that multiply non-zero activations are decoded from a compressed stream and bit-shifted by appending the necessary amount of zeros based on the exponent from the LOG2-Quant unit. 
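The comparator-based rounding described above admits a compact software analogue: for a normal FP16 input, Round(log2|x|) is the unbiased exponent plus one whenever the stored mantissa is at least √2. The sketch below is our own illustration on the FP16 bit pattern, not the RTL of the LOG2-Quant unit, and it assumes a normal, non-zero input.

```python
import math, struct

SQRT2_FRAC = math.ceil((math.sqrt(2) - 1.0) * (1 << 10))   # sqrt(2) threshold on the 10-bit mantissa

def fp16_round_log2(x):
    """Round(log2|x|) for a normal FP16 value, using only the exponent field and one compare."""
    bits = struct.unpack('<H', struct.pack('<e', abs(x)))[0]
    frac = bits & 0x3FF                   # stored mantissa (hidden leading 1 implied)
    exp  = ((bits >> 10) & 0x1F) - 15     # unbiased exponent e
    return exp + (1 if frac >= SQRT2_FRAC else 0)

for v in (0.12, 0.55, 0.031, 1.7, 3.0):
    print(v, fp16_round_log2(v), round(math.log2(v)))       # comparator result vs. direct rounding
```

Each printed pair agrees for these sample values, so one integer add and one compare reproduce the rounded base-2 logarithm.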
According to the exponent value x̃, the PE controller determines the required bits of the weights that have to be loaded from DRAM and stored into the WB. A non-negative exponent requires loading all 8 bits of each weight, and the D&S unit shifts the weights x̃ positions to the left before sending the results to the ADD array. Otherwise, we only need to fetch the 8-|x̃| MSBs of the weights. For example, given a negative exponent x̃=-3, only the 5 MSBs of each weight are loaded into the WB. Then, the D&S unit reads the selected bits of the weights from the WB and generates a set of 16-bit d values, where d is the amount of adders in the ADD array. Note that the bit-shifted weights are the result of the traditional multiplication of activations and weights. In order to use this unit efficiently, QeiHaN reorganizes the weights in-memory to a bit-level granularity as described below in Section <ref>.ADD Array: This array is made of d independent ADD units that are used to accumulate the products of each activation by the corresponding weights. According to the sign of the activation value, not the exponent, the bit-shifted weight is added/subtracted to/from the partial outputs computed in previous cycles and stored in the OB. The LOG2 quantization removes the need for any multiplier, so the partial outputs are loaded from the OB and the bit-shifted weights come from the D&S. As a result, in a single execution all the adders compute partial outputs related to the same input activation from d different convolutional kernels or output neurons.Special Function Unit (SFU): The SFU is composed of units to perform non-linear activation functions, pooling, and normalization, among others. These functions are usually applied to the final outputs of the FC/CONV layers at the end of their execution, and tend to require more numerical precision in order not to lose accuracy. Thus, QeiHaN de-quantizes the resulting 16-bit integer outputs back to FP16 before using those functions. The non-linear functions are implemented with Look-Up-Tables (LUTs). §.§ Memory OrganizationThis section describes the memory organization of the accelerator, which refers to the data layout of weights and activations inside the DRAM of each vault and the on-chip buffers of the PEs.To illustrate it, the top of Figure <ref> shows an example of a small CONV layer with an input feature map (IFM) size of four channels (IC1-IC4), and an output feature map (OFM) size of two channels (OC1-OC2). On the other hand, the bottom of Figure <ref> shows how the input/output activations of the different channels are partitioned and distributed among the I/O buffers of two different PEs/Vaults.In QeiHaN, the input activations are divided channel-wise across all vaults, that is, all inputs of a given channel are stored in the same vault. In contrast, each vault allocates a portion of the corresponding partial outputs of all the channels. In CONV layers, the dimensionality of the inputs/outputs may be quite large, so we employ a blocking scheme to reduce the on-chip storage requirements by segmenting the IFM and OFM into N blocks or tiles per channel. The I/O Buffer only stores a subset of blocks for each assigned channel of the IFM and OFM, the block size being significantly smaller than the dimensions of the feature maps. Note that each Vault/PE is working on a different set of inputs but producing partial outputs of the same OFM channels. Hence, a reduction is required at the end of the execution to obtain the final outputs. 
Likewise, FC layers are a special case of CONV, where there is just a single block and input per channel (i.e. N=1).Figure <ref> shows the layout of M filters or kernels with P weights per channel each in the DRAM dies of each vault, where each partition includes 4 banks, for the example of Figure <ref>. Similar to the activations, the weights of each kernel are also distributed channel-wise across all vaults. The bits of the weights of the corresponding channels are interleaved in the different banks and partitions of the same vault. That is, the least-significant bit (LSB) of a subset of weights is stored in the first bank of a vault, then the next bit in the second bank and so on. This layout simplifies the implementation of our implicit bit-shifting scheme, as it is easy to locate all the bits of the weights that are required to operate with a given input in case some have to be skipped and others accessed.In addition, most 3D-stacked DRAM-based operations use a Closed-Page Policy to reduce power consumption <cit.>. Consequently, applications benefit from Bank-Level Parallelism but not from spatial locality. QeiHaN remaps the data to avoid internal organization bottlenecks and, hence, requests to different banks can be concatenated/overlapped to effectively achieve high bandwidth. Note that weights are known statically so their organization can be pre-arranged offline. §.§ DataflowNeurocube <cit.> follows an output stationary (OS) dataflow in which each PE computes a subset of outputs at a time. This dataflow is inefficient to exploit the resources of the 3D memory, as demonstrated by our results in Section <ref>. On the other hand, QeiHaN uses an enhanced input stationary (IS) dataflow coupled with a blocking scheme to efficiently exploit the LOG2 quantization of the input activations, minimizing the memory accesses to both weights and activations. Figure <ref> illustrates the dataflow of the QeiHaN accelerator with a flowchart. The proposed dataflow includes three main stages marked in different colors: Pre-Processing (Gray), Execution (Orange), and Post-Processing (Blue).In the Pre-Processing stage, each PE reads input activations from DRAM until filling the input buffer space. That is, inputs (outputs) are pre-loaded (processed) on-demand by blocks, activations are stored in FP16 format, and the size of the blocks is computed according to the feature map sizes and the I/O buffer capacity. In the IS dataflow, each PE of the accelerator fetches and processes one input of a block at a time from the I/O buffer, and performs all the associated computations before moving to the next input. First, the LOG2 quantization and clipping function is applied to obtain the 4-bit exponent x̃. Then, QeiHaN also performs a zero and small activation pruning. Concurrently, the reading of input blocks from DRAM continues in the background, as long as there is space in the buffers, to hide the memory latency while doing computations of the current blocks.In the Execution stage, and based on the value of the exponent x̃, a set of M useful bits of INT8 uniformly quantized weights of M different kernels related to the input are read from DRAM at a time, where M is determined by the internal 3D-stacked memory bus size (e.g. 32-bit). Thus, in each request, the bits in the same position for M different weights are loaded into the weights buffer, and multiple requests are made until all the required bits are retrieved. 
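The bit-plane layout and the partial fetch can be modelled in a few lines. The following sketch is our own software analogue (the array names, the M = 32 request width, and the random weights are assumptions, not the actual DRAM mapping): each bit position of a group of weights lives in its own plane, and for a negative activation exponent only the top 8-|x̃| planes are read before the shifted weights are reassembled.

```python
import numpy as np

M = 32                                              # weights returned per memory request
weights = np.random.default_rng(0).integers(-128, 128, size=M).astype(np.int8)

# offline layout: plane i holds bit i (two's complement) of every weight, one plane per bank
u = weights.view(np.uint8)
planes = [(u >> i) & 1 for i in range(8)]           # planes[0] = LSBs, planes[7] = sign bits

def fetch_shifted(exponent):
    """For a negative LOG2 exponent, rebuild w >> |exponent| reading only the needed planes."""
    skip = min(-exponent, 7)                        # LSB planes the right shift would discard
    acc = np.zeros(M, dtype=np.uint16)
    for i in range(skip, 8):                        # 8 - skip requests instead of 8
        acc |= planes[i].astype(np.uint16) << (i - skip)
    width = 8 - skip                                # sign-extend the truncated field to int16
    out = acc.astype(np.int16)
    out[out >= (1 << (width - 1))] -= (1 << width)
    return out

e = -3
print(np.array_equal(fetch_shifted(e), weights.astype(np.int16) >> -e))   # True
```

With x̃ = -3, five of the eight per-bit requests suffice for this weight group; the skipped planes are exactly the bits the right shift would discard.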
Next, the bits of the weights are decoded and bit-shifted by appending the corresponding zeros, resulting in 16-bit integer values. These results are grouped and sent to the ADD array unit in batches of d values, where d is the number of adders (e.g. 16). In parallel, the partial outputs from previous executions are loaded from the output buffer. Then, the accelerator performs d ADDs to accumulate the results of each output with the shifted weights, followed by the write-back to the output buffer. This stage is repeated until all the weights of all filters related to the current activation are processed.Finally, in the Post-Processing stage, QeiHaN reduces the partial outputs of each PE. The reduction starts as soon as enough activations complete all their operations. Then, in a centralized PE, the final results are de-quantized, and the SFU performs the activation and pooling operations before distributing and storing the corresponding activations back to each vault. After processing all the blocks of inputs the layer execution is completed. Note that all the main steps are carried out in parallel in a deep pipeline.§ METHODOLOGYThis section presents the methodology for evaluating QeiHaN, our NDP accelerator for DNN inference.Workloads. Our objective is to prove that our scheme provides important savings for multiple applications and different DNN models. To this end, we evaluate QeiHaN on five state-of-the-art DNN workloads from different domains, summarized in Table <ref>. Their model sizes range from medium to large scale with several hundreds of MBytes in memory footprint. In particular, we include the ILSVRC 2012 winner, AlexNet <cit.> (5 CONV and 3 FC layers), one of the most popular CNNs for image classification with the ImageNet dataset, and PTBLM <cit.> (2 LSTM layers), an RNN that consists of LSTM cells for language modeling using the Penn Treebank dataset. In addition, we employ three attention-based networks: Transformer (6 Encoders, 6 Decoders), BERT-Base (12 Encoders, 110M Parameters), and BERT-Large (24 Encoders, 340M parameters). The Transformer <cit.> model is evaluated on the machine translation task of Newtest2014 (English to German) which contains 3003 sentences. BERT-Base <cit.>, and its larger variant BERT-Large, are evaluated on the question-answering task of SQuADv1 <cit.>. Finally, all these networks have been re-trained in order to recover the accuracy after quantization, that is, less than 1% loss. Accuracy is reported as Top-1 for image classification (higher is better), perplexity for language modeling (lower is better), bilingual evaluation understudy (BLEU) for machine translation (higher is better), and weighted average of the precision and recall (F1) for question-answering (higher is better).System models and simulation. We have developed a simulator that accurately models three different systems, QeiHaN and two baseline accelerators. The first baseline is inspired in Neurocube <cit.>, described in Section <ref>, but with some optimizations, such as a lower quantization bitwidth, to isolate the effects of our proposal when comparing the two. The second baseline, named NaHiD, implements the same architecture, dataflow, and quantization scheme as QeiHaN but with a standard memory organization of the weights. That is, NaHiD also replaces multiplications by bit-shift operations and additions but, in contrast to QeiHaN, it requires loading all the bits of the weights from memory. 
This comparison allows us to infer the main benefits due to the QeiHaN's efficient 3D memory-centric weight storage scheme. Table <ref> shows the parameters of the experiments. For a fair comparison, we set most of the configuration parameters to match the Neurocube baseline: a 3D-stacked memory of 4 GB with 4 DRAM dies partitioned into 4 × 4 vaults and PEs, an internal 3D memory bandwidth of 10 GB/s per vault, about 2.5 KB of SRAM per PE, 16 MAC/ADD units per PE, and a frequency of 300 MHz in the logic die. QeiHaN and NaHiD require slightly smaller memory buffers (i.e. 2KB of OB, 64B of IB, and 64B of WB) due to the different dataflow.Regarding area and energy consumption evaluation, the logic components are implemented in Verilog, including all the additional components required by QeiHaN, and synthesized to obtain the delay, area, and power using the Synopsys Design Compiler <cit.>, the modules of the DesignWare library and the technology library of 28/32nm from Synopsys. On the other hand, we characterize the memory buffers of the accelerator by obtaining the delay, energy per access, and area using CACTI-P <cit.>. We use the configurations optimized for low power and a supply voltage of 0.78V. Finally, the energy consumption of the 3D-stacked memory is estimated by using an HMC configuration of DRAMSim3 <cit.>. The results obtained with the aforementioned tools are combined with the activity factors and memory traces provided by our simulator to obtain the dynamic and static power of the accelerators.§ EVALUATIONThis section evaluates the performance, energy efficiency, and memory activity of our proposal. First, we introduce an analysis of the total number of memory accesses to the 3D-stacked DRAM dies after applying the QeiHaN scheme. Then, we present the speedups and energy savings achieved by QeiHaN compared to the Neurocube and NaHiD baselines. Finally, we discuss the accelerator overheads. §.§ 3D-stacked Memory AccessesFigure <ref> reports the normalized total 3D memory accesses of QeiHaN over the two baseline accelerators. This total includes both, memory accesses for reading/writing the weights and the input activations. On average for our set of DNNs, QeiHaN reduces the total DRAM accesses by 72.4% and 25% over Neurocube and NaHiD, respectively. The great reduction of memory accesses with respect to the baselines is mainly due to constraining the accesses to only the required bits of the weights for the bit-shifting operations. Moreover, QeiHaN shows a higher reduction of memory accesses over Neurocube due to two main reasons. First, the enhanced IS dataflow of QeiHaN requires each input activation to be accessed just once during the execution of a layer. In contrast, the OS dataflow of Neurocube may require multiple accesses to the activations. Second, QeiHaN performs pruning of zero and small activations after applying the quantization, removing all the related memory accesses to the weights. The efficiency of the activation pruning is limited in Neurocube due to its OS dataflow, so it is not implemented. On the other hand, compared to NaHiD, the reduction is well correlated to the estimated memory savings due to the huge amount of negative exponents as discussed in Section <ref>. Both QeiHaN and NaHiD use the same dataflow and pruning scheme, so both access the same input activations, and the savings come from the weights. §.§ PerformanceFigure <ref> shows the speedups achieved by QeiHaN. 
Compared to Neurocube, QeiHaN provides consistent speedups for the five DNNs that range from 8.69x (AlexNet) to 1.24x (Transformer), achieving an average performance improvement of 4.25x. The reduction in execution time is due to QeiHaN's efficient memory organization and enhanced IS dataflow. The number of memory accesses is dramatically reduced since only the meaningful bits of the weights required by the shift operations are loaded. In addition, QeiHaN employs a novel weight storage scheme to exploit the bank-level parallelism of the 3D memory. Moreover, QeiHaN overlaps the different stages of the dataflow in a deep pipeline, shortening the critical path of the execution. As shown in Figure <ref>, AlexNet and PTLBM exhibit the highest reduction in memory accesses and, hence, they obtain the largest performance improvements. The difference in speedup between these two networks and the attention-based models is in the percentage of zero and small activations that are effectively pruned in QeiHaN, skipping part of the execution and post-processing stages. The effect of activation pruning is minor in Transformer (3%), BERT-Base (7%), and BERT-Large (13%), but significant in AlexNet (47%) and PTLBM (55%).Compared to NaHiD, the benefits of QeiHaN are more modest but still quite important, achieving an average speedup of 1.38x. The main reason is that both accelerators benefit from the same architecture, dataflow, quantization, and activation pruning scheme. Therefore, the improvements come mainly from the novel memory layout for storing the weights in the 3D memory, and the corresponding reduction of memory accesses by leveraging the logarithmic quantization. PTBLM obtains the largest benefits, achieving an speedup of 1.86x whereas AlexNet gets the lowest improvements, that is, 1.07x speedup. These results are directly proportional to the percentage of negative exponents shown in Figure <ref>. §.§ Energy ConsumptionFigure <ref> reports normalized energy savings. On average, QeiHaN reduces the energy consumption of the accelerator by 3.52x and 1.28x over Neurocube and NaHiD, respectively. As we observed for performance, the energy savings are well correlated with the number of negative exponents and the corresponding reduction of memory accesses. These energy savings are due to two main reasons. First, dynamic energy is reduced due to the savings in multiplications and memory accesses. Second, the performance improvements shown in Figure <ref> provide a reduction in static energy. Again, PTBLM obtains the largest benefits, achieving a reduction of 8.2x and 1.6x in energy compared to both Neurocube and NaHiD respectively.Figure <ref> shows the energy breakdown of QeiHaN and NaHiD over Neurocube. The figure shows results for the five neural networks including the percentage of energy consumed by each major hardware block of the accelerators. As can be seen, the DRAM of the 3D-stacked memory (i.e. HMC) consumes most of the energy in all cases. The energy savings achieved by our proposal are significant, and are especially large in the 3D memory, since our scheme provides important savings in memory accesses for fetching the synaptic weights. In addition, the replacement of multipliers by simple bit-shift logic also results in smaller energy in the PEs. Note that the energy required for performing the logarithmic quantization is also included in the energy consumption of the PEs. §.§ AreaQeiHaN requires extra hardware in the PEs of the accelerator to perform the LOG2 quantization of activations. 
As shown in Figure <ref>, we implement the LOG2-Quant unit with a single comparator, one multiplexer, and one integer adder. These units represent less than 0.1% of the total area and energy. In addition, we replace the costly multipliers by simple bit-shift logic, and the size of the SRAM buffers is also smaller, reducing the computational cost and the overall area of the PEs. The area overhead of QeiHaN in the logic die due to 16 PEs is 0.389mm^2 (16 × 0.024mm^2) in 32nm. We can see that QeiHaN with 16 PEs fits in a small part of the logic die (68mm^2 <cit.>) of the 3D stack. In comparison, Neurocube extra storage and multipliers result in 20% more area than QeiHaN at the same technology node (i.e. 0.487mm^2). We do not evaluate the thermal constraints of QeiHaN since we expect them to be similar or lower than Neurocube due to the smaller area of our accelerator.§ CONCLUSIONSIn this paper, we show that the distribution of activations among different FC and CONV layers of a representative set of modern DNNs exhibits a high degree of negative exponents after the logarithmic quantization, resulting in a high number of right bit-shift operations. Then, we propose QeiHaN, a new 3D-stacked DRAM-based NDP accelerator that exploits the log quantization to replace multiplications and reduce memory accesses to only the useful bits of the weights. QeiHaN implements an implicit in-memory bit-shifting of the DNN weights coupled with an efficient weight storage scheme. We show that QeiHaN requires minor hardware changes over Neurocube, a state-of-the-art accelerator, mainly an additional quantization unit made of a small set of comparators. Our experimental results show that, on average, QeiHaN provides 3.5x energy savings and 4.3x speedup with negligible accuracy loss and lower area than Neurocube.§ ACKNOWLEDGEMENTThis work has been supported by the CoCoUnit ERC Advanced Grant of the EU’s Horizon 2020 program (grant No 833057), the Spanish State Research Agency (MCIN/AEI) under grant PID2020-113172RB-I00, and the ICREA Academia program.IEEEtranS
http://arxiv.org/abs/2310.18181v1
{ "authors": [ "Bahareh Khabbazan", "Marc Riera", "Antonio González" ], "categories": [ "cs.AR" ], "primary_category": "cs.AR", "published": "20231027144947", "title": "An Energy-Efficient Near-Data Processing Accelerator for DNNs that Optimizes Data Accesses" }
It is well established that a Dirac point of a periodic structure can bifurcate into in-gap eigenvalues if the periodic structure is perturbed differently on the two sides of an interface and if a common band gap can be opened for the two perturbed periodic structures near the Dirac point. This paper addresses the less-known situation when the perturbation only lifts the degeneracy of the Dirac point without opening a band gap. Using a two-dimensional waveguide model, we construct a wave mode from the bifurcation of a Dirac point of a periodic waveguide. We prove that when the constructed mode couples with the Floquet-Bloch modes of quasi-momentum away from the Dirac point, its associated eigenvalue has a negative imaginary part and the mode is a resonant mode that can radiate its energy into the bulk. On the other hand, when the coupling vanishes, the imaginary part of the eigenvalue turns to zero, and the constructed mode becomes an interface mode that decays exponentially away from the interface. It is believed that the developed method can be extended to other settings, thus providing a clear answer to the problem concerned with the bifurcation of Dirac points. § INTRODUCTION The study of localized waves in photonic systems has gained significant interest due to its potential applications in designing new optical devices <cit.>. One specific type of localized wave is known as the interface mode, which refers to waves whose energy is concentrated near the interface between two media. To create a structure that supports an interface mode, one approach is to perturb a periodic structure that has a Dirac point in its band structure differently on the two sides of the interface. A Dirac point is a special vertex in the spectral band structure of a periodic medium where two dispersion curves/surfaces intersect linearly or conically. For this approach to work, the perturbation to the periodic structure must open a band gap near the Dirac point, allowing the eigenvalue associated with the interface mode to bifurcate from the Dirac point and locate within the band gap. The condition that ensures band gap opening at a Dirac point is referred to as the spectral no-fold condition, which is defined and discussed in <cit.>. Under this condition, the in-gap eigenvalue bifurcated from Dirac points has been rigorously analyzed in various settings, including one-dimensional Schrödinger operators <cit.>, one-dimensional photonic structures <cit.>, two-dimensional Schrödinger operators <cit.>, the two-dimensional Helmholtz equation in a photonic waveguide <cit.>, and two-dimensional elliptic operators with smooth coefficients <cit.>. An intriguing question is whether an interface mode still exists if a perturbation lifts the degeneracy of the Dirac point without opening a band gap. In this scenario, it is conjectured that the mode bifurcated from the Dirac point will resonate with other Floquet-Bloch modes of energy near the Dirac point energy level but with quasi-momentum away from the Dirac point <cit.>. To date, a definitive answer to this conjecture is not yet available. In this paper, we provide a resolution to this conjecture by explicitly constructing a mode bifurcated from a Dirac point in a two-dimensional waveguide without the band gap opening condition.
Specifically, when the constructed mode couples with the Floquet-Bloch modes of quasi-momentum away from the Dirac point, its associated eigenvalue has a negative imaginary part and the mode is a resonant mode. On the other hand, when the coupling vanishes, the imaginary part of the eigenvalue turns to zero, and the constructed mode becomes an interface mode.The rest of the paper is organized in the following way.In Section 1.1, we provide a detailed setup of the problem and present our main results. In Section 2, we briefly review the Floquet-Bloch theory for periodic differential operators and introduce Green's functions for periodic waveguide structures. In Section 3, we present the asymptotic expansions of Bloch eigenvalues and eigenfunctions near the energy level of the Dirac point; see Theorem <ref> and <ref>. These results demonstrate that a “local” band gap can be opened near the Dirac point upon applying appropriate perturbations without opening a “global” band gap that can separate the two spectral bands therein. Finally, in Section 4, we construct a mode bifurcated from the Dirac point by using the layer potential technique. We prove that the eigenvalue associated with this bifurcated mode has a non-positive imaginary part. When this eigenvalue is real, the mode constructed is an interface mode that localizes near the interface; while when the eigenvalue is non-real, the mode is a resonant mode. §.§ Problem setup and main resultsWe consider the propagation of a time-harmonic scalar wave in a two-dimensional periodic photonic waveguide Ω⊂𝐑^2 (see Figure <ref>) at frequency √(λ){ -1/n^2Δ u-λ u=0, x∈Ω ,∇ u (x)·n_x=0 , x∈∂Ω , .where n_x denotes the outward normal at x∈∂Ω, and n=n(x) is the refractive index. We assume that * The domain Ω is connected and open in 𝐑^2 with the boundary ∂Ω being C^2. Moreover, it's strip-like in the sense that there exists a compact set S⊂𝐑 such that Ω⊂𝐑× S;* Ω is periodic with period 1 in the sense that for all x∈Ω, we have x+e_1∈Ω;* n∈ L^∞(Ω), n(x+e_1)=n(x), and n(x)≥ c>0 for some constant c.The primitive cell of Ω is denoted by Y:=Ω∩ ((0,1)×𝐑). In particular, we assume that system (<ref>) is reflection symmetric in the sense that the following hold:(1) For any (x_1,x_2)∈Ω,(-x_1,x_2)∈Ω;(2) n(x)=(𝒫n)(x) for all x∈Ω, where 𝒫 is the reflection operator defined as (𝒫u)(x_1,x_2):=u(-x_1,x_2).Note that (<ref>) can be viewed as the eigenvalue problem of the following periodic operator ℒ=-1/n^2Δ: H_b^1(Δ, Ω)⊂ L^2(Ω)→ L^2(Ω),withH_b^1(Δ, Ω):={u∈ H^1(Ω):Δ u∈ L^2(Ω), ∇ u(x)·n_x|_∂Ω=0}.By the Floquet-Bloch theory <cit.>, the spectrum of ℒ satisfies that σ(ℒ)=∪_p∈ [-π,π]σ(ℒ(p)), where ℒ(p) is the Floquet-Bloch transform of ℒ at the quasi-momentum p∈ [-π,π]. In particular, ℒ(p) can be analytically extended to p∈𝐂; then {ℒ(p)} forms an analytic family of self-adjoint operators <cit.>. It's known from the analytic perturbation theory that there exist analytic functions {μ_n(p)}_n=1^∞ such that σ(ℒ(p))={μ_n(p):n≥ 1} for p∈ [-π,π]. We call {μ_n(p)}_n=1^∞ the analytical labeling of the Floquet-Bloch eigenvalues.By the analytic perturbation theory, the dimension of the eigenspace associated with each μ_n(p) is constant for almost every p <cit.>. In this paper, we assume for ease of presentation that the following stronger condition holds. For each n≥ 1, the eigenspace associated with μ_n(p) is one-dimensional except for finitely many p∈ [-π,π]. With Assumption <ref>, we denote the Floquet-Bloch eigenspace corresponding to μ_n(p) (n≥ 1, p∈ [-π,π]) by span{v_n(x;p)}. 
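For later use, it is worth recording explicitly the eigenvalue problem solved by the pair (μ_n(p),v_n(x;p)); the display below is only a restatement of the definition of the Bloch operator ℒ(p) given in Section 2, with the quasi-periodicity and boundary conditions written out: -1/n^2(x)Δ v_n(x;p)=μ_n(p) v_n(x;p) for x∈Ω, together with v_n(x+e_1;p)=e^ipv_n(x;p) and ∇ v_n(x;p)·n_x=0 on ∂Ω.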
The possible exceptional p's in Assumption <ref> usually occur at the intersection of two graphs of μ_n(p)'s. Dirac points are among such intersection points. We assume that a Dirac point appears at p=0 and λ=λ_*>0. In particular, we assume that the following holds. There exist n_*,m_*∈𝐍, q_*∈ (0,π) and λ_*>0 such that (1) the dispersion curves of λ=μ_n_*(·) and λ=μ_m_*(·) intersect with the energy level λ=λ_* at p=-q_* and p=q_*, and at the Dirac point (p,λ)=(0,λ_*), i.e., λ_*=μ_n_*(0)=μ_m_*(0)=μ_n_*(-q_*)=μ_m_*(q_*); (2) the dispersion curves of μ_n_*(·) and μ_m_*(·) do not intersect with those of other μ_n(·)'s. More precisely, for any p∈ (-π,π] and n∈𝐍\{n_*,m_*}, μ_n(p)≠μ_n_*(p), μ_n(p)≠μ_m_*(p); for any p∈ (-π,π]\{0}, μ_n_*(p)≠μ_m_*(p); (3) μ_n_*^'(0)>0, μ_n_*^'(-q_*)<0. A band structure as described in Assumption <ref> and <ref> is depicted in Figure <ref>. The conditions in Assumption <ref> (1) break the so-called spectral no-fold condition <cit.>. Without this condition, a band gap cannot be opened under small perturbation (see Figure <ref>). The main focus of this paper is the bifurcation of the Dirac point in such a "no-gap" case. The condition in Assumption <ref> (2) is imposed to simplify the proof of Lemma <ref>. It is not essential for the validity of the main result Theorem <ref>. By Assumption <ref>, λ_* is not in the point spectrum of ℒ. Indeed, the absolute continuity conjecture states that the point spectrum of any periodic elliptic operator is empty <cit.>. Now we introduce perturbations to the system (<ref>). Consider a family of operators {ℒ_ϵ=-1/n_ϵ^2Δ} (|ϵ|≪ 1), where n_ϵ(x) satisfies the following properties. (1) The function ϵ↦ n_ϵ(x) is C^2 for each fixed x; n_0(x)=n(x); (2) n_ϵ(x+e_1)=n_ϵ(x), n_ϵ(x)=(𝒫n_ϵ)(x) for all |ϵ|≪ 1; (3) Let A(p):=-2/n∂(n_ϵ)/∂ϵ|_ϵ=0·ℒ(p). Then t_*:=∫_YA(0)v_m_*(x;0)·v_n_*(x;0)n^2(x)dx≠ 0, ∫_YA(0)v_m_*(x;0)·v_m_*(x;0)n^2(x)dx=0, ∫_YA(0)v_n_*(x;0)·v_n_*(x;0)n^2(x)dx =0. Assumption <ref> (3) can be relaxed to the following one: 2|t_*|>|∫_YA(0)v_n_*(x;0)·v_n_*(x;0)n^2(x)dx +∫_YA(0)v_m_*(x;0)·v_m_*(x;0)n^2(x)dx|, without affecting the main result of this paper. Indeed, (<ref>) implies that σ(ℒ_ϵ) and σ(ℒ_-ϵ) exhibit a common "local" band gap near λ=λ_* and p=0 for 0<|ϵ|≪ 1 (see the proof of Theorem <ref>), which is essential in our analysis. Nonetheless, we do not use Condition (<ref>) for ease of presentation. We are concerned with the bifurcation of the Dirac point in the following joint structure{ ℒ^⋆_ϵ u-λ u=0, x∈Ω ,∇ u (x)·n_x=0 , x∈∂Ω , .where (ℒ^⋆_ϵ u)(x_1,x_2):= { (ℒ_ϵu)(x_1,x_2), x_1>0, (ℒ_-ϵu)(x_1,x_2), x_1<0. . Let u∈ L^2_loc(Ω) solve (<ref>) with λ∈𝐂. Then u is called a resonant mode if Im(λ)<0 and u_L^2(Ω)=∞. Let ℐ_ϵ:={λ∈𝐂:|λ-λ_*|<c_0|t_*|ϵ}, where c_0 is any positive number such that c_0<1. Our main result is stated below: Under Assumptions <ref>, <ref>, <ref> and <ref>, there exists ϵ_0>0 such that for any |ϵ|<ϵ_0, (<ref>) has a solution u^⋆ with λ^⋆∈ℐ_ϵ and Im(λ^⋆)≤ 0. In particular, when Im(λ^⋆)=0, u^⋆ is an interface mode and λ^⋆ an embedded eigenvalue. When Im(λ^⋆)<0, u^⋆ is a resonant mode. A criterion for the function u^⋆ in Theorem <ref> to be a resonant mode is given in Proposition <ref>. Suppose ϕ^⋆:=(∂ u^⋆/∂ x_1)|_Γ, where Γ:=Ω∩ ({0}×𝐑). Then u^⋆ is a resonant mode if and only if either ⟨ϕ^⋆,u_𝔫_*,ϵ(· ;q_+,ϵ(λ^⋆))⟩≠ 0 or ⟨ϕ^⋆,u_𝔫_*,ϵ(· ;q_-,ϵ(λ^⋆))⟩≠ 0, where u_𝔫_*,ϵ(· ;q_±,ϵ(λ^⋆)) is the trace of the Bloch mode at energy level λ^⋆ with quasi-momentum q_±,ϵ on Γ.
The details are given in Section 4.§.§ NotationsHere we list notations that are used in the paper.§.§.§ Geometries Upper/lower half complex plane 𝐂_+={z ∈𝐂: Imz >0}, 𝐂_-={z ∈𝐂: Imz <0};Ω: the domain of the waveguide (introduced in Section 1.1);Ω^right:=Ω∩ (𝐑^+×𝐑), Ω^left:=Ω∩ (𝐑^-×𝐑);Interface Γ:=Ω∩ ({0}×𝐑);Γ^right:=∂ (Ω^right), Γ^left:=∂ (Ω^left);Primitive cell Y:=Ω∩ ((0,1)×𝐑). §.§.§ Function spacesL^2(Ω):={u(x):u_L^2(Ω)<∞}, where ·_L^2(Ω) is induced by the inner product (u,v)_L^2(Ω):=∫_Ωu·v;H^m(Ω):={u(x):∂_α u∈ L^2(Ω),|α|≤ m};L^2(Y):={u(x):u_L^2(Y)<∞}, where ·_L^2(Y) is induced by the inner product (u,v)_L^2(Y):=∫_Yu·v;L^2(Y;n_ϵ(x)):={u(x):u_L^2(Y;n_ϵ(x))<∞}, where ·_L^2(Y;n_ϵ(x)) is induced by the inner product (u,v)_L^2(Y;n_ϵ(x)):=∫_Yn^2_ϵ(x)u(x)·v(x) (the refractive index n_ϵ is introduced in Assumption <ref>);H^m(Y):={u(x):∂_α u ∈ L^2(Y),|α|≤ m};H_b^1(Δ, Ω):={u∈ H^1(Ω):Δ u∈ L^2(Ω), ∇ u(x)·n_x|_∂Ω=0};L_p^2(Ω):={u∈ L_loc^2(Ω):u(x+e_1)=e^ipu(x)} (p∈𝐂), which is equipped with L^2(Y)-norm;H^m_p(Ω):={u∈ H_loc^m(Ω):(∂_α u)(x+e_1)=e^ip(∂_α u)(x), |α|≤ m} (p∈𝐂), which is equipped with H^m(Y)-norm;H^m_p,b(Ω):={u∈ H_p^m(Ω):∇ u(x)·n_x|_∂Ω=0};H_p,b^1(Δ, Ω):={u∈ H^1_p,b(Ω):Δ u∈ L^2_loc(Ω)};H^1/2(Γ):={u=U|_Γ:U∈ H^1/2(Γ^right)}, where H^1/2(Γ^right) is defined in the standard way;H̃^-1/2(Γ):={u=U|_Γ:U∈ H^-1/2(Γ^right) and supp (U)⊂Γ}.§.§ Operators and others Equivalence ∼ between two functions: u∼ v if and only if ∃τ∈𝐂 such that |τ|=1 and u=τ· v; Dual product ⟨·,·⟩ (between H̃^-1/2(Γ) and H^1/2(Γ)): ⟨φ,ϕ⟩=∫_Γφ·ϕ, for φ∈H̃^-1/2(Γ), ϕ∈ H^1/2(Γ);Reflection operator 𝒫:u(x_1,x_2)↦ u(-x_1,x_2);Trace operator Tr:H^1(Y)→ H^1/2(Γ), u↦ u|_Γ;Extension operator ℳ:=Tr^*.§ PRELIMINARIES§.§ Floquet-Bloch theory and band structure near the Dirac pointsIn this section, we briefly recall the Floquet-Bloch theory, which is used to characterize the spectrum of the periodic operators {ℒ_ϵ} introduced in Section 1.1. Here we only consider the operator ℒ=ℒ_0. The discussion of ℒ_ϵ (ϵ≠ 0) is similar.To study the spectrum σ(ℒ), we consider a family of operators ℒ(p) (p∈ [-π, π]), which is defined asℒ(p):H_p,b^1(Δ,Ω)⊂ L^2_p(Ω)→ L^2_p(Ω),ϕ→ -1/n^2Δ.Then the Floquet-Bloch theory indicates thatσ(ℒ)=∪_p∈ [-π,π]σ(ℒ(p)).For each p∈ [-π,π], the spectral theory for self-adjoint operators (cf. <cit.>) states that σ(ℒ(p)) consists of a discrete set of real eigenvalues0≤λ_1(p)≤λ_2(p)≤⋯≤λ_n(p)≤⋯.Since {λ_n(p)} are labeled in the ascending order, we call it the ascending labeling of the Floquet-Bloch eigenvalues.It is clear that∀ p∈ [-π,π],{μ_n(p)}_n=1^∞={λ_n(p)}_n=1^∞.Moreover each λ_n(p) is piecewise smooth for p∈ [-π,π] <cit.>. The graph of λ_n(p) is called the n-th dispersion curve.We denote by u_n(x;p) the L^2-normalized eigenfunction associated with the eigenvalue λ_n(p). u_n(x;p) is called the n-th Floquet-Bloch mode at quasi-momentum p.The Bloch modes {u_n(x;p)} forms a basis of L_p^2(Ω). By Assumption <ref> and <ref>, we have the following proposition:there exists an integer 𝔫_*>0 such that* λ_*=λ_𝔫_*(0)=λ_𝔫_*+1(0);* λ_𝔫_*(p)<λ_𝔫_*+1(p) for all p∈ [-π,π]\{0};* λ_𝔫_*-1(p)<λ_𝔫_*(p), λ_𝔫_*+1(p)<λ_𝔫_*+2(p) for all p∈ [-π,π]. The two dispersion curves λ_*=λ_𝔫_*(p) and λ_*=λ_𝔫_*+1(p) are depicted in Figure <ref>.§.§ The Green's function and representation of solutions for the unperturbed structureWe introduce Green's function G(x, y; λ) for the waveguide at λ=λ_*, which is defined to be the physical solution to the following equations:{ (1/n^2(x)Δ_x +λ_*)G(x,y;λ_*)=1/n^2(x)δ(x-y), x,y ∈Ω,∇_x G(x,y;λ_*) (x)·n_x=0 , x∈∂Ω . 
.Here δ(·) denotes the Dirac delta function. The existence and uniqueness of G(x,y;λ_*) can be established using the limiting absorption principle <cit.>.Moreover, the following three properties of G(x,y;λ_*) hold.First, G(x,y;λ_*) admits the following spectral representation by using the Floquet-Bloch transform:G(x,y;λ_*)= lim_η→ 0^+G(x,y;λ_* + i η)= 1/2πlim_η→ 0^+∫_0^2π∑_n≥ 1v_n(x;p)v_n(y;p)/λ_*+iη-μ_n(p)dp. Second (see Remark 8 in <cit.>), G(x, y;λ_* ) =1/2π∫_-π^π∑_n≠ n_*,m_*v_n(x;p)v_n(y;p)/λ_*-μ_n(p)dp +(-i/2v_n_*(x;0)v_n_*(y;0)/|μ_n_*^'(0)| -i/2v_m_*(x;0)v_m_*(y;0)/|μ_m_*^'(0)| -i/2v_n_*(x;-q_*)v_n_*(y;-q_*)/|μ_n_*^'(-q_*)| -i/2v_m_*(x;q_*)v_m_*(y;q_*)/|μ_m_*^'(q_*)|+1/2πp.v.∫_-π^πv_n_*(x;p)v_n_*(y;p)/λ_*-μ_n_*(p)dp +1/2πp.v.∫_-π^πv_m_*(x;p)v_m_*(y;p)/λ_*-μ_m_*(p)dp ),where p.v. means that the integral is understood in the sense of Cauchy's principal value. Last, G(x,y;λ_*) has the following decomposition for fixed y (see Remark 9 in <cit.>):G(x,y;λ_*)=G_0^+(x,y;λ_*)-i·v_n_*(x;0)v_n_*(y;0)/|μ_n_*^'(0)| -i·v_m_*(x;q_*)v_m_*(y;q_*)/|μ_m_*^'(q_*)|,andG(x,y;λ_*)=G_0^-(x,y;λ_*)-i·v_m_*(x;0)v_m_*(y;0)/|μ_m_*^'(0)| -i·v_n_*(x;-q_*)v_n_*(y;-q_*)/|μ_n_*^'(-q_*)|,where G_0^+(x,y;λ) (G_0^-(x,y;λ)) decays exponentially as x_1→ +∞ (for x_1→ -∞). Physically, v_n_*(x;0) and v_m_*(x;q_*) in (<ref>) are right-propagating modes with frequency w=√(λ_*) in the waveguide. Analogously, v_m_*(x;0) and v_n_*(x;-q_*) in (<ref>) are left-propagating modes. The following properties hold for these propagating modes. Their proofs are similar to those of Lemma 2.2 and 2.3 in <cit.>. The following relations holdv_n_*(x;0)∼𝒫v_m_*(x;0), v_n_*(x;0)∼v_m_*(x;0), v_n_*(x;-q_*)∼𝒫v_m_*(x;q_*), v_n_*(x;-q_*)∼v_m_*(x;q_*).For each u,v∈ H^1_loc(Ω), we define the sesquilinear form 𝔮(u,v):=∫_Γ∂ u/∂ x_1vdx_2. Then 𝔮(v_n_*(x;p),v_n_*(x;p))=i/2μ_n_*^'(p),p=0 or -q_*, 𝔮(v_m_*(x;p),v_m_*(x;p))=i/2μ_m_*^'(p),p=0 or q_*,and𝔮(v_n_*(x;0),v_n_*(x;-q_*))=𝔮(v_n_*(x;0),v_m_*(x;q_*))=0, 𝔮(v_m_*(x;0),v_m_*(x;q_*))=𝔮(v_m_*(x;0),v_n_*(x;-q_*))=0. We next consider the Green's function G(x,y;λ_*).For x,y∈Ω and x≠ y,G(x,y;λ_*)=G(y,x;λ_*). On the other hand, when y∈Γ, there holdsG(x,y;λ_*)=(𝒫G)(x,y;λ_*),∀ x∈Ω.Similar to the proof of Lemma 2.4 in <cit.>. Finally, as an application of Green's function, we present a representation formula for the solution to the Helmholtz equation in the semi-infinite domain Ω^right. Its proof is similar to the one of Proposition 2.5 in <cit.>. Suppose u∈ H^1_loc(Ω^right) satisfies that(1/n^2Δ+λ_*)u(x)=0Ω^right, ∇ u·n_x=0(x∈∂Ω^right\Γ),and assume that there exists a,b∈𝐂 such thatu(x)-a· v_n_*(x;0)-b· v_m_*(x;q_*) decays exponentially as x_1→ +∞,thenu(x)=2∫_ΓG(x,y;λ_*)∂ u/∂ x_1(0^+,y_2)dy_2, x∈Ω^right . 
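The factors -i/2 multiplying the propagating modes in the decompositions of G(x,y;λ_*) above can be traced back to a standard distributional identity; as a sketch of the relevant step (using that, by Assumption <ref>, the roots p_j of μ_n(p)=λ_* are simple, so that μ_n^'(p_j)≠ 0), the Sokhotski-Plemelj formula gives lim_η→ 0^+1/λ_*+iη-μ_n(p) =p.v.1/λ_*-μ_n(p) -iπδ(λ_*-μ_n(p)) =p.v.1/λ_*-μ_n(p) -iπ∑_jδ(p-p_j)/|μ_n^'(p_j)|, and integrating this against 1/2πv_n(x;p)v_n(y;p) over [-π,π] produces exactly the principal-value integrals and the -i/2·v_n(x;p_j)v_n(y;p_j)/|μ_n^'(p_j)| terms displayed above.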
§ ASYMPTOTIC EXPANSIONS FOR THE PERTURBED STRUCTURE §.§ Asymptotic expansions of Floquet-Bloch eigenvalues and eigenfunctions The main result of this section is the following theorem: Under Assumption <ref>, <ref>, <ref> and <ref>, the following asymptotic expansions hold uniformly for |ϵ|≪ 1, |p|≪ 1:λ_𝔫_*,ϵ(p)=λ_*-√(α_n_*^2 p^2+|t_*|^2ϵ^2)(1+𝒪(|p|+|ϵ|)), λ_𝔫_*+1,ϵ(p)=λ_*+√(α_n_*^2 p^2+|t_*|^2ϵ^2)(1+𝒪(|p|+|ϵ|)),where t_* is defined in Assumption <ref> and α_n_*:=μ_n_*'(0). Moreover, for ϵ≠ 0, the Floquet-Bloch eigenfunctions admit the following asymptotic expansions in H^1(Y):u_𝔫_*,ϵ(x;p)= { t_*·ϵ/α_n_*p+√(α_n_*^2 p^2+|t_*|^2ϵ^2)v_n_*(x;0)+v_m_*(x;0) +𝒪(|p|+|ϵ|), p>0, t_*·ϵ/-α_n_*p+√(α_n_*^2 p^2+|t_*|^2ϵ^2)(𝒫v_n_*)(x;0)+(𝒫v_m_*)(x;0) +𝒪(|p|+|ϵ|), p<0, .andu_𝔫_*+1,ϵ(x;p)= { v_n_*(x;0)-t_*·ϵ/α_n_*p+√(α_n_*^2 p^2+|t_*|^2ϵ^2)v_m_*(x;0) +𝒪(|p|+|ϵ|), p>0, (𝒫v_n_*)(x;0)- t_*·ϵ/-α_n_*p+√(α_n_*^2 p^2+|t_*|^2ϵ^2)(𝒫v_m_*)(x;0) +𝒪(|p|+|ϵ|), p<0, .where v_n_*(x;0) (v_m_*(x;0), resp.) is the right- (left-, resp.) propagating mode at the Dirac point (p,λ)=(0,λ_*). By Theorem <ref>, a “local” band gap (λ_*-c_0 |t_*ϵ|,λ_*+c_0 |t_*ϵ|) is opened near the Dirac point (0,λ_*), for both operators ℒ_ϵ and ℒ_-ϵ if t_*≠ 0. See Figure <ref>. In the next theorem, we present asymptotic expansions for the Floquet eigenvalues and eigenfunctions near p=± q_*. The proof follows from the standard perturbation theory for simple eigenvalue problems of self-adjoint operators (see Chapter VII in <cit.>) and hence is omitted here. Under Assumption <ref>, <ref> and <ref>, the following asymptotic expansions hold near p=q_*:λ_𝔫_*,ϵ(p)=λ_*+𝒪(|p-q_*|+|ϵ|), u_𝔫_*,ϵ(x;p)=u_𝔫_*(x;q_*)+𝒪(|p-q_*|+|ϵ|) in H^1(Y). Moreover,λ_𝔫_*,ϵ^'(p)-λ_𝔫_*^'(q_*)=𝒪(|p-q_*|+|ϵ|),(∂_pu_𝔫_*,ϵ)(x;p)- (∂_pu_𝔫_*)(x;q_*)_H^1(Y)=𝒪(|p-q_*|+|ϵ|).Similarly, near p=-q_*:λ_𝔫_*,ϵ(p)=λ_*+𝒪(|p+q_*|+|ϵ|), u_𝔫_*,ϵ(x;p)=u_𝔫_*(x;-q_*)+𝒪(|p+q_*|+|ϵ|) in H^1(Y). Moreover,λ_𝔫_*,ϵ^'(p)-λ_𝔫_*^'(-q_*)=𝒪(|p+q_*|+|ϵ|),(∂_pu_𝔫_*,ϵ)(x;p)- (∂_pu_𝔫_*)(x;-q_*)_H^1(Y)=𝒪(|p+q_*|+|ϵ|). By the analytic perturbation theory, the Bloch eigenvalue λ_𝔫_*,ϵ(p) and eigenfunction u_𝔫_*,ϵ(x;p) are analytic in p for |p-q_*|≪ 1 and p∈𝐂. A consequence of Theorem <ref> is that the dispersion curve λ=λ_𝔫_*+1,ϵ(p) intersects the level λ=λ_* for |ϵ|≪ 1; thus, a “global” band gap is not opened when we apply a small perturbation to the system (<ref>). See Figure <ref>. §.§ Proof of Theorem <ref> We shall apply perturbation theory to solve the eigenvalue problem of ℒ_ϵ(p). As a preparation, we write ℒ_ϵ(p)=e^ip x_1∘ℒ̃_ϵ(p)∘ e^-ip x_1,whereℒ̃_ϵ(p): H_0,b^1(Δ,Ω)⊂ L^2_0(Ω)→ L^2_0(Ω),u↦ -1/n_ϵ^2(Δ+2ip·∂/∂ x_1-p^2)u .Then ℒ̃_ϵ(p) is unitarily equivalent to ℒ_ϵ(p) when p∈ (-π,π]. Therefore it is sufficient to find the eigenpairs of ℒ̃_ϵ(p) to obtain those of ℒ_ϵ(p). For ϵ=0, we have ℒ̃(p)ṽ_n(x;p)=μ_n (p)ṽ_n(x;p), where ṽ_n(x;p)=e^-ip x_1v_n(x;p)∈ H_0,b^1(Δ,Ω). Let B̃(p):=-2i/n^2∂/∂ x_1+2p/n^2. Then the following identities hold:(B̃(0)ṽ_n_*(x;0), ṽ_n_*(x;0))_L^2(Y;n(x))=α_n_*, (B̃(0)ṽ_m_*(x;0), ṽ_m_*(x;0))_L^2(Y;n(x))=-α_n_*,and(B̃(0)ṽ_n_*(x;0), ṽ_m_*(x;0))_L^2(Y;n(x)) = (B̃(0)ṽ_m_*(x;0), ṽ_n_*(x;0))_L^2(Y;n(x)) =0. We only prove (<ref>). The proofs of (<ref>) and (<ref>) are similar.
By letting n=n_* in (<ref>), and taking inner product with ṽ_n_*, we haveμ_n_*(p)=(ℒ̃(p)ṽ_n_*(x;p),ṽ_n_*(x;p))_L^2(Y;n(x)).Differentiating (<ref>) with respect to p and evaluate at p=0 yieldsμ_n_*^'(0) =(B̃(0)ṽ_n_*(x;0),ṽ_n_*(x;0))_L^2(Y;n(x)) +(ℒ̃(0)(∂_pṽ_n_*)(x;0),ṽ_n_*(x;0)) _L^2(Y;n(x)) +(ℒ̃(0)ṽ_n_*(x;0),(∂_pṽ_n_*)(x;0))_L^2(Y;n(x)).Since ℒ̃(0) is self-adjoint, we haveμ_n_*^'(0)= (B̃(0)ṽ_n_*(x;0),ṽ_n_*(x;0))_L^2(Y;n(x)) +λ_* ((∂_pṽ_n_*(x;0),ṽ_n_*(x;0))_L^2(Y;n(x)) +(ṽ_n_*(x;0),∂_pṽ_n_*(x;0))_L^2(Y;n(x))) = (B̃(0)ṽ_n_*(x;0),ṽ_n_*(x;0))_L^2(Y;n(x)) +λ_*d/dp|_p=0(ṽ_n_*(x;p),ṽ_n_*(x;p))_L^2(Y;n(x))The normalization condition (ṽ_n_*(x;p),ṽ_n_*(x;p))_L^2(Y;n(x))= 1 implies thatα_n_*=μ_n_*^'(0)=(B̃(0)ṽ_n_*(x;0), ṽ_n_*(x;0))_L^2(Y;n(x)).This completes the proof of (<ref>). Now we are ready to prove Theorem <ref>.We derive (<ref>), (<ref>) and (<ref>) by solving the following eigenvalue problem for |ϵ|,|p|,|λ_ϵ-λ_*|≪ 1ℒ̃_ϵ(p)ũ_ϵ=λ_ϵũ_ϵ.Note that for each u∈ H^1_0,b(Δ,Ω), the following estimate holds in L^2_0(Ω):ℒ̃_ϵ(p)u-ℒ̃(p)u =n^2_ϵ-n^2/n^2_ϵℒ̃(p)u≲n_ϵ-n_L^∞(Ω)·ℒ̃(p)u.Theorem 2.24 in Chapter IV of <cit.> indicates that ℒ̃_ϵ(p) converges to ℒ̃(p) in the generalized sense, whence the solvability of (<ref>) for |ϵ|≪ 1 follows.On the other hand, for each p near p=0, ℒ̃(p) has two isolated eigenvalues λ_𝔫_*(p) and λ_𝔫_*+1(p) near λ=λ_*. Therefore ℒ̃_ϵ(p) has two isolated eigenvalues near λ=λ_*, which are bifurcated from λ_𝔫_*(p) and λ_𝔫_*+1(p) respectively.We now solve the eigenvalue problem (<ref>) by a perturbation argument. For |p|≪ 1, we writeλ_ϵ=λ_*+λ^(1), ũ_ϵ=ũ^(0)_ϵ+ũ^(1)_ϵwith|λ^(1)|≪ 1 ,ũ^(0)_ϵ=a·ṽ_n_*(x;0)+b·ṽ_m_*(x;0)∈ (ℒ̃(0)-λ_*),ũ^(1)_ϵ∈ ( (ℒ̃(0)-λ_*))^⊥,where ( (ℒ̃(0)-λ_*))^⊥ denotes the orthogonal complement of (ℒ̃(0)-λ_*) in L_0^2(Ω). Note that the following expansion holds in ℬ(H_0,b^2(Ω),L_0^2(Ω)):ℒ̃_ϵ(p)=ℒ̃(0)+ϵ·Ã(0)+p·B̃(0) +𝒪(p^2+ϵ^2),where Ã(p)=-2/n∂(n_ϵ)/∂ϵ|_ϵ=0·ℒ̃(p) and B̃(p)=-2i/n^2∂/∂ x_1+2p/n^2. By plugging (<ref>) and (<ref>) into (<ref>), we get (ℒ̃(0)-λ_*)ũ^(1)_ϵ = (λ^(1)-ϵ·Ã(0)-p·B̃(0) +𝒪(p^2+ϵ^2))ũ^(0)_ϵ+(λ^(1)-ϵ·Ã(0)-p·B̃(0) +𝒪(p^2+ϵ^2))ũ^(1)_ϵ. We next solve (<ref>) by following a Lyapunov-Schmidt reduction argument. To do so, we introduce the orthogonal projection Q_⊥:L_0^2(Ω)→ ( (ℒ̃(0)-λ_*))^⊥. By applying Q_⊥ to (<ref>), we obtain(ℒ̃(0)-λ_*)ũ^(1)_ϵ = Q_⊥(λ^(1)-ϵ·Ã(0)-p·B̃(0) +𝒪(p^2+ϵ^2))ũ^(0)_ϵ+Q_⊥(λ^(1)-ϵ·Ã(0)-p·B̃(0) +𝒪(p^2+ϵ^2))ũ^(1)_ϵ. The above equation can be rewritten as(I-T)ũ^(1)_ϵ =Tũ^(0)_ϵ,whereT=T(ϵ,p,λ^(1)):=(ℒ̃(0)-λ_*)^-1Q_⊥(λ^(1)-ϵ·Ã(0)-p·B̃(0) +𝒪(p^2+ϵ^2)).For ϵ,p,λ^(1) sufficiently small, (I-T)^-1∈ℬ(H_0,b^2(Ω)). It holds thatũ^(1)_ϵ =(I-T)^-1Tũ^(0)_ϵ =a· (I-T)^-1Tṽ_n_*(x;0)+b· (I-T)^-1Tṽ_m_*(x;0).Note that the map (ϵ,p,λ^(1))↦ (I-T)^-1Tṽ_n(x;0) (n=n_*,m_*) is smooth from a neighborhood of (0,0,0) to H_0,b^2(Ω) with the following esitimate(I-T)^-1Tṽ_n(x;0)≲ |ϵ|+|p|+|λ^(1)|.By taking L^2(Y;n(x))-inner product with ṽ_n_*(x;0) and ṽ_m_*(x;0) respectively on both sides of (<ref>), we obtain the following equationsℳ(ϵ,p,λ^(1)) [ a; b ] =0,where the components of ℳ(ϵ,p,λ^(1)) are given byM_11=λ^(1)-ϵ(Ã(0)ṽ_n_*(x;0),ṽ_n_*(x;0))_L^2(Y;n(x)) -p (B̃(0)ṽ_n_*(x;0),ṽ_n_*(x;0))_L^2(Y;n(x)) +𝒪((λ^(1))^2+p^2+ϵ^2),M_22=λ^(1)-ϵ(Ã(0)ṽ_m_*(x;0),ṽ_m_*(x;0))_L^2(Y;n(x)) -p (B̃(0)ṽ_m_*(x;0),ṽ_m_*(x;0))_L^2(Y;n(x)) +𝒪((λ^(1))^2+p^2+ϵ^2),M_12=-ϵ(Ã(0)ṽ_m_*(x;0),ṽ_n_*(x;0))_L^2(Y;n(x)) -p (B̃(0)ṽ_m_*(x;0),ṽ_n_*(x;0))_L^2(Y;n(x)) +𝒪((λ^(1))^2+p^2+ϵ^2),M_21=-ϵ(Ã(0)ṽ_n_*(x;0),ṽ_m_*(x;0))_L^2(Y;n(x)) -p (B̃(0)ṽ_n_*(x;0),ṽ_m_*(x;0))_L^2(Y;n(x)) +𝒪((λ^(1))^2+p^2+ϵ^2). 
We now simplify the expression of ℳ(ϵ,p,λ^(1)).By (<ref>) and (<ref>), for i,j∈{n_*,m_*},(ℒ_ϵ(0)v_i(x;0),v_j(x;0))_L^2(Y;n(x))=(ℒ̃_ϵ(0)ṽ_i(x;0),ṽ_j(x;0))_L^2(Y;n(x)).Taking derivative with respect to ϵ yields(A(0)v_i(x;0),v_j(x;0))_L^2(Y;n(x))=(Ã(0)ṽ_i(x;0),ṽ_j(x;0))_L^2(Y;n(x)),where A(0) and Ã(0) are introduced in Assumption <ref> and equation (<ref>), respectively. Then Assumption <ref> and Lemma <ref> yield:ℳ(ϵ,p,λ^(1))= [ λ^(1)-α_n_*p t_*ϵ; t_*ϵ λ^(1)+α_n_*p ] +𝒪((λ^(1))^2+p^2+ϵ^2). Thus, for each p, λ_ϵ=λ_*+λ^(1) solves the eigenvalue problem (<ref>) if and only if λ^(1) solves F(ϵ,p,λ^(1)):=ℳ(ϵ,p,λ^(1)) =(λ^(1))^2-α_n_*^2 p^2-|t_*|ϵ^2+ρ(ϵ,p,λ^(1))=0,whereρ(ϵ,p,λ^(1))=𝒪(|ϵ|^3+|p|^3+|λ^(1)|^3). We then solve λ^(1)=λ^(1)(ϵ,p) from (<ref>) for each p and ϵ. First, note that ±√(α_n_*^2 p^2+|t_*|ϵ^2) give two branches of solutions if we drop the remainder ρ from (<ref>). Thus, we seek a solution to (<ref>) in the following formλ^(1)(ϵ,p)=x·√(α_n_*^2 p^2+|t_*|ϵ^2)with |x| close to 1. By substituting (<ref>) into (<ref>), we obtain the following equation of x (with p and ϵ being regarded as parameters)H(x;ϵ,p) :=1/α_n_*^2 p^2+|t_*|ϵ^2F(ϵ,p,x·√(α_n_*^2 p^2+|t_*|ϵ^2)) =x^2-1+ρ_1(x;ϵ,p)=0,where ρ_1(x;ϵ,p):=ρ(ϵ,p,x·√(α_n_*^2 p^2+|t_*|ϵ^2))/α_n_*^2 p^2+|t_*|ϵ^2.Now we consider the solution to (<ref>) with |x-1|≪ 1. Note that, by (<ref>), the following estimate holds uniformly in x when |x-1|≪ 1ρ_1(x;ϵ,p)=𝒪(|ϵ|+|p|).We conclude that there exists a unique solution x_s(ϵ,p) to (<ref>) with x_s(ϵ,p)=1+𝒪(|ϵ|+|p|) for |ϵ|,|p|≪ 1. It follows from (<ref>) that there exists a unique solution λ^(1)_+(ϵ,p) to the equation (<ref>) near √(α_n_*^2 p^2+|t_*|ϵ^2). Moreover,λ^(1)_+(ϵ,p)=x_s(ϵ,p)·√(α_n_*^2 p^2+|t_*|ϵ^2) =√(α_n_*^2 p^2+|t_*|ϵ^2)·(1+𝒪(|ϵ|+|p|)).Similarly, the other solution to (<ref>) is given byλ^(1)_-(ϵ,p)=-√(α_n_*^2 p^2+|t_*|ϵ^2)·(1+𝒪(|ϵ|+|p|)).Note that λ_𝔫_*+1,ϵ(p)=λ_*+λ^(1)_+(ϵ,p) and λ_𝔫_*,ϵ(p)=λ_*+λ^(1)_-(ϵ,p). This proves (<ref>). Finally, by substituting λ^(1)=λ^(1)_±(p) inside (<ref>), we obtain the following two solutions (a_+,b_+)^T and (a_-,b_-)^T for p>0:[ a_+; b_+ ] = [1; -t_*·ϵ/α_n_*p+√(α_n_*^2 p^2+|t_*|^2ϵ^2)+𝒪(|ϵ|+|p|) ] ,[ a_-; b_- ] = [ t_*·ϵ/α_n_*p+√(α_n_*^2 p^2+|t_*|^2ϵ^2)+𝒪(|ϵ|+|p|); 1 ].Then (<ref>) and (<ref>) show that the two eigenfunctions ũ_𝔫_*,ϵ(x;p) and ũ_𝔫_*+1,ϵ(x;p) (corresponding to λ_𝔫_*,ϵ(p) and λ_𝔫_*+1,ϵ(p), respectively) that solve (<ref>) for p>0 are given byũ_𝔫_*,ϵ(x;p)=t_*·ϵ/α_n_*p+√(α_n_*^2 p^2+|t_*|^2ϵ^2)ṽ_n_*(x;0)+ṽ_m_*(x;0) +𝒪(|ϵ|+|p|), ũ_𝔫_*+1,ϵ(x;p)=ṽ_n_*(x;0)-t_*·ϵ/α_n_*p+√(α_n_*^2 p^2+|t_*|^2ϵ^2)ṽ_m_*(x;0) +𝒪(|ϵ|+|p|).Thus, by the relation u_n,ϵ(x;p)=e^ipx_1ũ_n,ϵ(x;p) (n=𝔫_*,𝔫_*+1) and (<ref>), we conclude the proof of (<ref>) and (<ref>) for p>0. For p<0, the reflectional symmetry of the system implies the equivalence u_n,ϵ(x;p)∼ (𝒫u_n,ϵ)(x;-p) (similar to Lemma <ref>); this proves (<ref>) and (<ref>).§ BIFURCATION OF THE DIRAC POINT UNDER PERTURBATIONIn this section, we prove Theorem <ref> by constructing a solution u(x;λ^⋆) to (<ref>) with λ^⋆∈ℐ_ϵ. The construction is based on the following single-layer potential operator for the perturbed structure:φ∈H̃^-1/2(Γ) ↦ u(x;λ)=∫_Γ G_ϵ(x,y;λ) φ(y)dσ(y).Here the Green's function G_ϵ(x,y;λ) is defined as the unique physical solution to the following equation{ (1/n_ϵ^2(x)Δ_x +λ)G_ϵ(x,y;λ)=1/n_ϵ^2(x)δ(x-y), x,y ∈Ω,∇_x G_ϵ(x,y;λ) (x)·n_x=0 , x∈∂Ω . 
.By the limiting absorption principle as presented in Section 2.2, G_ϵ(x,y;λ) can be rewritten asG_ϵ(x,y;λ)= lim_η→ 0^+G_ϵ(x,y;λ_* + i η)= 1/2πlim_η→ 0^+∫_-π^π∑_n≥ 1v_n,ϵ(x;p)v_n,ϵ(y;p)/λ+iη-μ_n,ϵ(p)dp.Here (μ_n,ϵ(p),v_n,ϵ(x;p)) denotes the Floquet-Bloch eigenpair of the perturbed operator ℒ_ϵ(p). However, as indicated in (<ref>), G_ϵ(x,y;λ) is not analytic for λ∈ℐ_ϵ (it has a nonzero jump when λ traverses the real line); thus the single layer potential operator (<ref>) is non-analytic, and the Gohberg-Sigal theory (see for instance <cit.>) for operator-valued analytical functions cannot be applied directly. To overcome this issue, we extend this operator analytically using an appropriate contour integral; see Section 4.1. The continued operator is denoted as 𝔾̃_ϵ(λ) and defined explicitly in (<ref>). The detailed properties of 𝔾̃_ϵ(λ) are summarized in Proposition <ref> and <ref>. In Section 4.2, we use 𝔾̃_ϵ(λ) to construct a solution u(x;λ) to (<ref>) with λ∈ℐ_ϵ; see (<ref>). In particular, we prove that u(x;λ) indeed solves (<ref>) if and only if the boundary integral equation (<ref>) has a solution; see Proposition <ref>. In Sections 4.3 and 4.4, we use the Gohberg-Sigal theory to solve (<ref>) and conclude the proof of Theorem <ref>. §.§ Analytic continuation of the single-layer potential operatorWe first present a lemma that is crucial for the analytic continuation of the single-layer potential operator.Define the following complex domain (see Figure <ref>):D_ϵ,ν:=B(q_*,|ϵ|^1/3)∪ B(-q_*,|ϵ|^1/3)∪{p∈𝐂:-(1+ν)π<Re(p)< (1+ν)π, |Im(p)|<ν (ν>0)}.There exists ϵ_0>0 such that for any ϵ with 0<|ϵ|<ϵ_0, ∃ ν=ν(ϵ)>0 such that the following statements hold:* p↦λ_𝔫_*,ϵ(p) and p↦ u_𝔫_*,ϵ(x;p) are analytical in D_ϵ,ν;* for each λ∈ℐ_ϵ, the equation λ_𝔫_*,ϵ(p)=λ has exactly two roots q_+,ϵ(λ) and q_-,ϵ(λ)=-q_+(λ,ϵ) in D_ϵ,ν with |q_±,ϵ(λ)∓ q_*|=𝒪(|ϵ|). Moreover,Im(q_+,ϵ(λ))>0,Im(q_-,ϵ(λ))<0 when Im(λ)>0,Im(q_+,ϵ(λ))<0,Im(q_-,ϵ(λ))>0 when Im(λ)<0; * Definedℙ_n,ϵ(p):=(·,u_n,ϵ(x;p))_L^2(Y;n_ϵ(x))u_n,ϵ(x;p) and ℚ_n,ϵ(p):=1-ℙ_n,ϵ(p). Then p↦ℙ_𝔫_*,ϵ(p) is analytical in D_ϵ,ν. Moreover, p↦ (ℒ_ϵ(p)ℚ_𝔫_*,ϵ(p)-λ)^-1∈ℬ((H^1_p,b(Ω))^*,H^1_p,b(Ω)) is analytical in {p∈𝐂:-(1+ν)π<Re(p)< (1+ν)π, |Im(p)|<ν} for any λ∈ℐ_ϵ.See Appendix A. Based on Lemma <ref>, we construct the analytic continuation of the single-layer operator (<ref>) for λ∈ℐ_ϵ. Let C_ϵ (See Figure <ref>(a)) be the complex contour defined asC_ϵ:=[-π,-q_*-|ϵ|^1/3]∪[-q_*+|ϵ|^1/3,q_*-|ϵ|^1/3]∪ [q_*+|ϵ|^1/3,π] ∪{-q_*+|ϵ|^1/3e^iθ:π≥θ≥ 0}∪{q_*+|ϵ|^1/3e^iθ:π≤θ≤ 2π}.We define 𝔾̃_ϵ(λ)∈ℬ(H̃^-1/2(Γ),H^1/2(Γ)) by𝔾̃_ϵ(λ)φ:= 1/2π∫_C_ϵu_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(· ;p)⟩/λ-λ_𝔫_*,ϵ(p)dp+ 1/2π∫_-π^π∑_n≠𝔫_*u_n,ϵ(x;p)⟨φ(·),u_n,ϵ(· ;p)⟩/λ-λ_n,ϵ(p)dp.Here λ_n,ϵ(p) denotes the Floquet-Bloch eigenvalue of the perturbed structure, and u_n,ϵ(x;p) is the corresponding normalized eigenfunction. By Lemma <ref>, we see that λ≠λ_𝔫_*,ϵ(p) for λ∈ℐ_ϵ and p∈ C_ϵ. Thus, it's clear that: λ↦𝔾̃_ϵ(λ)∈ℬ(H̃^-1/2(Γ),H^1/2(Γ)) is an operator-valued analytical function in ℐ_ϵ.Moreover, the operator 𝔾̃_ϵ(λ) extends the single layer potential operator (<ref>) in the sense that the following holds. For λ∈ℐ_ϵ∩𝐑, we have𝔾̃_ϵ(λ)φ =∫_ΓG_ϵ(x,y;λ)φ(y)dy_2 =1/2π∫_-π^π∑_n≠𝔫_*u_n,ϵ(x;p)⟨φ(·),u_n,ϵ(·;p)⟩/λ-λ_n,ϵ(p)dp +1/2πp.v.∫_-π^πu_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(·;p)⟩/λ-λ_𝔫_*,ϵ(p)dp- (i⟨φ(·),u_𝔫_*,ϵ(· ;q_+,ϵ(λ))⟩/2|λ_𝔫_*,ϵ^'(q_+,ϵ(λ))|u_𝔫_*,ϵ(x ;q_+,ϵ(λ)) + i⟨φ(·),u_𝔫_*,ϵ(· ;q_-,ϵ(λ))⟩/2|λ_𝔫_*,ϵ^'(q_-,ϵ(λ))|u_𝔫_*,ϵ(x ;q_-,ϵ(λ)) ),where G_ϵ(x,y;λ) is the Green function defined in (<ref>). 
We first note that G_ϵ(x,y;λ)=1/2π∫_-π^π∑_n≠𝔫_*u_n,ϵ(x;p)u_n,ϵ(y;p)/λ-λ_n,ϵ(p)dp +1/2πp.v.∫_-π^πu_𝔫_*,ϵ(x;p)u_𝔫_*,ϵ(y;p)/λ-λ_𝔫_*,ϵ(p)dp- (iu_𝔫_*,ϵ(x ;q_+,ϵ(λ))u_𝔫_*,ϵ(y ;q_+,ϵ(λ))/2|λ_𝔫_*,ϵ^'(q_+,ϵ(λ))| + iu_𝔫_*,ϵ(x ;q_-,ϵ(λ))u_𝔫_*,ϵ(x ;q_-,ϵ(λ))/2|λ_𝔫_*,ϵ^'(q_-,ϵ(λ))|),which is proved in Theorem 6 in <cit.>.By comparing (<ref>) with (<ref>), the proposition follows if we can prove the following identity for λ∈ℐ_ϵ∩𝐑:1/2π∫_C_ϵu_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(· ;p)⟩/λ-λ_𝔫_*,ϵ(p)dp=1/2πp.v.∫_-π^πu_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(·;p)⟩/λ-λ_𝔫_*,ϵ(p)dp- (i⟨φ(·),u_𝔫_*,ϵ(· ;q_+,ϵ(λ))⟩/2|λ_𝔫_*,ϵ^'(q_+,ϵ(λ))|u_𝔫_*,ϵ(x ;q_+,ϵ(λ)) + i⟨φ(·),u_𝔫_*,ϵ(· ;q_-,ϵ(λ))⟩/2|λ_𝔫_*,ϵ^'(q_-,ϵ(λ))|u_𝔫_*,ϵ(x ;q_-,ϵ(λ)) ).To this end, we introduce the following auxiliary contour (See Figure <ref>(a))C̃_τ,λ:= ([-π,q_-,ϵ(λ)-τ]∪[q_-,ϵ(λ)+τ,q_+,ϵ(λ)-τ]∪ [q_+,ϵ(λ)+τ,π] )∪{q_-,ϵ(λ)+τ e^iθ:π≥θ≥ 0}∪{q_+,ϵ(λ)+τ e^iθ:π≤θ≤ 2π}:=C̃_τ,λ^(0)∪C̃_τ,λ^(-)∪C̃_τ,λ^(+)with τ∈(0,|ϵ|^1/3).Since q_±,ϵ(λ) are real for λ∈𝐑,p↦u_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(· ;p)⟩/λ-λ_𝔫_*,ϵ(p) is analytical in the closed region bounded by C_ϵ and C̃_τ,λ. The Cauchy theorem indicates that∫_C_ϵu_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(· ;p)⟩/λ-λ_𝔫_*,ϵ(p)dp = ∫_C̃_τ,λu_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(· ;p)⟩/λ-λ_𝔫_*,ϵ(p)dp =∫_C̃_τ,λ^(0)∪C̃_τ,λ^(-)∪C̃_τ,λ^(+)u_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(· ;p)⟩/λ-λ_𝔫_*,ϵ(p)dp.Thus, to obtain (<ref>), it's sufficient to provelim_τ→ 0∫_C̃_τ,λ^(0)∪C̃_τ,λ^(-)∪C̃_τ,λ^(+)u_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(· ;p)⟩/λ-λ_𝔫_*,ϵ(p)dp =p.v.∫_-π^πu_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(·;p)⟩/λ-λ_𝔫_*,ϵ(p)dp- (iπ⟨φ(·),u_𝔫_*,ϵ(· ;q_+,ϵ(λ))⟩/|λ_𝔫_*,ϵ^'(q_+,ϵ(λ))|u_𝔫_*,ϵ(x ;q_+,ϵ(λ)) +iπ⟨φ(·),u_𝔫_*,ϵ(· ;q_-,ϵ(λ))⟩/|λ_𝔫_*,ϵ^'(q_-,ϵ(λ))|u_𝔫_*,ϵ(x ;q_-,ϵ(λ)) )Note that the definition of Cauchy principal value integral implies thatlim_τ→ 0∫_C̃_τ,λ^(0)u_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(· ;p)⟩/λ-λ_𝔫_*,ϵ(p)dp =p.v.∫_-π^πu_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(·;p)⟩/λ-λ_𝔫_*,ϵ(p)dp.On the other hand, since λ^'_𝔫_*,ϵ(q_+,ϵ(λ))≠ 0 by Theorem <ref>, q_+,ϵ(λ) is a simple pole of the map p↦u_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(· ;p)⟩/λ-λ_𝔫_*,ϵ(p) (marked as the black cross in Figure <ref>). Therefore, the residue formula giveslim_τ→ 0∫_C̃_τ,λ^(+)u_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(· ;p)⟩/λ-λ_𝔫_*,ϵ(p)dp =-iπ⟨φ(·),u_𝔫_*,ϵ(· ;q_+,ϵ(λ))⟩/|λ_𝔫_*,ϵ^'(q_+,ϵ(λ))|u_𝔫_*,ϵ(x ;q_+,ϵ(λ)).Similarly,lim_τ→ 0∫_C̃_τ,λ^(-)u_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(· ;p)⟩/λ-λ_𝔫_*,ϵ(p)dp=-iπ⟨φ(·),u_𝔫_*,ϵ(· ;q_-,ϵ(λ))⟩/|λ_𝔫_*,ϵ^'(q_-,ϵ(λ))|u_𝔫_*,ϵ(x ;q_-,ϵ(λ)).Combining (<ref>)-(<ref>), we obtain (<ref>) and complete the proof.For λ∈ℐ_ϵ and φ∈H̃^-1/2(Γ), we define u(x;λ)=(𝔾̃_ϵ(λ)φ)(x) (x∈Ω). Then (ℒ_ϵ-λ)u(x)=0, x∈Ω^+,andu(x;λ)=u(𝒫x;λ), (∂ u/∂ x_1)|_Γ=φ/2.Moreover, lim_x_1→ +∞|u(x;λ) --i⟨φ(·),u_𝔫_*,ϵ(· ;q_+,ϵ(λ))⟩/λ^'_𝔫_*,ϵ(q_+,ϵ(λ))u_𝔫_*,ϵ(x ;q_+,ϵ(λ))|=0,where the convergence rate is exponential. We first prove (<ref>). Note that the map λ↦ (ℒ_ϵ-λ)(𝔾̃_ϵ(λ)φ) is analytical in ℐ_ϵ for each fixed φ∈H̃^-1/2(Γ). It's sufficient to prove (<ref>) for all λ∈ℐ_ϵ∩𝐑. Indeed, for λ∈ℐ_ϵ∩𝐑, Proposition <ref> shows that(ℒ_ϵ-λ)(𝔾̃_ϵ(λ)φ)(x) =∫_Γ(ℒ_ϵ-λ)G_ϵ(x,y;λ)φ(y)dy_2 =1/n^2_ϵ(x)φ(x_2)δ(x_1),where δ(x_1) denotes the Dirac-delta function. Therefore (ℒ_ϵ-λ)(𝔾̃_ϵ(λ)φ)(x)=0 for x∈Ω^+. Second, (<ref>) and (<ref>) follow from the same strategy. Note that (<ref>) and (<ref>) for λ∈ℐ_ϵ∩𝐑 can be proved using the same approach as that of Lemma 2.4 and 2.5 in <cit.> respectively. 
By analytical extension, we obtain (<ref>) and (<ref>).Finally, we prove (<ref>).We introduce the following auxiliary contour (See Figure <ref>(b))C_ϵ,ν^evan:= ([-π,-q_*-|ϵ|^1/3cos(θ_ϵ) ]+iν) ∪([-q_*+|ϵ|^1/3cos(θ_ϵ),q_*-|ϵ|^1/3cos(θ_ϵ) ]+iν) ∪([q_*+|ϵ|^1/3cos(θ_ϵ),π]+iν) ∪{-q_*+|ϵ|^1/3 e^iθ:π-θ_ϵ≥θ≥θ_ϵ}∪{q_*+|ϵ|^1/3 e^iθ:π-θ_ϵ≥θ≥θ_ϵ}∪{-π+it:0≤ t≤ν}∪{π+it:ν≥ t≥ 0},where θ_ϵ:=sin^-1(ν/|ϵ|^1/3). By following the proof of Theorem 7 of <cit.>, we can show that the following function decays exponentially as x_1 → +∞:w^evan,+(x):=1/2π∫_C_ϵ,ν^evanu_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(· ;p)⟩/λ-λ_𝔫_*,ϵ(p)dp + 1/2π∫_0^2π∑_n≠𝔫_*u_n,ϵ(x;p)⟨φ(·),u_n,ϵ(· ;p)⟩/λ-λ_n,ϵ(p)dp.On the other hand, when we shift the integral of u_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(·;p)⟩/λ-λ_𝔫_*,ϵ(p) from C_ϵ to C_ϵ,ν^evan, it sweeps a simple pole of the integrand, which lies at p=q_+,ϵ(λ) (marked as the cross in Figure <ref>(b)). Thus, the residue formula gives that1/2π ∫_C_ϵu_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(· ;p)⟩/λ-λ_𝔫_*,ϵ(p)dp =1/2π∫_C_ϵ,ν^evanu_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(· ;p)⟩/λ-λ_𝔫_*,ϵ(p)dp -i⟨φ(·),u_𝔫_*,ϵ(· ;q_+,ϵ(λ))⟩/λ^'_𝔫_*,ϵ(q_+,ϵ(λ))u_𝔫_*,ϵ(x ;q_+,ϵ(λ)).By (<ref>), (<ref>) and (<ref>), we haveu(x;λ)=w^evan,+(x)-i⟨φ(·),u_𝔫_*,ϵ(· ;q_+,ϵ(λ))⟩/λ^'_𝔫_*,ϵ(q_+,ϵ(λ))u_𝔫_*,ϵ(x ;q_+,ϵ(λ)).This proves (<ref>) by noting that w^evan,+(x) decays exponentially as x_1→ +∞. Let u(x;λ) be defined as in Proposition <ref>. If Im(λ)>0, then u(x;λ) decays exponentially as |x_1|→ +∞. If Im(λ)=0, then u(x;λ) decays exponentially as |x_1|→ +∞ if and only if⟨φ(·),u_𝔫_*,ϵ(· ;q_±,ϵ(λ))⟩=0.If Im(λ)<0, then u(x;λ) decays exponentially as |x_1|→ +∞ if and only if⟨φ(·),u_𝔫_*,ϵ(· ;q_±,ϵ(λ))⟩=0.We only proof that if Im(λ)<0, then u(x;λ) decays exponentially as x_1→ +∞ if and only if ⟨φ(·),u_𝔫_*,ϵ(· ;q_+,ϵ(λ))⟩ = 0. The other cases can be treated similarly. By Proposition <ref>, the asymptotic limit of u(x;λ) as x_1→ +∞ is given by-i⟨φ(·),u_𝔫_*,ϵ(· ;q_+,ϵ(λ))⟩/λ^'_𝔫_*,ϵ(q_+,ϵ(λ))u_𝔫_*,ϵ(x ;q_+,ϵ(λ)).On the other hand, u_𝔫_*,ϵ(x ;q_+(λ)) satisfies the following quasi-periodic boundary conditionu_𝔫_*,ϵ(x+e_1 ;q_+,ϵ(λ))=e^iq_+,ϵ(λ)u_𝔫_*,ϵ(x ;q_+,ϵ(λ)).Since q_+,ϵ(λ) has a negative imaginary part for Im(λ)<0 by Lemma <ref>, (<ref>) implies that u_𝔫_*,ϵ(x ;q_+,ϵ(λ)) blows up exponentially as x_1→ +∞. Consequently, u(x;λ) decays exponentially as x_1→ +∞ if and only if ⟨φ(·),u_𝔫_*,ϵ(· ;q_+,ϵ(λ))⟩ = 0.§.§ Boundary-integral formulation for the mode bifurcated from the Dirac point We construct a solution to (<ref>) in the formu(x_1,x_2;λ):={ (𝔾̃_ϵ(λ)φ)(x),x_1>0, -(𝔾̃_-ϵ(λ)φ)(x),x_1<0,.for some φ∈H̃^-1/2(Γ). By Proposition <ref>,(ℒ_ϵ-λ)u=0(x_1>0)and (ℒ_-ϵ-λ)u=0(x_1<0).The continuity of Dirichlet and Neumann data givesu(0^-,x_2;λ)=u(0^+,x_2;λ), ∂ u/∂ x_1(0^-,x_2;λ)=∂ u/∂ x_1(0^+,x_2;λ).By (<ref>), condition (<ref>) holds for the constructed solution (<ref>). By substituting (<ref>) into (<ref>), we obtain the following boundary integral equation𝔾_ϵ^Γ(λ)φ :=(𝔾̃_ϵ(λ)φ+𝔾̃_-ϵ(λ)φ)|_Γ=0.Thus, the function u(x;λ) defined in (<ref>) is a H^1_loc(Ω)-solution of (<ref>) only if λ∈ℐ_ϵ is a characteristic value of 𝔾_ϵ^Γ(λ)∈ℬ(H̃^-1/2(Γ), H^1/2(Γ)). Conversely, when (<ref>) holds, u(x;λ) in (<ref>) solves (<ref>). Hence, the eigenvalue problem (<ref>) is equivalent to the characteristic value problem (<ref>). In addition, the following holds: Let λ∈ℐ_ϵ be a characteristic value of 𝔾_ϵ^Γ(λ) and φ∈H̃^-1/2(Γ) the associated eigenvector for (<ref>). Then the function u(x;λ) defined by (<ref>) satisfies the properties in Theorem <ref>. We first prove that Im(λ)≤ 0.Assume the contrary that Im(λ)>0. 
Then Proposition <ref> indicates that u(x;λ) exponentially decays as |x_1|→∞. An integral by parts shows that∫_Ω^+|∇ u|^2 dx-λ∫_Ω^+n^2_ϵ|u|^2 dx=-∫_Γ∂ u/∂ x_1u dx_2.Similarly,∫_Ω^-|∇ u|^2 dx-λ∫_Ω^-n^2_-ϵ|u|^2 dx=∫_Γ∂ u/∂ x_1u dx_2.Since the interface conditions (<ref>) and (<ref>) are satisfied when (<ref>) holds, we can sum (<ref>) and (<ref>) to get∫_Ω|∇ u|^2 dx-λ(∫_Ω^+n^2_ϵ|u|^2 dx+∫_Ω^-n^2_-ϵ|u|^2 dx)=0.Since Im(λ)>0 and n_±ϵ(x)>0 for any x∈Ω, (<ref>) holds only if u(x)≡ 0 for x∈Ω. Then (<ref>) indicates thatφ/2=(∂ u/∂ x_1)|_Γ_+=0,which contradicts the assumption that φ≠ 0. Therefore Im(λ)≤ 0. Next, we consider the two cases Im(λ)= 0 and Im(λ)<0, respectively. Case 1: Im(λ)= 0. We aim to prove (<ref>); then Proposition <ref> indicates that u(x;λ) decays exponentially as |x_1|→∞. By (<ref>), we write (<ref>) as1/2π∫_-π^π∑_n≠𝔫_*u_n,ϵ(x;p)⟨φ(·),u_n,ϵ(·;p)⟩/λ-λ_n,ϵ(p)dp +1/2πp.v.∫_-π^πu_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(·;p)⟩/λ-λ_𝔫_*,ϵ(p)dp +1/2π∫_-π^π∑_n≠𝔫_*u_n,-ϵ(x;p)⟨φ(·),u_n,-ϵ(·;p)⟩/λ-λ_n,-ϵ(p)dp +1/2πp.v.∫_-π^πu_𝔫_*,-ϵ(x;p)⟨φ(·),u_𝔫_*,-ϵ(·;p)⟩/λ-λ_𝔫_*,-ϵ(p)dp- (i⟨φ(·),u_𝔫_*,ϵ(· ;q_+,ϵ(λ))⟩/2|λ_𝔫_*,ϵ^'(q_+,ϵ(λ))|u_𝔫_*,ϵ(x ;q_+,ϵ(λ)) + i⟨φ(·),u_𝔫_*,ϵ(· ;q_-,ϵ(λ))⟩/2|λ_𝔫_*,ϵ^'(q_-,ϵ(λ))|u_𝔫_*,ϵ(x ;q_-,ϵ(λ)) )- (i⟨φ(·),u_𝔫_*,-ϵ(· ;q_+,-ϵ(λ))⟩/2|λ_𝔫_*,-ϵ^'(q_+,-ϵ(λ))|u_𝔫_*,-ϵ(x ;q_+,-ϵ(λ)) + i⟨φ(·),u_𝔫_*,-ϵ(· ;q_-,-ϵ(λ))⟩/2|λ_𝔫_*,-ϵ^'(q_-,-ϵ(λ))|u_𝔫_*,-ϵ(x ;q_-,-ϵ(λ)) ) =0.Taking dual product with φ to both sides of the above equation, the imaginary part of the resulted equation yields|⟨φ(·),u_𝔫_*,ϵ(·;q_-,ϵ(λ))|^2/|λ_𝔫_*,ϵ^'(q_-,ϵ(λ))| =|⟨φ(·),u_𝔫_*,ϵ(·;q_+,ϵ(λ))|^2/|λ_𝔫_*,ϵ^'(q_+,ϵ(λ))|=0,which gives (<ref>).Case 2: Im(λ)<0. If u(x;λ) decays exponentially as |x_1|→ +∞, then we draw the same contradiction as in (<ref>). As a result, Proposition <ref> implies that u(x;λ) blows up at least in one direction. Thus, u(x;λ)_L^2(Ω)=∞ and u is a resonant mode.§.§ Properties of boundary integral operatorsHere and henceforth, for each λ∈ℐ_ϵ, we parameterize λ as λ:=λ_*+ϵ· h for h∈𝒥:={h∈𝐂:|h|<c_0 |t_*|}.In this section, we investigate the boundary integral operator 𝔾^Γ_ϵ(λ_*+ϵ· h) for h∈𝒥. The results obtained here shall pave the way for applying the Gohberg-Sigal theory (see for instance <cit.>) to solve the characteristic value problem (<ref>). We first study the limit of 𝔾^Γ_ϵ(λ_*+ϵ· h) in ℬ(H̃^-1/2(Γ),H^1/2(Γ)) as ϵ→ 0. The result is summarized in the following proposition. 𝔾^Γ_ϵ(λ_*+ϵ· h) converges uniformly for h∈𝒥 as ϵ→ 0:lim_ϵ→ 0𝔾^Γ_ϵ(λ_*+ϵ· h) -(2𝕋+β(h)ℙ^Dirac) _ℬ(H̃^-1/2(Γ),H^1/2(Γ))=0,whereβ(h):=-1/|t_*|α_n_*h/√(1-h^2/|t_*|),ℙ^Dirac:=⟨·, v_n_*(x;0)⟩ v_n_*(x;0),and𝕋:= 1/2π∫_-π^π∑_n≠ n_*,m_*v_n(x;p) ⟨·,v_n(y;p)⟩/λ_*-μ_n(p)dp +(-i/2v_n_*(x;-q_*) ⟨·,v_n_*(y;-q_*)⟩/|μ_n_*^'(-q_*)| -i/2v_m_*(x;q_*) ⟨· ,v_m_*(y;q_*)⟩/|μ_m_*^'(q_*)|+1/2πp.v.∫_-π^πv_n_*(x;p) ⟨· ,v_n_*(y;p)⟩/λ_*-μ_n_*(p)dp +1/2πp.v.∫_-π^πv_m_*(x;p)⟨· ,v_m_*(y;p)⟩/λ_*-μ_m_*(p)dp ). See Appendix B.We then investigate the operator 𝕋.𝕋∈ℬ(H̃^-1/2(Γ),H^1/2(Γ)) is a Fredholm operator of index zero. Moreover, the kernel of 𝕋 is given by(𝕋)=span{∂ v_n_*(x;0)/∂ x_1|_Γ}.See Appendix C.By Proposition <ref>, we can show that the limit operator 𝔾^Γ_0(h):=2𝕋+β(h)ℙ^Dirac has the following properties. The proof is the same as Proposition 4.6 of <cit.>.For h∈𝒥, 𝔾^Γ_0(h) is a Fredholm operator with index zero, analytical for h∈𝒥, and continuous for h∈∂𝒥. As a function of h, it attains a unique characteristic value h=0 in 𝒥, whose null multiplicity is one. 
Moreover, 𝔾^Γ_0(h) is invertible for any h∈𝒥 with h≠ 0.As a consequence, we have For h∈𝒥, 𝔾_ϵ^Γ(λ_*+ϵ· h) is a Fredholm operator with index zero and is analytic as a function h. By Proposition <ref> and <ref>, and the fact that Fredholm index is stable under small perturbation <cit.>, we conclude that 𝔾^Γ_ϵ(λ_*+ϵ· h) is a Fredholm operator with zero index for h∈𝒥 and small ϵ. The analyticity of 𝔾^Γ_ϵ(λ_*+ϵ· h) follows from (<ref>) and Proposition <ref>.§.§ Proof of Theorem <ref>By Proposition <ref>, it's sufficient to show that 𝔾^Γ(λ_*+ϵ· h) has a characteristic value for h∈𝒥. Since 𝔾^Γ_ϵ(λ_*+ϵ· h)→𝔾^Γ_0(h) (by Proposition <ref>), and 𝔾^Γ_0(h) is invertible for h∈∂ J (by Proposition <ref>), we have(𝔾^Γ_0(h))^-1(𝔾^Γ_ϵ(λ_*+ϵ· h)-𝔾^Γ_0(h)) → 0 for h∈∂ J.Note that the convergence is uniform in h. As a consequence, the following inequality holds for h∈∂𝒥 and ϵ being sufficiently small(𝔾^Γ_0(h))^-1(𝔾^Γ_ϵ(λ_*+ϵ· h)-𝔾^Γ_0(h)) _ℬ(H̃^-1/2(Γ))<1.Then the generalized Rouché theorem (see Theorem 2.9 in <cit.>) indicates that, for sufficiently small ϵ>0, 𝔾^Γ_ϵ(λ_*+ϵ· h) attains a unique characteristic value λ^⋆ :=λ_*+h^⋆ with h^⋆∈𝒥. This concludes the proof. § APPENDIX §.§ Appendix A: Proof of Lemma <ref>Step 1: We first show that statement 1 holds for ν=ϵ^2 when ϵ is sufficiently small.The key is that λ_𝔫_*,ϵ(p) is an isolated eigenvalue of the self-adjoint operator ℒ_ϵ(p) for each p∈ [-π,π], as seen in Figure <ref>. Since {ℒ_ϵ(p)} is an analytic family indexed by p, the Kato-Rellich theorem <cit.> indicates that λ_𝔫_*,ϵ(p) is analytic in a neighborhood of [-π,π]. Hence we only need to show the maximal analytic domain of λ_𝔫_*,ϵ(p) contains D_ϵ, ν as supposed. For each p∈ [-π,π], λ_𝔫_*,ϵ is analytic in the neighborhood B(p,r(p;ϵ)) with the radius r(p;ϵ)>0. Moreover, Theorem 3.9 in Chapter VII of <cit.> implies thatr(p;ϵ)≳ d(p;ϵ):=min{|λ_𝔫_*,ϵ(p_0)-λ_𝔫_*-1,ϵ(p_0)|,|λ_𝔫_*,ϵ(p)-λ_𝔫_*+1,ϵ(p)|}.Observe that by Assumption <ref>,d(p;ϵ)=O(1) for p near ± q_* (see Figure <ref>). More precisely,d(p;ϵ)>d_0>0,∀ p∈ (q_*-|ϵ|^1/3,q_*+|ϵ|^1/3)∪ (-q_*-|ϵ|^1/3,-q_*+|ϵ|^1/3),for some constant d_0 that is independent of ϵ. Thus, by (<ref>),B(q_*,|ϵ|^1/3)∪ B(-q_*,|ϵ|^1/3) is contained in the analytic domain of λ_𝔫_*,ϵ(p).We next introduce the following rectangle ℛ_ν:={p∈𝐂:-(1+ν)π<Re(p)< (1+ν)π, |Im(p)|<ν}.Note that for ν= ϵ^2, the width of R_ϵ^2 is ϵ^2. In contrast, min_p∈ [-π,π]d(p;ϵ) =O(|ϵ|), which follows fromTheorem <ref>. Thus, by the estimate (<ref>), λ_𝔫_*,ϵ(p) is analytic inside R_ϵ^2 for ϵ sufficiently small. Therefore, statement 1 holds for ϵ sufficiently small and ν= ϵ^2. Step 2: We prove that for ϵ sufficiently small, statement 2 holds for ν= ϵ^2. Step 2.1: We first note that λ_𝔫_*,ϵ(p) is analytic in p for p near q_*.For ϵ small enough,equation (<ref>) implies thatλ_𝔫_*,ϵ^'(q_*) > 1/2λ_𝔫_*^'(q_*)>0.By the inverse function theorem for analytical functions, λ_𝔫_*,ϵ(·) maps the open disc B(q_*,|ϵ|^1/3) biholomorphically to an open neighborhood U(λ_𝔫_*,ϵ(q_*)) of λ_𝔫_*,ϵ(q_*) which contains an open disc B(λ_𝔫_*,ϵ(q_*), c_1|ϵ|^1/3) for some constant c_1 independent of ϵ. In addition, sinceλ_𝔫_*,ϵ(p) is real-valued for real-valued p, λ_𝔫_*,ϵ(·) maps the upper half-disc B(q_*,|ϵ|^1/3) ∩𝐂_+ to the upper neighborhood U(λ_𝔫_*,ϵ(q_*))∩𝐂_+ and the lower half-disc B(q_*,|ϵ|^1/3) ∩𝐂_- to the lower half-neighborhood U(λ_𝔫_*,ϵ(q_*))∩𝐂_-. 
On the other hand, the estimate (<ref>) indicates that λ_𝔫_*,ϵ(q_*)-λ_*=𝒪(|ϵ|).Then ℐ_ϵ⊂ U(λ_𝔫_*,ϵ(q_*)) for ϵ is small enough.Therefore, for any λ∈ℐ_ϵ, we can conclude that for ϵ is small enough there exists a unique root p=q_+,ϵ(λ)∈ B(q_*,|ϵ|^1/3) with the estimate |q_+,ϵ(λ)-q_*|=𝒪(ϵ).Moreover,Im(q_+,ϵ(λ))>0 for Im(λ)>0 and Im(q_+,ϵ(λ))<0 for Im(λ)<0.Next, observe thatλ_𝔫_*,ϵ(p)=λ_𝔫_*,ϵ(-p), p∈[-π,π].The uniqueness of analytic continuation implies that (<ref>) holds for all p∈ D_ϵ,ϵ^2. Consequently, q_-(λ,ϵ):=-q_+(λ,ϵ) is also a root of λ_𝔫_*,ϵ(p)=λ. It's then straightforward to check that q_-,ϵ(λ) satisfies all the desired properties in statement 2.Step 2.2: We prove that for ϵ small enough, λ_𝔫_*,ϵ(p)=λ (λ∈ℐ_ϵ) has no root for p∈ D_ϵ,ϵ^2\ (B(q_*,|ϵ|^1/3)∪ B(-q_*,|ϵ|^1/3). By Theorem <ref>, we have|λ_𝔫_*,ϵ(Re(p))-λ_*|≥ |t_*||ϵ|,∀ p∈ D_ϵ,ϵ^2\ (B(q_*,|ϵ|^1/3)∪ B(-q_*,|ϵ|^1/3).On the other hand, |λ_𝔫_*,ϵ(p)-λ_𝔫_*,ϵ(Re(p))|≲ |Im(p)|≲ϵ^2 ,∀ p∈ D_ϵ,ϵ^2\ (B(q_*,|ϵ|^1/3)∪ B(-q_*,|ϵ|^1/3).As a result, (<ref>) and (<ref>) imply that for all λ∈ℐ_ϵ,|λ_𝔫_*,ϵ(p)-λ|≥ |λ_𝔫_*,ϵ(Re(p))-λ_*|-|λ-λ_*|-|λ_𝔫_*,ϵ(p)-λ_𝔫_*,ϵ(Re(p))| ≥ (1-c_0)|t_*||ϵ|-ϵ^2,where c_0<1 is fixed in Theorem <ref>. Thus for ϵ small enough, |λ_𝔫_*,ϵ(p)-λ| ≥ (1-c_0)|t_*||ϵ|-ϵ^2>0 ∀λ∈ℐ_ϵ.Hence λ_𝔫_*,ϵ(p)=λ has no root for p∈ D_ϵ,ϵ^2\ (B(q_*,|ϵ|^1/3)∪ B(-q_*,|ϵ|^1/3). In conclusion, we proved that statement 2 holds for ϵ sufficiently small and ν=ϵ^2.Step 3: We prove that statement 3 holds for ϵ sufficiently small and properly chosen ν=ν_1(ϵ). Since u_𝔫_*,ϵ(x;p) is analytic in p, so is u_𝔫_*,ϵ(x;p) and ℙ_𝔫_*,ϵ(p). Now we consider the analyticity of(ℒ_ϵ(p)ℚ_𝔫_*,ϵ(p)-λ)^-1. Note that for p∈ [-π,π], we haveσ(ℒ_ϵ(p)ℚ_𝔫_*,ϵ(p)) ={0}∪{λ_n,ϵ(p)}_n≠𝔫_*.Thus, for λ∈ℐ_ϵ and p∈ [-π,π], it holds thatdist(λ,σ(ℒ_ϵ(p)ℚ_𝔫_*,ϵ(p)))≥dist(Re(λ),σ(ℒ_ϵ(p)ℚ_𝔫_*,ϵ(p))) =min{min_p∈ [-π,π] |Re(λ)-λ_𝔫_*-1,ϵ(p)|,min_p∈ [-π,π] |Re(λ)-λ_𝔫_*+1,ϵ(p)|}.By Assumption <ref>, Re(λ)≠λ_𝔫_*-1(p) for any p∈ [-π,π]\{0}. Thus, the perturbation theory implies thatϵ small enough, min_p∈ [-π,π] |Re(λ)-λ_𝔫_*-1,ϵ(p)|>0.On the other hand, by equation (<ref>) in Theorem <ref>, we havemin_p∈ [-π,π] |Re(λ)-λ_𝔫_*+1,ϵ(p)|=|Re(λ)-λ_𝔫_*+1,ϵ(0)| ≳ (1-c_0)|t_*||ϵ|>0,for any λ∈ℐ_ϵ.In conclusion, (<ref>) indicates that dist(λ,σ(ℒ_ϵ(p)ℚ_𝔫_*,ϵ(p)))>c_2|ϵ| for any λ∈ℐ_ϵ and p∈ [-π,π], where c_2>0 is a constant independent of ϵ. Thus (ℒ_ϵ(p)ℚ_𝔫_*,ϵ(p)-λ)^-1 is well-defined for λ∈ℐ_ϵ and p∈ [-π,π].Furthermore, since p↦ℒ_ϵ(p)ℚ_𝔫_*,ϵ(p) is analytic, the analytic mapping p↦ℒ_ϵ(p)ℚ_𝔫_*,ϵ(p) can be extended to a complex neighborhood of [-π,π]. We denote this neighborhood by ℛ_ν, with ν=ν_1(ϵ)>0. Hence we prove that p↦ℒ_ϵ(p)ℚ_𝔫_*,ϵ(p) is analytic in ℛ_ν, which completes the proof of statement 3.Step 4. Finally, we conclude that for ϵ sufficiently small, say |ϵ|<ϵ_0 for some constant ϵ_0>0, ν(ϵ)=min{ν_1(ϵ),ϵ^2} satisfies all the required properties in the lemma.This concludes the proof of Lemma <ref>. 
§.§ Appendix B: Proof of Proposition <ref>We write 𝔾^Γ_ϵ=𝔾^Γ,evan_ϵ+𝔾^Γ,prop_ϵ and 𝕋=𝕋^evan+𝕋^prop, where𝔾^Γ,evan_ϵ(λ_*+ϵ h):=1/2π∫_-π^π∑_n≠𝔫_*,𝔫_*+1u_n,ϵ(x;p)⟨φ(·),u_n,ϵ(· ;p)⟩/λ_*+ϵ h-λ_n,ϵ(p)dp+1/2π∫_-π^π∑_n≠𝔫_*,𝔫_*+1u_n,-ϵ(x;p)⟨φ(·),u_n,-ϵ(· ;p)⟩/λ_*+ϵ h-λ_n,-ϵ(p)dp :=𝔾̃^evan_ϵ(λ_*+ϵ h) +𝔾̃^evan_-ϵ(λ_*+ϵ h), 𝔾^Γ,prop_ϵ(λ_*+ϵ h):=1/2π∫_C_ϵu_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(· ;p)⟩/λ_*+ϵ h-λ_𝔫_*,ϵ(p)dp + 1/2π∫_-π^πu_𝔫_*+1,ϵ(x;p)⟨φ(·),u_𝔫_*+1,ϵ(· ;p)⟩/λ_*+ϵ h-λ_𝔫_*+1,ϵ(p)dp+1/2π∫_C_ϵu_𝔫_*,-ϵ(x;p)⟨φ(·),u_𝔫_*,-ϵ(· ;p)⟩/λ_*+ϵ h-λ_𝔫_*,-ϵ(p)dp+ 1/2π∫_-π^πu_𝔫_*+1,-ϵ(x;p)⟨φ(·),u_𝔫_*+1,-ϵ(· ;p)⟩/λ_*+ϵ h-λ_𝔫_*+1,-ϵ(p)dp,𝕋^evan :=1/2π∫_-π^π∑_n≠ n_*,m_*v_n(x;p) ⟨·,v_n(y;p)⟩/λ_*-μ_n(p)dp , 𝕋^prop :=-i/2v_n_*(x;-q_*) ⟨·,v_n_*(y;-q_*)⟩/|μ_n_*^'(-q_*)| -i/2v_m_*(x;q_*) ⟨· ,v_m_*(y;q_*)⟩/|μ_m_*^'(q_*)|+1/2πp.v.∫_-π^πv_n_*(x;p) ⟨· ,v_n_*(y;p)⟩/λ_*-μ_n_*(p)dp +1/2πp.v.∫_-π^πv_m_*(x;p)⟨· ,v_m_*(y;p)⟩/λ_*-μ_m_*(p)dp.Then𝔾^Γ_ϵ(λ_*+ϵ h)-(2𝕋+β(h)ℙ^Dirac) =(𝔾^Γ,evan_ϵ(λ_*+ϵ h) -2𝕋^evan) +(𝔾^Γ,prop_ϵ(λ_*+ϵ h) -2𝕋^prop-β(h)ℙ^Dirac) =(𝔾̃^evan_ϵ(λ_*+ϵ h) +𝔾̃^evan_-ϵ(λ_*+ϵ h) -2𝕋^evan) +(𝔾^Γ,prop_ϵ(λ_*+ϵ h) -2𝕋^prop-β(h)ℙ^Dirac).Therefore Proposition <ref> follows from Lemma <ref> and <ref> below. Note that all convergence in those lemmas is uniform in h∈𝒥.lim_ϵ→ 0𝔾̃^evan_±ϵ(λ_*+ϵ h) - 𝕋^evan_ℬ(H̃^-1/2(Γ),H^1/2(Γ)) =0.lim_ϵ→ 0𝔾^Γ,prop_ϵ(λ_*+ϵ h) -2𝕋^prop-β(h)ℙ^Dirac_ℬ(H̃^-1/2(Γ),H^1/2(Γ)) =0. We prove (<ref>) for 𝔾̃^evan_ϵ(λ_*+ϵ h), while the proof of 𝔾̃^evan_-ϵ(λ_*+ϵ h) is the same. We define𝔾̃_+,ϵ^evan(λ_*+ϵ h):=1/2π∫_-π^π∑_n>𝔫_*+1u_n,ϵ(x;p)⟨φ(·),u_n,ϵ(· ;p)⟩/λ_*+ϵ h-λ_n,ϵ(p)dp, 𝔾̃_-,ϵ^evan(λ_*+ϵ h):=1/2π∫_-π^π∑_n<𝔫_*u_n,ϵ(x;p)⟨φ(·),u_n,ϵ(· ;p)⟩/λ_*+ϵ h-λ_n,ϵ(p)dp, 𝕋_+:=1/2π∫_-π^π∑_n>𝔫_*+1u_n(x;p)⟨φ(·),u_n(· ;p)⟩/λ_*-λ_n(p)dp,𝕋_-:=1/2π∫_-π^π∑_n<𝔫_*u_n(x;p)⟨φ(·),u_n(· ;p)⟩/λ_*-λ_n(p)dp.Note that (<ref>) follows from the decomposition 𝔾̃^evan_ϵ=𝔾̃_+,ϵ^evan+𝔾̃_-,ϵ^evan, 𝕋=𝕋_++𝕋_-, and the following identitieslim_ϵ→ 0𝔾̃_+,ϵ^evan(λ_*+ϵ h)-𝕋_+_ℬ(H̃^-1/2(Γ),H^1/2(Γ)) =0, lim_ϵ→ 0𝔾̃_-,ϵ^evan(λ_*+ϵ h)-𝕋_-_ℬ(H̃^-1/2(Γ),H^1/2(Γ)) =0.We prove (<ref>) and (<ref>) in the following two steps.Step 1: we first prove (<ref>). Define ℙ_+,ϵ(p)f:=∑_n≤𝔫_*+1(f,u_n,ϵ(x;p))_L^2(Y;n_ϵ(x))u_n,ϵ(x;p),ℚ_+,ϵ(p):=1-ℙ_+,ϵ(p).Then we can rewrite 𝔾̃^evan_+,ϵ(λ_*+ϵ h) and 𝕋_+^evan as𝔾̃^evan_+,ϵ(λ_*+ϵ h)=1/2π∫_-π^π( Tr∘(ℒ_ϵ(p)ℚ_+,ϵ(p)-λ_*-ϵ h)^-1ℚ_+,ϵ(p)∘(1/n_ϵ^2(y)ℳ) )dp,𝕋_+^evan =1/2π∫_-π^π( Tr∘(ℒ(p)ℚ_+(p)-λ_*)^-1ℚ_+(p)∘(1/n^2(y)ℳ) )dp.By Lemma <ref>,the map p↦Tr∘(ℒ_ϵ(p)ℚ_+,ϵ(p)-λ_*-ϵ h)^-1ℚ_+,ϵ(p)∘ (1/n_ϵ^2(y)ℳ) is analytic. We claim that* Tr∘(ℒ_ϵ(p)ℚ_+,ϵ(p)-λ_*-ϵ h)^-1ℚ_+,ϵ(p)∘ (1/n_ϵ^2(y)ℳ)∈ℬ(H̃^-1/2(Γ),H^1/2(Γ)) has uniformly bounded norm for p∈ [-π,π] and ϵ≪ 1;* Tr∘(ℒ_ϵ(p)ℚ_+,ϵ(p)-λ_*-ϵ h)^-1ℚ_+,ϵ(p)∘ (1/n_ϵ^2(y)ℳ) converges to Tr∘(ℒ(p)ℚ_+(p)-λ_*)^-1ℚ_+(p)∘ (1/n^2(y)ℳ) in operator norm for each p∈ [-π,π]. Then (<ref>) follows from the integral expression (<ref>) and the dominated convergence theorem. The proof of claim (1)-(2) are presented in Steps 1.1 and 1.2 below, respectively.Step 1.1: Note that σ(ℒ_ϵ(p)ℚ_+(p))={0}∪{λ_n(p)}_n>𝔫_*+1.Thus, for h∈𝒥, λ_*+ϵ h∉σ(ℒ_ϵ(p)ℚ_+(p)) and the resolvent (ℒ_ϵ(p)ℚ_+(p)-λ_*-ϵ h)^-1 is well-defined. Moreover,(ℒ_ϵ(p)ℚ_+,ϵ(p)-λ_*-ϵ h)^-1_ℬ((H^1_p,b(Ω)^*,H^1_p,b(Ω)))≤1/dist(λ_*+ϵ h,σ(ℒ_ϵ(p)ℚ_+(p)))=𝒪(1),where the estimate is uniform for p∈ [-π,π]. Thus,Tr∘(ℒ_ϵ(p)ℚ_+,ϵ(p) -λ_*-ϵ h)^-1ℚ_+,ϵ(p)∘ (1/n_ϵ^2(y)ℳ)_ℬ(H̃^-1/2(Γ),H^1/2(Γ))=𝒪(1),and claim (1) follows.Step 1.2: Note that ℒ_ϵ(p)ℚ_+,ϵ(p) converges to ℒ(p)ℚ_+(p) in the generalized sense as ϵ→ 0 for each p∈[-π,π]. 
As a consequence,lim_ϵ→ 0(ℒ_ϵ(p)ℚ_+,ϵ(p)-λ_*)^-1-(ℒ(p)ℚ_+(p)-λ_*)^-1_ℬ((H^1_p,b(Ω)^*,H^1_p,b(Ω)))=0,for each p∈[-π,π]. On the other hand, for each n≤𝔫_*+1, lim_ϵ→ 0u_n,ϵ(·;p)-u_n(·;p)_H^1_p,b→ 0. So lim_ϵ→ 0ℙ_+,ϵ(p)-ℙ_+(p)_ℬ((H^1_p,b(Ω)^*,H^1_p,b(Ω)))→ 0. Thus,lim_ϵ→ 0ℚ_+,ϵ(p) -ℚ_+(p)_ℬ((H^1_p,b(Ω)^*,H^1_p,b(Ω))) =lim_ϵ→ 0ℙ_+,ϵ(p) -ℙ_+(p)_ℬ((H^1_p,b(Ω)^*,H^1_p,b(Ω)))=0.(<ref>) and (<ref>) imply that(ℒ_ϵ(p)ℚ_+,ϵ(p)-λ_*)^-1ℚ_+,ϵ(p)-(ℒ(p)ℚ_+(p)-λ_*)^-1ℚ_+(p)_ℬ((H^1_p,b(Ω)^*,H^1_p,b(Ω)))≤(ℒ_ϵ(p)ℚ_+,ϵ(p)-λ_*)^-1-(ℒ(p)ℚ_+(p)-λ_*)^-1_ℬ((H^1_p,b(Ω)^*,H^1_p,b(Ω)))·ℚ_+,ϵ(p)_ℬ((H^1_p,b(Ω)^*,H^1_p,b(Ω))) +(ℒ(p)ℚ_+(p)-λ_*)^-1_ℬ((H^1_p,b(Ω)^*,H^1_p,b(Ω)))·ℚ_+,ϵ(p) -ℚ_+(p)_ℬ((H^1_p,b(Ω)^*,H^1_p,b(Ω)))→ 0,ϵ→ 0.Next, note that(ℒ_ϵ(p) ℚ_+,ϵ(p)-λ_*)^-1ℚ_+,ϵ(p) -(ℒ_ϵ(p)ℚ_+,ϵ(p)-λ_*-ϵ h)^-1ℚ_+,ϵ(p)_ℬ((H^1_p,b(Ω)^*,H^1_p,b(Ω)))=|ϵ h|·(ℒ_ϵ(p)ℚ_+,ϵ(p)-λ_*)^-1∘ (ℒ_ϵ(p)ℚ_+,ϵ(p)-λ_*-ϵ h)^-1ℚ_+,ϵ(p)_ℬ((H^1_p,b(Ω)^*,H^1_p,b(Ω))).Since (ℒ_ϵ(p)ℚ_+,ϵ(p)-λ_*-ϵ h)^-1 is uniformly bounded (proved in Step 1.1), lim_ϵ→ 0(ℒ_ϵ(p)ℚ_+,ϵ(p)-λ_*)^-1ℚ_+,ϵ(p) -(ℒ_ϵ(p)ℚ_+,ϵ(p)-λ_*-ϵ h)^-1ℚ_+,ϵ(p)_ℬ((H^1_p,b(Ω)^*,H^1_p,b(Ω)))=0.Combing (<ref>) with (<ref>), we obtainlim_ϵ→ 0(ℒ_ϵ(p)ℚ_+,ϵ(p)-λ_*-ϵ h)^-1ℚ_+,ϵ(p) - (ℒ(p)ℚ_+(p)-λ_*)^-1ℚ_+(p)_ℬ((H^1_p,b(Ω)^*,H^1_p,b(Ω)))=0,whence the point-wise convergence in claim (2) follows. Step 2: The proof of (<ref>) is also based on the dominated convergence theorem. With the projection operator ℙ_n,ϵ(p) defined in Lemma <ref>, we write𝔾̃_-,ϵ^evan(λ_*+ϵ h) =1/2π∑_n=1^𝔫_*-1∫_-π^πTr∘ℙ_n,ϵ(p)∘(1/n^2_ϵ(y)ℳ)/λ_*+ϵ h-λ_n,ϵ(p)dp,𝕋_- =1/2π∑_n=1^𝔫_*-1∫_-π^πTr∘ℙ_n(p)∘ (1/n^2(y)ℳ)/λ_*-λ_n(p)dp.For 1≤ n≤𝔫_*-1, we haveTr∘ℙ_n,ϵ(p)∘ (1/n^2_ϵ(y)ℳ)/λ_*+ϵ h-λ_n(p)_ℬ(H̃^-1/2(Γ),H^1/2(Γ))≲1/|λ_*+ϵ h-λ_n(p)|=𝒪(1).On the other hand, sincelim_ϵ→ 0ℙ_n,ϵ(p)-ℙ_n(p)_ℬ((H^1_p,b(Ω)^*,H^1_p,b(Ω)))=0,lim_ϵ→ 0|λ_n,ϵ(p)-λ_n(p)|=0(∀ p∈[-π,π]),Tr∘ℙ_n,ϵ(p)∘ (1/n^2_ϵ(y)ℳ)/λ_*+ϵ· h-λ_n,ϵ(p) converges to Tr∘ℙ_n(p)∘ (1/n^2(y)ℳ)/λ_*-λ_n(p) in operator norm for each p∈[-π,π]. So the dominated convergence theorem implies that for each n with 1≤ n≤𝔫_*-1,∫_-π^πTr∘ℙ_n,ϵ(p)∘ (1/n^2_ϵ(y)ℳ)/λ_*+ϵ h-λ_n,ϵ(p)dp -∫_-π^πTr∘ℙ_n(p)∘ (1/n^2(y)ℳ)/λ_*-λ_n(p)dp _ℬ(H̃^-1/2(Γ),H^1/2(Γ))→ 0.Since the summation in (<ref>) is finite, the convergence of 𝔾̃_-,ϵ^evan(λ_*+ϵ h) follows. Let γ_-,ϵ:={-q_*+ϵ^1/3e^iθ:π≥θ≥ 0}, γ_+,ϵ:={q_*+ϵ^1/3e^iθ:π≤θ≤ 2π}. We decompose the contour C_ϵ asC_ϵ=[-π,-q_*-ϵ^1/3]∪γ_-,ϵ∪[-q_*+ϵ^1/3,-ϵ^1/3]∪ [-ϵ^1/3,ϵ^1/3]∪[ϵ^1/3,q_*-ϵ^1/3]∪γ_+,ϵ∪ [q_*+ϵ^1/3,π].Our strategy of proving Lemma <ref> is to first decompose the operators in (<ref>) into several parts according to the decomposition of the contour C_ϵ, and then prove the convergence of each part. More precisely, we shall prove the following identities in ℬ(H̃^-1/2(Γ),H^1/2(Γ)): ∫_[-π,-q_*-ϵ^1/3]∪ [-q_*+ϵ^1/3,-ϵ^1/3]u_𝔫_*,±ϵ(x;p)⟨·,u_𝔫_*,±ϵ(· ;p)⟩/λ_*+ϵ h-λ_𝔫_*,±ϵ(p) dp + ∫_ϵ^1/3^πu_𝔫_*+1,±ϵ(x;p)⟨·,u_𝔫_*+1,±ϵ(· ;p)⟩/λ_*+ϵ h-λ_𝔫_*+1,±ϵ(p)dp - p.v.∫_-π^πv_n_*(x;p) ⟨· ,v_n_*(y;p)⟩/λ_*-μ_n_*(p)dp → 0.∫_[ϵ^1/3,q_*-ϵ^1/3]∪ [q_*+ϵ^1/3,π]u_𝔫_*,±ϵ(x;p)⟨·,u_𝔫_*,±ϵ(· ;p)⟩/λ_*+ϵ h-λ_𝔫_*,±ϵ(p) dp + ∫_-π^-ϵ^1/3u_𝔫_*+1,±ϵ(x;p)⟨·,u_𝔫_*+1,±ϵ(· ;p)⟩/λ_*+ϵ h-λ_𝔫_*+1,±ϵ(p)dp - p.v.∫_-π^πv_m_*(x;p) ⟨· ,v_m_*(y;p)⟩/λ_*-μ_m_*(p)dp → 0. 
1/2π∫_γ_-,ϵu_𝔫_*,±ϵ(x;p)⟨·,u_𝔫_*,±ϵ(· ;p)⟩/λ_*+ϵ h-λ_𝔫_*,±ϵ(p) dp + i/2v_n_*(x;-q_*) ⟨·,v_n_*(y;-q_*)⟩/|μ_n_*^'(-q_*)|→ 0.1/2π∫_γ_+,ϵu_𝔫_*,±ϵ(x;p)⟨·,u_𝔫_*,±ϵ(· ;p)⟩/λ_*+ϵ h-λ_𝔫_*,±ϵ(p) dp + i/2v_m_*(x;q_*) ⟨·,v_m_*(y;q_*)⟩/|μ_m_*^'(q_*)|→ 0.𝕁_1, ϵ+𝕁_2, ϵ+𝕁_1, -ϵ +𝕁_2, -ϵ -β(h)ℙ^Dirac→ 0.where𝕁_1, ϵ:= 1/2π∫_-ϵ^1/3^ϵ^1/3u_𝔫_*,ϵ(x;p)⟨·,u_𝔫_*,ϵ(· ;p)⟩/λ_*+ϵ h-λ_𝔫_*,ϵ(p)dp,𝕁_2, ϵ:=1/2π∫_-ϵ^1/3^ϵ^1/3u_𝔫_*+1,ϵ(x;p)⟨·,u_𝔫_*+1,ϵ(· ;p)⟩/λ_*+ϵ h-λ_𝔫_*+1,ϵ(p)dp,𝕁_1, -ϵ:=1/2π∫_-ϵ^1/3^ϵ^1/3u_𝔫_*,-ϵ(x;p)⟨·,u_𝔫_*,-ϵ(· ;p)⟩/λ_*+ϵ h-λ_𝔫_*,-ϵ(p)dp,𝕁_2, -ϵ:=1/2π∫_-ϵ^1/3^ϵ^1/3u_𝔫_*+1,-ϵ(x;p)⟨·,u_𝔫_*+1,-ϵ(· ;p)⟩/λ_*+ϵ h-λ_𝔫_*+1,-ϵ(p)dp.Note that the proof of (<ref>) and (<ref>) are the same as the proof of Proposition 4.5 of <cit.>. Here we skip it and refer the reader to <cit.> for details. In what follows, we only prove (<ref>) and (<ref>). The proof of (<ref>) is similar to (<ref>).Step 1: We prove (<ref>). Observe that(<ref>) follows from the following identities: 1/2π∫_γ_-,ϵu_𝔫_*,ϵ(x;p)⟨·,u_𝔫_*,ϵ(· ;p)⟩/λ_*+ϵ h-λ_𝔫_*,ϵ(p)dp -1/2π∫_γ_-,ϵu_𝔫_*,ϵ(x;p)⟨·,u_𝔫_*,ϵ(· ;p)⟩/λ_*-λ_𝔫_*,ϵ(p)dp → 0, 1/2π∫_γ_-,ϵu_𝔫_*,ϵ(x;p)⟨·,u_𝔫_*,ϵ(· ;p)⟩/λ_*-λ_𝔫_*,ϵ(p)dp -1/2π∫_γ_-,ϵu_𝔫_*(x;p)⟨·,u_𝔫_*(· ;p)⟩/λ_*-λ_𝔫_*(p)dp → 0, 1/2π∫_γ_-,ϵu_𝔫_*(x;p)⟨·,u_𝔫_*(· ;p)⟩/λ_*-λ_𝔫_*(p)dp + i/2v_n_*(x;-q_*) ⟨·,v_n_*(y;-q_*)⟩/|μ_n_*^'(-q_*)|→ 0.Since (<ref>) follows from the smoothness of λ_𝔫_*,ϵ and u_𝔫_*,ϵ in ϵ, and (<ref>) from the Residue theorem, we only need to prove (<ref>) to conclude the proof. For p∈γ_-,ϵ, we have |q+q_*|=|ϵ|^1/3. Then the application of the inverse function theorem in the proof of statement 2 in Lemma <ref> gives that |λ_*-λ_𝔫_*,ϵ(p)|≳ |ϵ|^1/3. Consequently,|λ_*+ϵ h-λ_𝔫_*,ϵ(p)| ≥|λ_𝔫_*,ϵ(q_*)-λ_𝔫_*,ϵ(p)| - |λ_*+ϵ h-λ_𝔫_*,ϵ(q_*)| ≳ϵ^1/3-ϵ≳ϵ^1/3.for p∈γ_-,ϵ, h∈𝒥. It follows that|1/λ_*+ϵ h-λ_𝔫_*,ϵ(p) - 1/λ_*-λ_𝔫_*,ϵ(p)| =|ϵ h/(λ_*+ϵ h-λ_𝔫_*,ϵ(p))·(λ_*-λ_𝔫_*,ϵ(p))|≤ϵ^1/3|h|.On the other hand, for any φ∈H̃^-1/2(Γ), |⟨φ(·),u_𝔫_*,ϵ(· ;p)⟩| ≲φ_H̃^-1/2(Γ).Therefore∫_γ_-,ϵu_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(· ;p)⟩/λ_*+ϵ h-λ_𝔫_*,ϵ(p)dp - ∫_γ_-,ϵu_𝔫_*,ϵ(x;p)⟨φ(·),u_𝔫_*,ϵ(· ;p)⟩/λ_*-λ_𝔫_*,ϵ(p)dp _H^1/2(Γ)≲ϵ^1/3|h|φ_H̃^-1/2(Γ),whence (<ref>) follows. Step 2: To prove (<ref>), we first note the parity of the perturbed eigenfunction u_n,ϵ(x;p)∼ (𝒫u_n,ϵ)(x;-p) (similar to Lemma <ref>). In particular, when x∈Γ,u_n,ϵ(x;p)∼ u_n,ϵ(x;-p). Thus𝕁_1, ϵ =1/π∫_0^ϵ^1/3u_𝔫_*,ϵ(x;p)⟨·,u_𝔫_*,ϵ(· ;p)⟩/λ_*+ϵ h-λ_𝔫_*,ϵ(p)dp.In light of Theorem <ref>, we can extract the leading order term of 𝕁_1, ϵ using (<ref>) and (<ref>). 
More precisely, we write𝕁_1, ϵ = 𝕁_1, ϵ^(0) + 𝕁_1, ϵ^(1),where 𝕁_1, ϵ^(0) is associated with the kernelJ_1, ϵ^(0)(x,y)=1/π∫_0^ϵ^1/3(f_ϵ(p)v_n_*(x;0)+v_m_*(x;0))·(f_ϵ(p)v_n_*(y;0)+v_m_*(y;0))/(ϵ h+√(α_𝔫_*^2p^2+|t_*|^2ϵ^2))· N^2_𝔫_*,ϵ(p)dp.Heref_ϵ(p):=t_*·ϵ/α_n_*p+√(α_n_*^2p^2+|t_*|^2ϵ^2), N_n,±ϵ(p)=(1+|f_ϵ(p)|^2+𝒪(p+ϵ))^1/2,(n=𝔫_*,𝔫_*+1).We claim thatlim_ϵ→ 0𝕁_1, ϵ^(1)=0.Indeed, by Theorem <ref>, we can show that𝕁_1, ϵ^(1)=𝕁_1, ϵ^(0)·𝒪(ϵ^1/3).On the other hand, using the explicit expression of the kernel of the operator 𝕁_1, ϵ^(0), we have 𝕁_1, ϵ^(0) ≤∫_0^ϵ^1/3f_ϵ(p)v_n_*(x;0)+v_m_*(x;0)_H^1/2(Γ)·f_ϵ(p)v_n_*(y;0)+v_m_*(y;0)_H^1/2(Γ)/(ϵ h+√(α_n_*^2p^2+|t_*|^2ϵ^2))(1+|f_ϵ(p)|^2)dp≲∫_0^ϵ^1/3|f_ϵ(p)|^2 + |f_ϵ(p)|+1/(ϵ h+√(α_n_*^2p^2+|t_*|^2ϵ^2)) · (1+|f_ϵ(p)|^2)dp≲∫_0^ϵ^1/3|1/ϵ h+√(α_n_*^2p^2+|t_*|^2ϵ^2)|dp =|t_*|/α_*∫_0^tan^-1 (α_*/|t_*|ϵ^-2/3)|^2(θ)/h+|t_*|(θ)|dθ =𝒪(log (ϵ)).Therefore (<ref>) follows.In a similar way, we can show that lim_ϵ→ 0𝕁_2, ϵ^(1)= lim_ϵ→ 0𝕁_1, -ϵ^(1)=lim_ϵ→ 0𝕁_2, -ϵ^(1)=0.Thenlim_ϵ→ 0(𝕁_1, ϵ+𝕁_2, ϵ+𝕁_1, -ϵ +𝕁_2, -ϵ)=lim_ϵ→ 0(𝕁_1, ϵ^(0)+𝕁_2, ϵ^(0)+𝕁_1, -ϵ^(0) +𝕁_2, -ϵ^(0)).On the other hand, by direct computing, we haveJ_1, ϵ^(0)+J_2, ϵ^(0)+J_1, -ϵ^(0) +J_2, -ϵ^(0) =∫_0^ϵ^1/3|f_ϵ(p)|^2v_n_*(x;0)v_n_*(y;0)+v_m_*(x;0)v_m_*(y;0)/(ϵ h+√(α_n_*^2p^2+|t_*|^2ϵ^2))(1+|f_ϵ(p)|^2)dp + ∫_0^ϵ^1/3v_n_*(x;0)v_n_*(y;0)+|f_ϵ(p)|^2v_m_*(x;0)v_m_*(y;0)/(ϵ h-√(α_n_*^2p^2+|t_*|^2ϵ^2))(1+|f_ϵ(p)|^2)dp.Using Lemma <ref>, v_n_*(x;0)v_n_*(y;0)=v_m_*(x;0)v_m_*(y;0). Thus,lim_ϵ→ 0(J_1, ϵ^(0)+J_2, ϵ^(0)+J_1, -ϵ^(0) +J_2, -ϵ^(0)) =lim_ϵ→ 0(1/π∫_0^ϵ^1/31/ϵ h+√(α_n_*^2p^2+|t_*|^2ϵ^2)dp +1/π∫_0^ϵ^1/31/ϵ h-√(α_n_*^2p^2+|t_*|^2ϵ^2)dp)v_n_*(x;0)v_n_*(y;0)=-1/|t_*|α_*h/√(1-h^2/|t_*|^2)v_n_*(x;0)v_n_*(y;0) =β(h)v_n_*(x;0)v_n_*(y;0),and consequently,lim_ϵ→ 0(𝕁_1, ϵ^(0)+𝕁_2, ϵ^(0)+𝕁_1, -ϵ^(0) +𝕁_2, -ϵ^(0))=β(h)ℙ^Dirac.It follows that lim_ϵ→ 0(𝕁_1, ϵ+𝕁_2, ϵ+𝕁_1, -ϵ +𝕁_2, -ϵ)=β(h)ℙ^Dirac. This completes the proof of (<ref>). §.§ Appendix C: Proof of Proposition <ref>We show that (𝕋)=span{∂ v_n_*(x;0)/∂ x_1|_Γ}. The proof of the Fredholmness of 𝕋 is the same as Proposition 4.4 in <cit.>. By (<ref>) and the definition of 𝕋, ∫_ΓG(y,x;λ_* )φ(x)dx_2 =𝕋(φ)(y)-i/2v_n_*(y;0)⟨φ(x),v_n_*(x;0)⟩/|μ_n_*^'(0)| -i/2v_m_*(y;0)⟨φ(x),v_m_*(x;0)⟩/|μ_m_*^'(0)|.Then the representation formula (<ref>) indicates that1/2 v_n_*(y;0) =∫_Γ G(y,x;λ_* )∂ v_n_*(x;0)/∂ x_1dx_2=𝕋(∂ v_n_*(x;0)/∂ x_1|_Γ)(y) -i/2v_n_*(y;0)/|μ_n_*^'(0)|∫_Γ∂ v_n_*(x;0)/∂ x_1v_n_*(x;0)dx_2 +i/2v_m_*(y;0)/|μ_m_*^'(0)|∫_Γ∂ v_m_*(x;0)/∂ x_1v_m_*(x;0)dx_2.By Lemma <ref>,v_m_*(x_1, x_2;0)=cv_n_*(-x_1, x_2;0) for some complex number c with |c|=1. On the other hand, recall that μ_n_*^'(0)>0 (Assumption <ref>). Since the periodic operator ℒ is time-reversal symmetric, its dispersion curves are symmetric with respect to the line p=0. Therefore, we can show thatμ_m_*^'(0)= -μ_n_*^'(0).It follows that 1/2v_n_*(y;0)=𝕋(∂ v_n_*(x;0)/∂ x_1|_Γ)(y)-iv_n_*(y;0)/μ_n_*^'(0)·∫_Γ∂ v_n_*(x;0)/∂ x_1v_n_*(x;0)dx_2.Then, by Lemma <ref>, 𝕋(∂ v_n_*(x;0)/∂ x_1|_Γ)(y) =1/2v_n_*(y;0)+iv_n_*(y;0)/μ_n_*^'(0)·i/2μ_n_*^'(0)=0,which indicates that (𝕋)⊃span{∂ v_n_*(x;0)/∂ x_1|_Γ}. We next show that (𝕋)⊂span{∂ v_n_*(x;0)/∂ x_1|_Γ}. Note that the following decomposition holds for all φ∈H̃^-1/2(Γ)φ=⟨φ,v_n_*(x;0)⟩/iμ_n_*^'(0)/2∂ v_n_*(x;0)/∂ x_1|_Γ+φ̃with⟨φ̃,v_n_*(x;0)⟩=0.Indeed, if we set φ̃=φ-⟨φ,v_n_*(x;0)⟩/iμ_n_*^'(0)/2∂ v_n_*(x;0)/∂ x_1|_Γ, then Lemma <ref> indicates that ⟨φ̃,v_n_*(x;0)⟩=⟨φ,v_n_*(x;0)⟩-⟨φ,v_n_*(x;0)⟩ =0. 
Thus, to prove (𝕋)⊂span{∂ v_n_*(x;0)/∂ x_1|_Γ}, it's sufficient to show that φ=0 if we have φ∈(𝕋) and ⟨φ,v_n_*(x;0)⟩=0. Assume that this is the case. Then(<ref>)-(<ref>) and (<ref>) imply that∫_Γ G_0^+(y,x;λ_* )φ(x)dx_2=iv_m_*(y;q_*)/|μ_m_*^'(q_*)|∫_Γφ(x)v_m_*(x;q_*)dx_2+i/2v_n_*(y;0)/|μ_n_*^'(0)|∫_Γφ(x)v_n_*(x;0)dx_2 -i/2v_m_*(y;0)/|μ_m_*^'(0)|∫_Γφ(x)v_m_*(x;0)dx_2.By a similar argument as in Case 1 of the proof of Proposition <ref>, we see that φ∈(𝕋) implies that⟨φ,v_n_*(x;-q_*)⟩= ⟨φ,v_m_*(x;q_*)⟩=0. In addition, since ⟨φ(·),v_n_*(·;0)⟩ =0, Lemma <ref> gives ⟨φ(·),v_m_*(·;0)⟩=0. Thus,∫_Γ G_0^+(y,x;λ_* )φ(x)dx_2=0, y∈Γ.By defining v(y):=∫_Γ G_0^+(y,x;λ_* )φ(x)dx_2 for y∈Ω^right, we see v|_Γ=0. We claim that v(x)≡ 0 for x∈Ω^right. Indeed, if we consider the odd extension of v(x) to x∈Ωṽ(x_1,x_2)={ v(x_1,x_2), x_1≥ 0, -v(-x_1,x_2), x_1<0, .then both ṽ and ∇ṽ are continuous across the interface Γ, and ṽ(x) decays exponentially as |x_1|→ +∞. Thus ṽ_L^2(Ω)<∞ and (ℒ-λ_*)ṽ=0. However, Assumption <ref> implies that λ_* is not an eigenvalue of ℒ. Therefore v(x)≡ 0. Moreover,(<ref>) and the identity ⟨φ(·),v_n_*(·;0)⟩=⟨φ(·),v_m_*(·;q_*)⟩=0 imply that ∫_Γ G(y,x;λ_* )φ(x)dx_2=0, y∈Ω^right.By applying the operator ℒ-λ_* to both sides of the above equation,we get φ=0. Hence we conclude that (𝕋)=span{∂ v_n_*(x;0)/∂ x_1|_Γ}.plain
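As a side remark, the elementary limit that identifies β(h) at the end of Appendix B can be verified directly. The following computation is only a sketch: it assumes |h|<|t_*| (so that neither denominator vanishes on the interval of integration) and writes α_* for α_{n_*}:

\[
\begin{aligned}
\frac{1}{\pi}\int_0^{\epsilon^{1/3}}\Bigl(\frac{1}{\epsilon h+\sqrt{\alpha_*^2p^2+|t_*|^2\epsilon^2}}
 +\frac{1}{\epsilon h-\sqrt{\alpha_*^2p^2+|t_*|^2\epsilon^2}}\Bigr)dp
 &= -\frac{1}{\pi}\int_0^{\epsilon^{1/3}}\frac{2\epsilon h}{\alpha_*^2p^2+(|t_*|^2-h^2)\epsilon^2}\,dp\\
 &= -\frac{2h}{\pi\,\alpha_*\sqrt{|t_*|^2-h^2}}
    \arctan\Bigl(\frac{\alpha_*\,\epsilon^{-2/3}}{\sqrt{|t_*|^2-h^2}}\Bigr)\\
 &\xrightarrow[\epsilon\to 0]{}\;-\frac{h}{\alpha_*\sqrt{|t_*|^2-h^2}}
  \;=\;-\frac{1}{|t_*|\,\alpha_*}\,\frac{h}{\sqrt{1-h^2/|t_*|^2}}\;=\;\beta(h).
\end{aligned}
\]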
http://arxiv.org/abs/2310.17964v1
{ "authors": [ "Jiayu Qiu", "Hai Zhang" ], "categories": [ "math-ph", "math.MP" ], "primary_category": "math-ph", "published": "20231027082607", "title": "On the bifurcation of a Dirac point in a photonic waveguide without band gap openning" }
Optimality of a refraction strategy subject to Parisian ruin]Optimality of a refraction strategy in the optimal dividends problem with absolutely continuous controls subject to Parisian ruinLocas]Félix Locas Département de mathématiques, Université du Québec à Montréal (UQAM), 201 av. Président-Kennedy, Montréal (Québec) H2X 3Y7, Canada [email protected], [email protected]]Jean-François RenaudWe consider de Finetti's optimal dividends problem with absolutely continuous strategies in a spectrally negative Lévy model with Parisian ruin as the termination time. The problem considered is essentially a generalization of both the control problems considered by Kyprianou, Loeffen & Pérez <cit.> and by Renaud <cit.>. Using the language of scale functions for Parisian fluctuation theory, and under the assumption that the density of the Lévy measure is completely monotone, we prove that a refraction dividend strategy is optimal and we characterize the optimal threshold. In particular, we study the effect of the rate of Parisian implementation delays on this optimal threshold. [ [ January 14, 2024 ==================== § INTRODUCTION In 2007, Avram, Palmowski & Pistorius <cit.> kickstarted a string of literature concerned with Bruno de Finetti's control problem <cit.> for spectrally negative Lévy processes (SNLPs). Using results from fluctuation theory for SNLPs, they studied this optimal dividends problem consisting in finding the optimal strategy maximizing the withdrawals made up to ruin. In particular, and inspired by results obtained in simpler models, they wanted to answer the following question: when is a barrier strategy optimal? In a pioneering follow-up work, Loeffen <cit.> gave a clear and satisfactory sufficient condition on the Lévy measure for a barrier strategy to be optimal. In words, if the Lévy measure admits a completely monotone density, then a barrier strategy is optimal. Later, this condition was improved first by Kyprianou, Rivero & Song <cit.> and then further improved by Loeffen & Renaud <cit.>. To the best of our knowledge, the condition used in <cit.> is the mildest condition known to date; it says that, if the tail of the Lévy measure is a log-convex function, then a barrier strategy is optimal.The stochastic control problem discussed in the previous paragraph is a singular control problem. It is one of the three versions of this classical control problem, the other two being the impulse problem and the absolutely continuous problem. See <cit.> for a study of these three problems in a Brownian model. In the impulse problem, the analog of a barrier strategy is an (a,b)-strategy. It is interesting to note that again, if the tail of the Lévy measure is a log-convex function, then a certain (a,b)-strategy is optimal; see in sequence <cit.>, <cit.> and <cit.> for details and discussions on this matter. The third classical variation consists in restricting the set of admissible controls to absolutely continuous (with respect to the Lebesgue measure) strategies. In that case, the question becomes: when is a refraction strategy optimal? This problem has been studied, in a spectrally negative Lévy model, by Kyprianou, Loeffen & Pérez <cit.> and the strongest condition of having a Lévy measure with a completely monotone density was again needed for a refraction strategy to be optimal. 
Once more, to the best of our knowledge, no one has been able yet to improve this condition.In the last fifteen years or so, fluctuation theory for SNLPs was further developed by the addition of so-called Parisian ruin identities; see for example landriault-et-al_2011, loeffen-et-al_2013,albrecher-et-al_2016, lkabous-renaud_2019. Simply said, in Parisian fluctuation theory, a barrier is said to have been crossed once it has been crossed for a given amount of time, known as a Parisian implementation delay. Naturally, studies have started to appear on the impact of these delays, used in the recognition of ruin, on the maximization of dividends, especially on the optimality of a barrier strategy. The singular version of this problem has been studied in <cit.> and in <cit.> using a Parisian termination time with deterministic delays and exponential delays, respectively. Very recently, in <cit.>, the impulse control problem with Parisian exponential ruin has been considered. When using exponential Parisian delays, the condition of log-convexity of the tail of the Lévy measure was used as a sufficient condition for a barrier strategy (resp. an (a,b)-strategy) to be optimal in the singular problem (resp. impulse problem).In this paper, we analyze de Finetti's control problem for absolutely continuous strategies in a SNLP model with exponential Parisian ruin. To solve this problem, we walk in the footsteps of <cit.> and <cit.> by using (and generalizing) methodologies and results used in those two papers. More precisely, we will show that a threshold strategy is optimal if the Lévy measure of the underlying SNLP admits a completely monotone density. §.§ Model, control problem and main result We consider a model in which the uncontrolled surplus process X = X_t : t ≥ 0 is a spectrally negative Lévy process (SNLP), i.e., X is a Lévy process with no positive jumps. More details on SNLPs are provided in Section <ref>.For a given control π, characterized by a process L^π = L_t^π : t ≥ 0 where L^π_t is the cumulative amount of dividends paid up to time t, the controlled process is given by U^π = U_t^π : t ≥ 0 and defined by U_t^π = X_t - L_t^π.Let us now define the Parisian termination time used in our model. First, fix a Parisian rate p > 0. Then, the ruin time corresponding to a strategy π is the following Parisian first-passage time:κ_p^π := inft > 0 : t - g_t^π > 𝐞_p^g_t^π andU_t^π < 0,where g_t^π := sup0 ≤ s ≤ t : U_s^π≥ 0, and where 𝐞_p^g_t^π is an exponential random variable with mean 1/p and independent of the sigma-algebra ⋁_t ≥ 0_t. Note that there is a new exponential random variable associated to each excursion (of U^π) below zero.As mentioned above, we are interested in absolutely continuous strategies. Therefore, we will consider strategies π such thatL_t^π = ∫_0^t l_s^πds ,t ≥ 0,where l_t^π : t ≥ 0 is an adapted and nonnegative stochastic process. Note that, as a consequence, we have that L_t^π : t ≥ 0 is an adapted and nondecreasing stochastic process such that L^π_0=0. Let us fix a maximal dividend rate K > 0.A strategy π is said to be admissible if it is such that 0 ≤ l_t^π≤ K, for all 0 ≤ t ≤κ_p^π, and if t ↦ l_t^π1_U_t^π < 0≡ 0. Let us denote by Π_K the set of these admissible strategies. The value of K must be less than the drift of the SNLP X when this process has paths of bounded variation. More details in Section <ref>. 
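To make the Parisian ruin time κ_p^π and the admissibility constraints concrete, the following Monte-Carlo sketch simulates the controlled surplus for a simple admissible strategy that pays dividends at the maximal rate K whenever the surplus exceeds a fixed level b (a refraction strategy, studied in detail below). It assumes a Cramér–Lundberg surplus process (drift minus compound Poisson claims) discretized on an Euler grid; all parameter values are illustrative and are not taken from the rest of the paper.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: X_t = c*t - compound Poisson(lam) with exponential claims of mean jump_mean.
c, lam, jump_mean = 2.0, 1.0, 1.0     # drift, claim arrival rate, mean claim size
K, b = 0.5, 1.0                        # maximal dividend rate (K < c) and payout level
p = 2.0                                # Parisian rate: each excursion below 0 gets an Exp(p) delay
x0, T, dt = 1.0, 200.0, 1e-3           # initial surplus, time horizon, Euler step

def parisian_ruin_time(x0):
    """Simulate U_t = X_t - int_0^t l_s ds with l_s = K*1{U_s > b}; return the Parisian ruin time."""
    u, t, clock = x0, 0.0, None        # clock = remaining delay of the current excursion below 0
    while t < T:
        claim = rng.exponential(jump_mean) if rng.random() < lam * dt else 0.0
        rate = K if u > b else 0.0     # admissible: 0 <= l_t <= K and l_t = 0 when U_t < 0
        u += (c - rate) * dt - claim
        t += dt
        if u < 0.0:
            if clock is None:
                clock = rng.exponential(1.0 / p)   # fresh exponential delay for this excursion
            clock -= dt
            if clock <= 0.0:
                return t               # the excursion below 0 outlived its delay: Parisian ruin
        else:
            clock = None               # the surplus recovered before the delay expired
    return np.inf                      # no ruin observed before the horizon T

ruin_times = np.array([parisian_ruin_time(x0) for _ in range(200)])
print("estimated ruin probability before T:", np.mean(np.isfinite(ruin_times)))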
Finally, let us fix a discount factor q>0 and let the performance function of a strategy π be given byV_π(x) = x[∫_0^κ_p^π e^-qtdL_t^π] = x[∫_0^κ_p^π e^-qt l^π_t dt ] ,x ∈ .Consequently, the value function of this problem isV(x) = sup_π∈Π_K V_π(x),x ∈ .We want to find an optimal strategy π^* ∈Π_K, that is a dividend strategy such that V_π^*(x) = V(x) for all x ∈.In order to state our main result, we need to introduce the sub-family of refraction strategies. For a given b ≥ 0, let π_b be the refraction strategy at level b, i.e., the strategy paying dividends at the maximum rate K when the (controlled) surplus process is above level b and not paying dividends when the surplus process is below level b. More precisely, for this strategy, we writedU_t^b = dX_t - K1_U_t^b > bdt ,for the controlled process, from which we deduce the dividend rate l^b_t = K 1_U_t^b > b, and we write κ_p^b insted of κ_p^π_b for the ruin time. Note that, for any b ≥ 0, the refraction strategy at level b is admissible, i.e., π_b ∈Π_K.We are now ready to state the main result of this paper.If the Lévy measure of X has a completely monotone density, then there exists an optimal threshold b^* ≥ 0 such that π_b^* is an optimal strategy. In particular, if the Parisian implementation delays are large, then b^*=0. In fact, this is a preliminary version of our full solution to this control problem. In Theorem <ref>, we provide an explicit expression for the value function V, including a characterization of the optimal threshold b^*, and we explain what is meant by large delays. §.§ Contributions and outline of the paper The control problem described above can be considered as a generalization of the problem studied in <cit.> and the one studied in <cit.>. Indeed, in principle, if p →∞, then we should recover the problem (and results) in <cit.>, while if K →∞, then we should recover the problem in <cit.>; more on this at the end of Section <ref>.To prove Theorem <ref>, we combine ideas from the abovementioned two papers, using a now standard methodology for optimal dividends problems in spectrally negative Lévy models. More precisely, we compute first the performance function of an arbitrary refraction strategy. Second, we find the optimal refraction level b^* ≥ 0, i.e., identify the best refraction strategy π_b^* within the sub-family of refraction strategies. Third, using a verification lemma, we prove that this refraction strategy π_b^* is in fact an optimal admissible strategy for our control problem. Even if we use the same problem-solving methodology, there are several technical difficulties that arise in our problem, which lead to new results. For example: * in Proposition <ref>, we compute the performance function of an arbitrary refraction strategy subject to Parisian ruin, which is a function defined on , and to do so a new technical result is needed (see Lemma <ref>); * in Lemma <ref>, when caracterizing the optimal refraction level, we derive important analytical properties for a particular function (defined in Equation (<ref>)), which go further than the case with classical ruin; * the proof of Proposition <ref> is more involved than the corresponding one in <cit.> and, furthermore, we provide an alternative proof in the case b^* = 0. The rest of the paper is organized as follows. In the next section, we present mathematical objects and some technical results needed to solve our control problem. 
Then, in Section <ref>, we compute the performance function of an arbitrary refraction strategy and identify the optimal refraction level. Also, we relate our results to those obtained in <cit.> and in <cit.>. Finally, in Section <ref>, we provide a verification lemma for the control problem and then use it to finish the proofs of Theorem <ref> and Theorem <ref>.§ PRELIMINARY RESULTSNow, let us be a little more specific about our model and present some of the mathematical objects needed for our solution.First, some terminology and notation. For a function f →, we write f(∞) := lim_x →∞ f(x) if this limit exists. Also, we say that f is completely monotone if (-1)^n f^(n)(x) ≥ 0, for all x, where f^(n) stands for the n-th derivative of f, if n ≥ 1, and where f^(0)=f.Implicitly, a completely monotone function f is nonnegative and infinitely differentiable. Finally, f is said to be log-convex if log(f) is convex. §.§ Spectrally negative Lévy processes Let (Ω, , (_t)_t ≥ 0, 𝐏) be a filtered probability space on which X = X_t : t ≥ 0 is a spectrally negative Lévy process (SNLP) with Lévy triplet (γ, σ, ν), where γ∈, σ≥ 0 and where the Lévy measure ν is such that ν(-∞, 0) = 0 and∫_(0, ∞) (1 ∧ z^2) ν(dz) < ∞ .Recall that, for an SNLP, we have the existence of a Laplace exponent given byψ(λ) = γλ + 1/2σ^2 λ^2 - ∫_(0, ∞) (1 - e^-λ z - λ z 0 < z ≤ 1) ν(dz) , λ≥ 0 .As X is a strong Markov process, we denote by 𝐏_x the law of X when starting from X_0 = x and by 𝐄_x the corresponding expectation. When x = 0, we write 𝐏 and 𝐄, as in the definition of the Laplace exponent above. It is well known that ψ is a strictly convex function such that ψ(0) = 0 and ψ(∞) := lim_λ→∞ψ(λ) = ∞, with right-inverse function given byΦ(q) = supλ≥ 0 : ψ(λ) = q. Recall also that if ν≡ 0, then X is a Brownian motion with drift, while if σ=0 and ν(0, ∞) < ∞, then X is a compound Poisson process with drift. In this direction, we know that X has paths of bounded variation if and only if σ = 0 and ∫_0^1 z ν(dz) < ∞. More precisely, if X has paths of bounded variation, then we can writeX_t = ct - S_t ,where c := γ +∫_0^1 z ν(dz)>0 and S = S_t : t ≥ 0 is a driftless subordinator.This parameter c is what we have called the drift of X in Remark <ref>. For a given K > 0, we define another SNLP Y = Y_t : t ≥ 0 by Y_t = X_t - Kt. It is easy to see that the Laplace exponent of Y is given by ψ_K(λ) = ψ(λ) - K λ. Accordingly, we denote its right-inverse by Φ_K. As a standing assumption throughout this paper, we assume that, if X has paths of bounded variation, then K ∈ (0,c). This is to avoid that the paths of Y, which are also of bounded variation, be monotone decreasing.Finally, recall that, for any β > 0, it is known that the process ℰ(β) = ℰ_t(β) : t ≥ 0, defined asℰ_t(β) = exp{β X_t - ψ(β) t } ,is a unit-mean 𝐏-martingale with respect to the filtration (ℱ_t)_t ≥ 0. Hence, it may be used to define the following Esscher transformation:.d𝐏^β/d𝐏|_ℱ_t = ℰ_t(β),t ≥ 0.Under the measure 𝐏^β, it can be shown that the process (X, 𝐏^β) is also a SNLP.For more details on spectrally negative Lévy processes, we refer the reader to <cit.>.§.§ Scale functions and fluctuation identities For any a ∈, we define the following first-passage times:τ_a^- = inft ­> 0 : X_t < aandτ_a^+ = inft > 0 : X_t > a .Correspondingly, for Y, we will write ν_a^- and ν_a^+. 
Also, let κ_p be the Parisian ruin time of X, i.e., the random time defined in (<ref>) with the (null) strategy π̂ such that L^π̂_t = 0 for all t ≥ 0.To study those first-passage times, we will need different families of scale functions. First, for q ≥ 0, the q-scale function of X is defined as the continuous function on [0,∞ ) with Laplace transform ∫_0^∞ e^-θ y W^(q)(y) dy = 1/ψ(θ)-q , for θ>Φ(q).From this definition, we can show thatW^(q)(0) = 1/c,X has paths of bounded variation,0,X has paths of unbounded variation.It is known that this function is positive, strictly increasing and differentiable almost everywhere on (0, ∞). We extend W^(q) to the whole real line by setting W^(q)(x)=0 for x<0. We will write W=W^(0) when q=0. Second, for q,θ≥ 0, setZ_q (x,θ) = e^θ x( 1 - (ψ (θ)-q) ∫_0^x e^-θ y W^(q)(y) dy ) ,x ∈.Note that, for x ≤ 0, we have Z_q (x,θ)=e^θ x. In the following, we will use these functions only with θ = Φ(p+q) and θ = 0. Consequently, let us defineZ_q,p(x) := Z_q(x, Φ(p+q)) = e^Φ(p+q) x(1 - p ∫_0^x e^-Φ(p+q) y W^(q)(y) dy ) ,and Z^(q)(x) := Z_q(x, 0) = 1 + q ∫_0^x W^(q)(y) dy. Similarly, let 𝕎^(q) and ℤ^(q) be the corresponding functions for Y.All these scale functions appear in various fluctuation identities for X and Y. For example, it is well known that, for x ∈ (-∞,b], we havex[e^-q τ_b^+τ_b^+ < τ_0^-] = W^(q)(x)/W^(q)(b)andx[e^-q τ_b^+τ_b^+ < κ_p] = Z_q,p(x)/Z_q,p(b) ,as well asx[e^-q τ_0^-τ_0^- < τ_b^+] = Z^(q)(x) - Z^(q)(b)/W^(q)(b)W^(q)(x)andx[e^-q τ_0^-τ_0^- < ∞] = Z^(q)(x) - q/Φ(q) W^(q)(x).Clearly, we have equivalent identities for the stopping times related to Y in terms of the corresponding scale functions. §.§ Convexity properties of scale functions Some analytical properties of scale functions are going to play a fundamental role in the following sections.It is known that, if the Lévy measure ν has a density, then W^(q) is continuously differentiable on (0, ∞); see, e.g., <cit.>. In Section <ref>, we will need this density to be completely monotone. Suppose that the Lévy measure of X has a completely monotone density. For q > 0, there exists a completely monotone function f such thatW^(q)(x) = 1/ψ^'(Φ(q)) e^Φ(q)x - f(x),x ≥ 0.Moreover, W^(q) ' is a log-convex function on (0, ∞). If the Lévy measure ν has a completely monotone density, then W^(q) ' is infinitely differentiable and it is a strictly convex function on (0, ∞) that is decreasing on (0, a^*) and increasing on (a^*, ∞), wherea^* := supx ≥ 0 : W^(q) '(x) ≤ W^(q) '(y),for ally ≥ 0 . Recall from avram-palmowski-pistorius_2007,loeffen_2008 that a^* is the optimal barrier level in the classical singular version of de Finetti's optimal dividends problem.As we assume that p>0, we can writeZ_q,p (x) = p ∫_0^∞ e^-Φ(p+q) y W^(q)(x+y) dy ,x ∈ .Then, for x > 0, we haveZ_q,p^' (x) = p ∫_0^∞ e^-Φ(p+q) y W^(q)'(x+y) dy .Clearly, x ↦ Z_q,p (x) is a nondecreasing continuous function.The following result is lifted from <cit.> and will also be needed in Section <ref>. If W^(q) ' is log-convex, then Z_q,p^' is log-convex.Combining Lemma <ref> and Lemma <ref>, we deduce that, if the Lévy measure ν has a completely monotone density, then Z_q,p^' is infinitely differentiable and it is a log-convex function on (0, ∞) that is decreasing on (0, c^*) and increasing on (c^*, ∞), wherec^* = supx ≥ 0 : Z_q,p'(x) ≤ Z_q,p'(y),for ally ≥ 0 .Note that c^* < ∞ since Z_q,p^'(∞) = ∞, the latter being inherited from W^(q) ' via (<ref>). For more details on these arguments, see <cit.>. 
Note that these properties for Z_q,p^' are also true under the mildest condition that the tail of the Lévy measure ν is log-convex. However, in Section <ref>, we will also need the representation in (<ref>) of Lemma <ref>. § REFRACTION STRATEGIES In this section, as we are guessing that a refraction strategy should be optimal, we are first going to compute the performance function of an arbitrary refraction strategy. Let the performance function of π_b, that is the refraction strategy at level b, be given byV_b(x) = x[∫_0^κ_p^b K e^-qtU_t^b > bdt ],x ∈.For b ≥ 0, we haveV_b(x) = Z_q,p(x)/h_p(b)x ≤ b,Z_q,p(x) + K∫_b^x 𝕎^(q)(x-y) ( Z_q,p'(y)-h_p(b) ) dy/h_p(b)x ≥ b,whereh_p(b) = Φ_K(q) ∫_0^∞ e^-Φ_K(q)y Z_q,p'(b+y) dy . For x ≤ b, we getV_b(x) = x[e^-q τ_b^+τ_b^+ < τ_p^b] V_b(b) = Z_q,p(x)/Z_q,p(b) V_b(b),where we used the strong Markov property, the fact that X is a SNLP and the Parisian fluctuation identity given by (<ref>). Now, for x ≥ b. We have, using the strong Markov property again, V_b(x)= x[∫_0^ν_b^- Ke^-qtdt ] + x[e^-q ν_b^-V_b(Y_ν_b^-) ν_b^- < ∞] = K/q(1 - x[e^-q ν_b^-ν_b^- < ∞]) + x[e^-q ν_b^- Z_q,p(Y_ν_b^-) ν_b^- < ∞]/Z_q,p(b)V_b(b). Using the classical fluctuation identity given by (<ref>) for Y and the identity given by (<ref>) of Lemma <ref>, and noticing thatK 𝕎^(q)(x-b) ∫_0^∞ e^-Φ_K(q) z Z_q,p^'(b+z) dz = K/Φ_K(q)𝕎^(q)(x-b) h_p(b),it results thatV_b(x) = Z_q,p(x)/Z_q,p(b) V_b(b)x ≤ b,K/q(1 - ℤ^(q)(x-b) + q/Φ_K(q)𝕎^(q)(x-b)) + G_b(x)V_b(b)/Z_q,p(b)x ≥ b,whereG_b(x) := p ∫_0^∞ e^-Φ(p+q)y w_b^(q)(x;-y) dy -K/Φ_K(q)𝕎^(q)(x-b) h_p(b),x ≥ b ≥ 0.and wherew_b^(q)(x;y) := W^(q)(x-y) + Kx ≥ b∫_b^x 𝕎^(q)(x-z) W^(q) '(z-y) dz,x, y ∈, b ≥ 0.Note that we have w_b+y^(q)(b+y; 0) = w_b^(q)(x;-y), for all x,y ∈ and for all b ≥ 0.Now, we must compute V_b(b). First, let us assume X has paths of bounded variation. We can use (<ref>) to solve for V_b(b) in (<ref>). Before doing so, note that we have ℤ^(q)(0) = 1 andG_b(b) = Z_q,p(b) - K/Φ_K(q)𝕎^(q)(0) h_p(b).Consequently, V_b(b) = (K/Φ_K(q)𝕎^(q)(0)) (1 - G_b(b)/Z_q,p(b))^-1 = Z_q,p(b)/Φ_K(q)∫_0^∞ e^-Φ_K(q) z Z_q,p^'(b+z) dz = Z_q,p(b)/h_p(b). Now, assume X has paths of unbounded variation. In this case, we get the same expression for V_b(b) by taking the following limiting procedure. It is well known (see <cit.>) that there exists a strongly approximating sequence X^n, of SNLPs with paths of bounded variation, that converges to X. More precisely, for all t > 0, lim_n →∞sup_0 ≤ s ≤ t|X_s^n - X_s| = 0 .In this case, it is also well known that, as n tends to infinity, the Laplace exponent of X^n converges to the Laplace exponent of X, which means, by the continuity theorem for Laplace transforms, that the corresponding q-scale functions also converge.In conclusion, for x ≥ b, we haveV_b(x)= K/q(1 - ℤ^(q)(x-b)) + p ∫_0^∞ e^-Φ(p+q)y w_b^(q)(x;-y) dy/h_p(b)= -K∫_0^x-b𝕎^(q)(y) dy + Z_q,p(x) + K∫_b^x 𝕎^(q)(x-y) Z_q,p^'(y) dy/h_p(b).Putting all the pieces together, the result follows. 
In particular, if b=0, then we haveV_0(x) = Φ_K(q) - Φ(p+q)/Φ_K(q) (Φ(p+q) - p/K ) e^Φ(p+q) x if x ≤ 0,-K ∫_0^x𝕎^(q)(y) dy+ Φ_K(q) - Φ(p+q)/Φ_K(q) (Φ(p+q) - p/K )( Z_q,p(x) + K∫_0^x 𝕎^(q)(x-y)Z_q,p'(y) dy )if x ≥ 0,sinceh_p(0) = Φ_K(q) ( Φ(p+q) - p/K/Φ_K(q) - Φ(p+q)) .This equality in (<ref>) follows readily from the following Laplace transform: for θ>Φ(q),∫_0^∞ e^-θ y Z_q,p(y) dy = ( 1/ψ(θ)-q) ψ(θ)-(p+q)/θ-Φ(p+q) .The details are left to the reader.§.§ The optimal threshold We now want to find the refraction strategy π_b^* that will outperform all other refraction strategies. Looking at (<ref>) and the definitions of both a^* in (<ref>) and c^* in (<ref>), a good candidate for the optimal threshold isb^* := supx ≥ 0 : h_p(x) ≤ h_p(y),for ally ≥ 0 . Before analyzing this last definition, we need to find out more about the analytical properties of h_p. If the Lévy measure of X has a completely monotone density, then h_p is a nonnegative, infinitely differentiable and strictly convex function on (0, ∞) such that h_p(∞) = ∞. In particular, a minimizer of h_p exists and is unique.Under our assumption, h_p is nonnegative and infinitely differentiable because Z_q,p' is nonnegative and infinitely differentiable. Next, recall that combining Lemma <ref> and Lemma <ref>, we get that Z_q,p^' is log-convex on (0, ∞). As a consequence, we can prove that h_p is also log-convex on (0, ∞), using the properties of log-convex functions, as in <cit.> for Z_q,p^'. In particular, we have that h_p is strictly convex on (0, ∞). Next, we prove that h_p(∞) = ∞. Using integration by parts for the first inequality, we can writeh_p(b)≥ (Φ_K(q))^2 ∫_0^∞ e^- Φ_K(q)u(Z_q,p(u+b) - Z_q,p(b)) du = p (Φ_K(q))^2 ∫_0^∞ e^- Φ_K(q) u{∫_0^∞ e^-Φ(p+q)y[ W^(q)(u+b+y) - W^(q)(b+y) ] dy }du =p (Φ_K(q))^2 ∫_0^∞ e^- Φ_K(q) u×{∫_0^∞ e^-Φ(p+q)y e^Φ(q)(b+y)[e^Φ(q)uW_Φ(q)(u+b+y) - W_Φ(q)(b+y)] dy }du ≥ p (Φ_K(q))^2 ∫_0^∞ e^- Φ_K(q) u{∫_0^∞ e^-Φ(p+q)y e^Φ(q)(b+y) W_Φ(q)(b+y)[e^Φ(q)u - 1] dy }du = (Φ_K(q))^2 [ ∫_0^∞ e^-Φ_K(q)u(e^Φ(q)u - 1) du] Z_q,p(b) = Φ_K(q) Φ(q)/Φ_K(q) - Φ(q) Z_q,p(b),where W_Φ(q) is the 0-scale function for the SNLP X with respect to the probability measure 𝐏_x^Φ(q) given by the Esscher transformation in (<ref>). Since Φ_K(q) > Φ(q) and Z_q,p(∞) = ∞, we can conclude that h_p(∞) = ∞.In particular, this last lemma says that b^*, as given by (<ref>), is a nonnegative and finite quantity. Consequently, if the Lévy measure ν has a completely monotone density, then h_p is strictly decreasing on (0, b^*) and strictly increasing on (b^*, ∞).In the next proposition, we give some important properties of this (candidate) optimal threshold. Assume the Lévy measure of X has a completely monotone density. We have 0 ≤ b^* ≤ c^*. If p ≤ p_min, where this critical Parisian rate value p_min is given byΦ_K(q) - Φ(p_min+q)=0 ,then b^* = 0. Otherwise, b^* > 0 if and only if(Φ(p+q))^2/p - Φ(p+q)W^(q)(0) > Φ_K(q)(1/K - W^(q)(0)). First, note that h_p(b) can be written as the expectation of the random variable Z_q,p^'(𝐞+b), where 𝐞 is an (independent) exponential random variable with mean 1/Φ_K(q). Since Z_q,p^' is increasing on (c^*, ∞) under our assumption, then so is h_p. This proves that b^* ≤ c^*.Next, let us prove that b^* > 0 if and only ifh_p(0) < Z_q,p'(0).Ash_p(b) = Φ_K(q) ∫_0^∞ e^-Φ_K(q) y Z_q,p'(b+y) dy = Φ_K(q) e^Φ_K(q) b∫_b^∞ e^- Φ_K(q) z Z_q,p'(y) dy ,then we haveh_p'(b) = Φ_K(q)(h_p(b) - Z_q,p'(b)) .By the strict convexity of h_p, it is clear that b^* > 0 if and only h_p'(0) < 0. 
We deduce from (<ref>) that b^* > 0 if and only if (<ref>) is verified. Finally, when b^* > 0, a solution to (<ref>) exists and is unique, because from (<ref>), the solutions to (<ref>) coincides with the values b such that h_p'(b) = 0, and we know from the properties of h_p given in Lemma <ref> that such a point exists and is unique. Now, let us define C(p) := Φ_K(q) - Φ(p+q). Note that C(p) is strictly decreasing on (0, ∞), thanks to the fact that p ↦Φ(p+q) is strictly increasing. Since C(0+) = Φ_K(q) - Φ(q) > 0 and since C(-∞) = -∞, by the Intermediate Value Theorem, there exists a unique value p_min > 0 such that C(p_min) = 0, which is equivalent to the definition given in (<ref>).Now, on one hand, if p > p_min, then C(p)=Φ_K(q) - Φ(p+q) < 0 by the definition of p_min. Also, we have from Equation (<ref>) thath_p(0) = Φ_K(q) ( Φ(p+q) - p/K/Φ_K(q) - Φ(p+q)) .Note that Φ(p+q) - p/K and Φ_K(q) - Φ(p+q) have the same sign. On the other hand, using the definition of Z_q,p in (<ref>), we deduce that Z_q,p^'(0)=Φ(p+q)-p W^(q)(0). Elementary algebraic manipulations yield an equivalence between h_p(0) < Z_q,p^'(0) and(Φ(p+q))^2/p - Φ(p+q)W^(q)(0) > Φ_K(q)(1/K - W^(q)(0)). In the last proof, we have obtained that, if b^*>0, then it is the unique solution ofh_p(b) = Z_q,p'(b),b > 0. In conclusion, we have the following corollary to Proposition <ref>:If b^*>0, thenV_b^*(x) = Z_q,p(x)/Z_q,p^'(b^*)x ≤ b^*,Z_q,p(x) + K ∫_b^*^x 𝕎^(q)(x-y) ( Z_q,p'(y)-Z_q,p^'(b^*) ) dy/Z_q,p^'(b^*)x ≥ b^*.It is interesting to note that V_b^* is continuously differentiable at x=b^* and that V_b^*^'(b^*)=1. §.§ Relationships with the limiting control problems We conclude this section by showing that the previous results generalize those obtained for the two limiting control problems considered in <cit.>) and in <cit.>. On one hand, if p →∞, then the delays become negligible, which means we are back to classical ruin. Mathematically, when p →∞, we have(Φ(p+q))^2/p - Φ(p+q)W^(q)(0) = Φ(p+q)/pZ_q,p^'(0) ⟶ W^(q) '(0),which means that (<ref>) becomesW^(q) '(0) > Φ_K(q) (1/K - W^(q)(0)) .The latter coincides with Lemma 3 in <cit.>.On the other hand, if K →∞, then the dividend rates are allowed to take very large values as in the singular control problem. Mathematically, when K →∞, note that h_p(0) increases (resp. decreases) to Z_p,q^'(0) if and only if Z_q,p”(0) < 0 (resp. Z_q,p”(0) > 0). In that case, (<ref>) becomesZ_q,p”(0) < 0 .The latter coincides with Proposition 2 in <cit.>.§ VERIFICATION OF OPTIMALITY In Section <ref>, we identified a candidate optimal strategy for our control problem, namely the refraction strategy at optimal level b^*. As described in the introduction, we will now prove that π_b^* is indeed an optimal admissible strategy using a verification lemma.In what follows, we say that a function g is sufficiently smooth if it is continuously differentiable onwhen X has paths of bounded variation, and if it is twice continously differentiable onwhen X has paths of unbounded variation. Let Γ be the following operator defined on sufficiently smooth functions:Γ g(x) = γ g'(x) + σ^2/2 g”(x) + ∫_(0, ∞)(g(x-z) - g(x) + g'(x)z 1_(0, 1])ν(dz). 
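Before turning to the verification step, the following numerical sketch illustrates how the candidate threshold of Section 3.1 can be located in the Cramér–Lundberg model with exponential claims, for which W^{(q)} admits the classical two-exponential closed form. The parameter values are purely illustrative; b^*>0 is detected through the equivalent condition h_p(0) < Z_{q,p}'(0), and in that case b^* is approximated as the minimizer of h_p on a grid.

import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

# Illustrative Cramér–Lundberg example: X_t = c*t - compound Poisson(lam) with Exp(mu) claims.
c, lam, mu = 1.5, 1.0, 1.0          # drift, jump intensity, claim-size rate
q, p, K = 0.1, 5.0, 0.5             # discount rate, Parisian rate, maximal dividend rate (K < c)

psi   = lambda th: c * th - lam * th / (mu + th)    # Laplace exponent of X
psi_K = lambda th: psi(th) - K * th                 # Laplace exponent of Y = X - Kt

Phi_q  = brentq(lambda th: psi(th) - q, 1e-9, 1e3)          # Phi(q)
Phi_pq = brentq(lambda th: psi(th) - (p + q), 1e-9, 1e3)    # Phi(p+q)
PhiK_q = brentq(lambda th: psi_K(th) - q, 1e-9, 1e3)        # Phi_K(q)

# Two-exponential closed form of W^{(q)} for exponential claims (partial fractions of 1/(psi - q)).
zeta = -brentq(lambda th: psi(th) - q, -mu + 1e-9, -1e-12)  # modulus of the negative root of psi = q
A = (mu + Phi_q) / (c * (Phi_q + zeta))
B = (mu - zeta) / (-c * (zeta + Phi_q))
Wp = lambda x: A * Phi_q * np.exp(Phi_q * x) - B * zeta * np.exp(-zeta * x)   # W^{(q)'}(x)

# Z'_{q,p}(x) and h_p(b) evaluated directly from their integral representations.
Zp = lambda x: p * quad(lambda y: np.exp(-Phi_pq * y) * Wp(x + y), 0, np.inf)[0]
h  = lambda b: PhiK_q * quad(lambda y: np.exp(-PhiK_q * y) * Zp(b + y), 0, np.inf)[0]

cond = h(0.0) < Zp(0.0)              # equivalent to b^* > 0
if cond:
    grid = np.linspace(0.0, 5.0, 101)
    b_star = grid[int(np.argmin([h(b) for b in grid]))]   # b^* is the (unique) minimizer of h_p
else:
    b_star = 0.0
print("h_p(0) < Z'_{q,p}(0):", cond, "   optimal refraction level b* ~", b_star)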
Here is the verification lemma of our control problem.If π∈Π_K is such that its performance function V_π is sufficiently smooth and such thatsup_0 ≤ u ≤ K[ ( Γ - q - p 1_(-∞, 0)(x) ) V_π(x) + u(1 - V_π^'(x))1_[0, ∞)(x)] ≤ 0 ,for all x ∈, then π is an optimal strategy.This verification lemma can be proved by borrowing ideas from both the proofs of Lemma 4 in <cit.> and Lemma 1 in <cit.>. The details are left to the reader.The last step consists in proving that V_b^*, the performance function of the refraction strategy at level b^*, satisfies the (sufficient) conditions given in the verification lemma. It will be achieved by the next lemma and the next proposition. If the Lévy measure of X has a completely monotone density, then V_b^* is sufficiently smooth. Also, we have the following equalities:(Γ - q - p)V_b^*(x) = 0,x < 0,(Γ - q)V_b^*(x) = 0,0 ≤ x ≤ b^*,(Γ - q)V_b^*(x) + K(1 - V_b^*^'(x)) = 0,x > b^*. To prove that V_b^* is sufficiently smooth, it suffices to follow the steps of the proof of Lemma 5 in <cit.> in which W^(q) is replaced by Z_q,p. The first two equations of (<ref>) have already been verified in <cit.>. To prove the third equality, it suffices to follow the steps of the proof of Lemma 6 in <cit.> in which the time of classical ruin is replaced by our Parisian ruin time. As the final step of our verification procedure, we show that V_b^* is a concave function on [0,∞), which is stronger than what is needed.This is where the assumption of complete monotonicity of the density of the Lévy measure is of paramount importance. Indeed, we will use a result of Lemma <ref>, saying that the q-scale function can be written as the difference of an exponential function and a completely monotone function, together with Bernstein's theorem. If the Lévy measure of X has a completely monotone density, then V_b^* is a concave function on [0,∞).In conclusion, here is the full solution to our stochastic control problem. Recall that p_min is given by (<ref>). Fix constants p, q, K > 0. If the Lévy measure of X has a completely monotone density, then: * if p ≤ p_min or if (Φ(p+q))^2/p - Φ(p+q)W^(q)(0) ≤Φ_K(q)(1/K - W^(q)(0)), then the refraction strategy at level b^*=0 is optimal andV(x) = Φ_K(q) - Φ(p+q)/Φ_K(q) (Φ(p+q) - p/K ) e^Φ(p+q) x if x ≤ 0,-K ∫_0^x𝕎^(q)(y) dy+ Φ_K(q) - Φ(p+q)/Φ_K(q) (Φ(p+q) - p/K )( Z_q,p(x) + K∫_0^x 𝕎^(q)(x-y)Z_q,p'(y) dy )if x ≥ 0; * if (Φ(p+q))^2/p - Φ(p+q)W^(q)(0) > Φ_K(q)(1/K - W^(q)(0)), then the refraction strategy at level b^*>0 given by (<ref>) is optimal andV(x) = Z_q,p(x)/Z_q,p^'(b^*)x ≤ b^*,Z_q,p(x) + K ∫_b^*^x 𝕎^(q)(x-y) ( Z_q,p'(y)-Z_q,p^'(b^*) ) dy/Z_q,p^'(b^*)x ≥ b^*. § ACKNOWLEDGEMENTS This work was supported by a Discovery Grant from the Natural Sciences and Engineering Research Council of Canada (NSERC) and a PhD Scholarship from the Fonds de Recherche du Québec - Nature et Technologies (FRQNT). abbrv § NEW IDENTITY The following is a generalization of Lemma 1 in <cit.>.For all x ≥ b ≥ 0, we havex [e^-q ν_b^- Z_q,p(Y_ν_b^-) ν_b^- < ∞]=p ∫_0^∞ e^-Φ(p+q)y w_b^(q)(x;-y) dy -K 𝕎^(q)(x-b) ∫_0^∞ e^-Φ_K(q) z Z_q,p^'(b+z) dz. We have, using Fubini's theorem and the fact that Y_t + y : t ≥ 0 | Y_0 = x has the same law as Y_t : t ≥ 0 | Y_0 = x + y,x [e^-q ν_b^- Z_q,p(Y_ν_b^-) ν_b^- < ∞] = x[e^-q ν_b^-(p ∫_0^∞ e^-Φ(p+q)y W^(q)(Y_ν_b^- + y) dy) ν_b^- < ∞] = p ∫_0^∞ e^-Φ(p+q)yx+y[e^-q ν_b+y^- W^(q)(Y_ν_b+y^-) ν_b+y^- < ∞] dy. 
From Lemma 1 in <cit.>, it is known that, for b ≤ x ≤ c, we havex[e^-q ν_b^- W^(q)(Y_ν_b^-) ν_b^- < ν_c^+] = w_b^(q)(x;0) - 𝕎^(q)(x-b)/𝕎^(q)(c-b) w_b^(q)(c;0). By letting c →∞, using the monotone convergence theorem, we havex [e^-q ν_b^- W^(q)(Y_ν_b^-) ν_b^- < ∞] = w_b^(q)(x;0) - K 𝕎^(q)(x-b) ∫_0^∞ e^-Φ_K(q) y W^(q) ' (z+b) dz. Indeed, when c > b, we havew_b^(q)(c;0)/𝕎^(q)(c-b) = W^(q)(c)/𝕎^(q)(c-b) + K ∫_b^c 𝕎^(q)(c-z)/𝕎^(q)(c-b) W^(q) '(z) dz = W^(q)(c)/𝕎^(q)(c-b) + K ∫_0^c-b𝕎^(q)(c-b-z)/𝕎^(q)(c-b) W^(q) '(z+b) dz = e^Φ(q)cW_Φ(q)(c)/e^Φ_K(q)(c-b)𝕎_Φ_K(q)(c-b) + K ∫_0^c-b e^-Φ_K(q)y𝕎_Φ_K(q)(c-b-z)/𝕎_Φ_K(q)(c-b) W^(q) '(z+b) dz,where W_Φ(q) and 𝕎_Φ_K(q) represent respectively the 0-scale functions for the SNLPs (X, 𝐏_x^Φ(q)) and (Y, 𝐏_x^Φ_K(q)) given by the Esscher transformation (see (<ref>)). Substituting into (<ref>), we get, using Fubini's theorem,x[e^-q ν_b^- Z_q,p(Y_ν_b^-) ν_b^- < ∞] = p ∫_0^∞ e^-Φ(p+q)y{w_b+y^(q)(x+y;0) - K 𝕎^(q)(x-b) ∫_0^∞ e^-Φ_K(q)z W^(q) '(b+y+z) dz}dy = p ∫_0^∞ e^-Φ(p+q)y w_b^(q)(x;-y) dy - pK 𝕎^(q)(x-b) ×∫_0^∞ e^-Φ_K(q) z(∫_0^∞ e^-Φ(p+q)y W^(q) '((b+z)+y) dy ) dz = p ∫_0^∞ e^-Φ(p+q)y w_b^(q)(x;-y) dy -K 𝕎^(q)(x-b) ∫_0^∞ e^-Φ_K(q) z Z_q,p^'(b+z) dz,which is exactly what we want. § PROOF OF PROPOSITION <REF>As discussed in Section <ref>, under our assumption on the Lévy measure, we have that Z_q,p^' is decreasing on [0, b^*]. The representation of V_b^* given in Corollary <ref> yields the same property for V_b^*^', i.e., it is decreasing on [0, b^*]. Recalling that V_b^*^'(b^*) = 1, we see that all is left to show is that V_b^*^' is decreasing from 1 for x ≥ b^*.For x ≥ b^*, using Leibniz rule, we can writeV_b^*^'(x) = -K𝕎^(q)(x-b^*) + (1+K𝕎^(q)(0))Z_q,p^'(x)/h_p(b^*) + K∫_b^*^x 𝕎^(q) '(x-y) Z_q,p^'(y) dy/h_p(b^*). Using (<ref>) and Fubini's theorem, we haveK∫_b^*^x 𝕎^(q) '(x-y) Z_q,p^'(y) dy = ∫_0^∞ Kpe^-Φ(p+q)z[∫_0^x𝕎^(q) '(x-y) W^(q) '(y+z)dy . - . ∫_0^b^*𝕎^(q) '(x-y) W^(q) '(y+z)dy ] dz. In <cit.>, the following identity is obtained: for a > 0,K∫_0^a 𝕎^(q) '(a-y) W^(q) '(y) dy = (1 - KW^(q)(0)) 𝕎^(q)'(a) - (1+K𝕎^(q)(0))W^(q)'(a) .Using this identity, we can writeK ∫_0^x𝕎^(q) '(x-y) W^(q) '(y+z)dy = K ∫_z^x+z𝕎^(q) '(x+z-y) W^(q) '(y)dy = (1-KW^(q)(0)) 𝕎^(q)'(x+z) - (1+K𝕎^(q)(0))W^(q)'(x+z) - K∫_0^z𝕎^(q) '(x+z-y) W^(q) '(y)dy . Putting the pieces back in (<ref>) and using Fubini's theorem, we getV_b^*^'(x) = -K𝕎^(q)(x-b^*) + p(1 - KW^(q)(0))/h_p(b^*)∫_0^∞ e^-Φ(p+q)z𝕎^(q) '(x+z) dz - K ∫_0^∞∫_y^∞ pe^-Φ(p+q)z𝕎^(q) '(x+z-y) W^(q) '(y) dz dy/h_p(b^*)- K ∫_0^b^*𝕎^(q) '(x-y) Z_q,p^'(y) dy/h_p(b^*). Applying Lemma <ref> to 𝕎^(q), we have that there exists a completely monotone function f such that𝕎^(q)(x) = 1/ψ_K^'(Φ_K(q)) e^Φ_K(q) x - f(x)and consequently there exists a constant C such thatV_b^*^'(x) = C e^Φ_K(q) x + Kf(x-b^*) - (1 - KW^(q)(0))∫_0^∞ pe^-Φ(p+q)yf'(x+y) dy/h_p(b^*)+ K ∫_0^∞∫_y^∞ pe^-Φ(p+q)z f'(x+z-y) W^(q) '(y) dz dy/h_p(b^*) + K ∫_0^b^* f'(x-y) Z_q,p^'(y) dy/h_p(b^*). By Bernstein's theorem, there exists a Borel measure μ such that f(x) = ∫_0^∞ e^-xtμ(dt). Therefore, using Fubini's theorem and elementary algebraic manipulations, we can further write V_b^*'(x) = C e^Φ_K(q) x + ∫_0^∞ e^-xt u_b^*(t) μ(dt) ,whereu_b(t) = K e^bt - K ∫_0^b te^tyZ_q,p^'(y) dy/h_p(b) + A_bt/Φ(p+q) + t,withA_b = p(1 - KW^(q)(0)) - KZ_q,p^'(0+)/h_p(b) .Note that u_b is at least twice differentiable.In the definition of u_b, we must have C = 0. 
Indeed, if C ≠ 0, then the representation in (<ref>) yields that V_b^*'(x) →∞, as x →∞, which is a contradiction with the fact that V_b^*(x) ≤ K/q for all x ∈ by the formulation of our control problem. In conclusion, we have obtained the following representation:V_b^*'(x) = ∫_0^∞ e^-xt u_b^*(t) μ(dt). The rest of the proof is split in two parts. First, if b^* = 0, then from (<ref>) in Proposition <ref>, we have h_p(0) ≥ Z_q,p'(0). Consequently, for all t ≥ 0,u_0(t) = K(1 - (Z_q,p'(0)/h_p(0))(t/Φ(p+q) + t)) + (p(1 - KW^(q)(0))/h_p(0))(t/Φ(p+q) + t) ≥ 0 .Indeed, we know that h_p(0)>0 and, from (<ref>), it is known that either W^(q)(0)=0 or K < c = (W^(q)(0))^-1, whether X has paths of unbounded or bounded variation, with the inequality being a standing assumption in our problem (see Remark <ref>). It follows that, for all x > 0,V_0”(x) = ∫_0^∞ (-t) e^-xt u_0(t) μ(dt) ≤ 0.In other words, V_0 is concave on (0,∞).Second, let us start by noting that u_b^* is infinitely differentiable and such that u_b^*(0) = K. Now, if b^* > 0, then assume there exists 0 < t_0 ≤∞ such that u_b^*(t) ≥ 0, for 0 ≤ t < t_0, and that u_b^*(t) ≤ 0 for t > t_0.Under this assumption, we can use (<ref>) to deduce that, for all x > b^*, V_b^*^''(x) = ∫_0^∞ -te^-(x-b^*)t e^-b^* tu_b^*(t) μ (dt)≤ e^-(x-b^*)k∫_0^∞ -te^-b^* tu_b^*(t) μ(dt) = e^-(x-b^*)kV_b^*^''(b^*+) ,whereV_b^*^''(b^*+) = (1+K𝕎^(q)(0)) Z_q,p^''(b^*)/h_p(b^*) .As discussed at the beginning of the proof, Z_q,p^' is decreasing on [0, b^*] because it is decreasing on [0, c^*] and b^* ≤ c^* by Proposition <ref>. Consequently, Z_q,p”(b^*) ≤ 0 and thus V_b^*”(b^*+) ≤ 0, and by (<ref>), we have V_b^*”(x) ≤ 0, for all x > b^*. The rest of the proof consists in proving that there exists 0 < t_0 ≤∞ such that u(t) ≥ 0, for 0 ≤ t < t_0, and that u_b^*(t) ≤ 0 for t > t_0.Note that u_b(0)=K.Computing the first and second derivatives of u_b^*, and then using the fact that y ↦ V_b^*^' (y) = Z_q,p^'(y)/Z_q,p^'(b^*) is decreasing on [0, b^*] (for both derivatives), we deduce that, for all t ≥ 0,u_b^*'(t) ≤ A_b^*d/dt(t/Φ(p+q) + t)andu_b^*”(t) ≤ A_b^*d^2/dt^2(t/Φ(p+q) + t). If A_b^* < 0, then u_b^*'(t) < 0, for all t > 0, and the existence of t_0 such that u_b^*(t) ≥ 0, for 0 ≤ t ≤ t_0, and that u_b^*(t) ≤ 0 for t ≥ t_0 is guaranteed. If A_b^*≥ 0, then u_b^*”(t) ≤ 0, and, in particular, u_b^* is concave, and there exists 0 ≤ t' < ∞ such that u_b^* is increasing for 0 ≤ t ≤ t' and decreasing for t ≥ t'. In particular, the existence of t_0 is also guaranteed in this case. In both cases, the proof is complete.
http://arxiv.org/abs/2310.18164v1
{ "authors": [ "Félix Locas", "Jean-François Renaud" ], "categories": [ "math.PR", "math.OC" ], "primary_category": "math.PR", "published": "20231027141733", "title": "Optimality of a refraction strategy in the optimal dividends problem with absolutely continuous controls subject to Parisian ruin" }
Practical application of quantum neural network to materials informatics: prediction of the melting points of metal oxides Hirotoshi Hiraie-mail: [email protected] Central R&D Labs., Inc.,41-1, Yokomichi, Nagakute, Aichi 480-1192, Japan=============================================================================================================================================Semantic text similarity plays an important role in software engineering tasks in which engineers are requested to clarify the semantics of descriptive labels (e.g., business terms, table column names) that are often consists of too short or too generic words and appears in their IT systems. We formulate this type of problem as a task of matching descriptive labels to glossary descriptions. We then propose a framework to leverage an existing semantic text similarity measurement (STS) and augment it using semantic label enrichment and set-based collective contextualization where the former is a method to retrieve sentences relevant to a given label and the latter is a method to compute similarity between two contexts each of which is derived from a set of texts (e.g., column names in the same table). We performed an experiment on two datasets derived from publicly available data sources. The result indicated that the proposed methods helped the underlying STS correctly match more descriptive labels with the descriptions.§ INTRODUCTION In general IT projects, such as database and business process migration, IT engineers invest significant effort in verifying consistency between various models, such as table schemata and diagrams that depict object relations. Maintaining a glossary of domain terms is a best practice that helps alleviate their workload. The glossary serves as a reference that maps descriptive labels (such as business terms and table column names) to corresponding glossary descriptions, which are typically English sentences that provide explanations and contexts. §.§ Target Problem and Challenges In this paper, we address such a common mapping problem between descriptive labels and glossary descriptions. We refer to this problem as Descriptive Labels to Descriptions (DLD). In DLD, we are given multiple datasets and glossaries. Each dataset contains a list of descriptive labels, while each glossary contains a list of glossary descriptions. The goal is to establish mappings from each label to its corresponding description.However, there are several technical challenges in mapping descriptive labels to the glossary descriptions due to the nature of the descriptive labels such as too short and too generic words included by the descriptive names. §.§ Our Approach and Research Questions To tackle the technical challenges, in this paper, we propose a novel framework to solve DLD problems effectively by enriching the short and/or generic words and capturing the context of the generic and/or ambiguous words. 
Our framework is designed to be flexible enough to employ variations of underlying semantic text similarity (STS) models, enrichment methods, and contextualization methods whereas our implementation is limited toreasonable combinations of a traditional TFIDF model, the PromCSE model <cit.> (BERT-based STS model), the Flan-T5 <cit.> large language model (LLM), and Wikidata.With our implementation we performed an experiment on two glossaries: a business glossary derived from the financial industry business ontology and more than 1000 pairs of column names and corresponding descriptions obtained from the Kaggle webpages. We organized our experiment to verify our hypothesis that there are many descriptive labels that cannot be mapped to corresponding descriptions by commonly used text similarity models due to cryptic words and our label enrichment and contextualization methods help the underlying STS models work better for such problematic labels. More specially, the research questions we address in this paper are as follows. * How much improvement is observed in the metrics Mean Reciprocal Rank (MRR) and Hits@k with the label enrichment methods?* How much improvement is observed in MRR and Hits@k with the label contextualization methods?* What kind of labels meet the out-of-vocabulary issue and the ambiguity issue? And what kind of labels can be solved by the label enrichment methods, and can be disambiguated by label contextualization methods?The reason why we use the Mean Reciprocal Rank (MRR) and Hits@k is that the DLD problem can be seen as a task of ranking descriptions corresponding to given descriptive labels and these metrics are commonly used for recommender systems.§.§ Contributions Our contributions in this paper are as follows. * We formulate the DLD problem, and identified the technical issues: out-of-vocabulary issue and ambiguity issue of cryptic words. We provide two practical benchmark datasets: Kaggle and FIBO including these issues.* To solve out-of-vocabulary issue, we propose Label Semantic Enrichment method which leverages external knowledge.* To solve ambiguity issue, we propos Set-based Collective Contextualization method by leveraging Large Language Model.* In our experiments, we clarified how effectively our approaches solve the issues.The rest of this paper is organized as follows. In the next two sections, we discuss related works, and provide motivating examples using practical datasets. We then present our approach to address them effectively, and formulate the DLD task. Subsequently, we demonstrate how the DLD task is tackled using several approaches, presenting experimental results. Afterward, we provide a comprehensive analysis of our experimental results. Finally, we conclude the paper. § RELATED WORK In general, DLD is a task of linking a chunk of text in a group to one in another group. In this sense, it could be considered as a natural language processing problem, specifically semantic text similarity or entity linking <cit.>.Also, we could consider it as a problem of aligning between domain-specific labels or descriptive names, as a set of domain concepts, to another ontology of a common glossary.§.§ Semantic Text Similarity There have been many studies on STS <cit.>. Some exploited general knowledge bases <cit.>, as in our study. Such knowledge-based methods measure the similarity of two terms on the basis of the structural properties of the knowledge bases, such as the number of edges. 
Our method, however, uses the knowledge bases only to enrich descriptive names. Other methods are corpus based such as word2vec <cit.> and BERT <cit.>. Such methods leverage large corpora to compute word-embeddings useful for measuring the similarity between terms on the basis of the idea that similar words occur together. The same idea is also applied for capturing the characteristics of sentences, as with Sentence-BERT <cit.> and PromCSE <cit.>. Many search engines use this type of method to retrieve and rank relevant sentences and webpages. We designed our method in such a way that it benefits from the advances of the corpus-based methods and pre-trained models. §.§ Entity Linking and Ontology Learning Entity linking <cit.>, such as ColNet <cit.> and TabEL <cit.>, is a task of linking terms described in a document to entities defined in a knowledge graph such as Wikidata. Ontology learning <cit.>, matching <cit.>, or alignment <cit.> are similar tasks that automatically or semi-automatically gather terms and relations from documents to create an ontology and discover correspondence between ontologies. Background knowledge is known to significantly improve the performance of ontology matching systems <cit.>. Compared with these problems, the DLD problem mentions neither knowledge graph nor ontologies. The descriptive names in a DLD problem are not within contextual sentences or linked to other concepts, entities, or values. Glossaries as ontologies in a DLD problem are also not linked but just descriptions of known concepts. §.§ Named Entity Disambiguation Named entity disambiguation on knowledge graphs including ontologies is a key component for the success of semantic text similarity, entity linking, and ontology learning. Likewise, our label enrichment method and set-based collective contextualization are both considered as methods to disambiguate the descriptive labels for solving the DLD problems. As in the case of our label enrichment method, recent literatures <cit.> also leverage the triples obtained from Wikidata to improve the performance of pre-trained models for the named entity disambiguation on Wikipedia whereas we leverage the sentences obtained from Wikidata. In addition, unlike our contextualization, they do not compute the context of a set of entities and rely only on the context of each single entity.The idea of using a knowledge base for disambiguation is also presented in the literature <cit.>. It proposes the use of a knowledge base to measure the similarity between texts for semantic text similarity and integrate it with a corpus-based measurement. However, it does not include any process for enriching and/or contextualizing given texts.§ MOTIVATING EXAMPLESIn software engineering tasks such as database and business process migrations, engineers are often requested to clarify the semantics of descriptive labels (e.g., business terms, table column names). For example, during the migration from a legacy system, IT engineers thoroughly analyze the original descriptive labels present in the existing data models. They then create a new logical data model and establish mappings between the original labels and the new ones. Understanding the semantics of the original labels in the context of the new logical data model is crucial, and the glossary serves as a valuable resource. 
However, the original data model often suffers from incompleteness and inconsistencies, as its element names may differ from those in the new logical data model, and an up-to-date or accessible dictionary may be lacking. Moreover, the mapping process typically requires human involvement and may not have an initial mapping in the first iteration.We could leverage a state-of-the-art semantic text similarity (STS) measurement to map each descriptive labels to a corresponding description. However, the following nature of the descriptive names makes it difficult to make the mapping using the STS measurement in high accuracy. * The descriptive labels (e.g., LOAN_AMT, ACS_DT) often contain cryptic (too short, too generic, and too ambiguous) words such as AMT (amount), ACS (access), and DT (date). These cryptic words are out-of vocabulary unlike aliases or nicknames, and often makes it difficult to understand their meanings.* The descriptive labels often appear only in database table schemata or program variables. This prevents us from associating the descriptive labels with documents. * Many descriptive labels used in the IT systems are specific to those IT systems. We could not rely on mappings between the descriptive labels and descriptions created for other IT systems. The same situation arises even in standardized or commonly used domain-specific ontologies such as financial industry business ontology (FIBO). FIBO is an ontology for financial business applications. It defines a named entity “ALL” the description of which is “the currency identifier for Lek (the currency of Albania)”. However, it is a very general word and often used as in the case of “all types of bank loans”. Likewise, we found many descriptions that include the word “all”. Therefore, no STS model is effective for matching the descriptive name with the correct definition.As a preliminary experiment, we collected the named entities and corresponding descriptions from FIBO, and ran PromCSE to compute the similarity scores between all the pairs of the named entities and descriptions. The top-ranked descriptions corresponding to “ALL” were * “collection representing the total membership, or úniverse,́ of people, resources, products, services, events, or entities of interest for some question, experiment, survey or statistical program” (0.309),* “location in physical space” (0.291),* “a collection of managed investments that are all managed by a single investment institution” (0.274),where the values enclosed with the parentheses are the similarity scores reported from PromCSE. The correct description of “ALL” was ranked 490th (0.063). § APPROACH Our approach is as follows: * We employ a STS model to measure a similarity of two sentences.* We apply Label Semantics Enrichment module to enrich descriptive labels.* We take into account the context of both of descriptive labels and glossary descriptions.§.§ Label Semantics Enrichment (LSE)A basic strategy to measure a similarity between a descriptive label and a glossary description is using STS models such as TFIDF, PromCSE and LLM. TFIDF suffer from aliases because it's built on word-level exact matching. PromCSE and LLM also may suffer from minor aliases and cryptic words in descriptive labels.Furthermore we employ Label Semantics Enrichment (LSE) to solve this problem. LSE module retrieves sentences relevant to the given the descriptive label by using a external knowledge database such as Wikidata and Bing. 
If the retrieved sentences include sufficiently various aliases and relevant phrases of cryptic words, STS model will work more effectively.§.§ Set-based Collective Contextualization (SCC)As mentioned earlier, descriptive labels can often be ambiguous. In the case that two tables have columns with the same label but semantically different meanings, the same glossary description will be assigned to columns that are semantically different.To address this problem, we propose incorporating the context of a set of descriptive labels and a set of glossary descriptions. For example, when matching column names from multiple tables to various glossaries, we consider the collective context provided by a set of column names within the same table and a set of glossary descriptions within the same glossary. By leveraging this broader context, we can more effectively identify and disambiguate the intended meanings of the columns. § PROBLEM SETTINGNow we are given a dataset 𝒟 which contains several semantic groups D_i ∈𝒟. The i-th group D_i = (L_i, G_i) has a set of descriptive labels L_i = {l_i,1, l_i,2, ⋯, l_i,n} and a glossary (a set of glossary descriptions) G_i = {g_i,1, g_i,2, ⋯, g_i,n}. l_i,p∈ L_i is p-th descriptive label of L_i. g_i,p∈ G_i is p-th glossary description of G_i. g_i,p describes a meaning of l_i,p.The ultimate goal is to establish a complete mapping between (l_i,p, L_i) and its corresponding (g_i,p, G_i). However, the difficulty of this problem is heavily influenced by the number of groups D_i and the size of the label set L and the glossary G in each group D.To standardize the difficulty, we consider N-choice problem here. In this scenario, given a target label set L and a target label l ∈ L, the objective is to identify the corresponding pair of the target description and its glossary (g, G) from N candidates {(g_k, G_k)}_k=1^N. This is done by measuring the similarity between the label side (l, L) and the glossary side (g_k, G_k).§ DLD FRAMEWORK In our framework, DLD-Similarity Score Ψ between label side (l, L) and glossary side (g, G) is defined asΨ(l, g, L, G | θ)= Ψ_ T(l, g, L | θ_ LSE, θ_ STS) × Ψ_ C(L, G | θ_ SCC).Here Ψ_ T is Text-Similarity Score, and Ψ_ C is Context-Similarity Score. In Ψ_ T, θ_ LSE∈{ on,off} is a switch to enable or disable LSE module, and θ_ STS∈{T, P, L} is a switch indicating which STS model is used from three variations: TFIDF, PromCSE, and LLM. In Ψ_ C, θ_ SCC∈{ on,off} is a switch to enable or disable SCC module. §.§ Text-Similarity ScoreText-Similarity Score Ψ_ T measures a similarity between l and g. It's defined asΨ_ T(l, g, L | θ_ LSE, θ_ STS) =max_s ∈ LSE(l | θ_ LSE) STS(s, g, L | θ_ STS).Ψ_ T collects relevant sentences {s_i} by invoking LSE, and computes similarity score between each sentence s_i and g, and outputs max of them.We have two variations of LSE function: enabled version and disabled version. The enabled version LSE(l |on) collects sentences {s_i} relevant to l from external knowledge, and returns them. The disabled version LSE(l |off) just returns l.We also have three variations of STS function: TFIDF version STS(· | T), PromCSE version STS(· | P), and LLM version STS(· | L). See <ref> for more details.§.§ Context-Similarity ScoreContext-Similarity Score Ψ_ C measures a contextual similarity between L and G. We have two variations: SCC enabled version and disabled version.The SCC enabled version Ψ_ C(L, G |on) directly ask to LLM about a probability of L and G being the same or different. 
See <ref> for more details.The disable version Ψ_ C(L, G |off) is always outputs 1.§ IMPLEMENTATION DETAILS We implemented our method using Python. It leverages Wikidata as external knowledge in LSE module. It runs TFIDF, PromCSE and Flan-T5 <cit.> in STS module to measure a similarity score between two sentences, and also uses Flan-T5 in SCC module to measure Context-Similarity Score between a set of labels and a set of descriptions. The use of TFIDF and PromCSE is straightforward, therefore, in the following sections, we describe about how to use Wikidata in LSE module, and how to use Flag-T5 in STS and SCC module. §.§ Using Wikidata for LSE In LSE module, we used Wikidata as external resource. The webpage of Wikidata provides a search interface that we usually access using a Web browser. We leverage this Web interface for the implementation. It sends queries of descriptive labels to the search interface using the HTTP protocol, and parses resulting webpages to extract entity IDs.Wikidata also provides a SPARQL <cit.> endpoint as a query service. We leverage this query service to collect label names and descriptions of the collected entity IDs. We also use theproperty[http://www.w3.org/2000/01/rdf-schema#label] andproperty[https://schema.org/description] to generate sentences relevant to the query phrase.§.§ LLM-based Semantic Text Similarity ModelWe employed three STS models (TFIDF, PromCSE, and LLM) to compute the Text-Similarity Score Ψ_ T. TFIDF and PromCSE can directly generate similarity scores for the given pair of sentences. On the other hand, LLM requires a natural language prompt as input, and also produces a natural language answer.We performed prompt engineering to create suitable inputs for LLM and developed a method to extract scores from the generated answer. Figure <ref> shows the typical prompt example.To translate the LLN answer to numerical score, STS(· | L) function collects top N tokens 𝒯 = {t_i}_i=1^N with their probability p(t_i) from LLM answer. It classifies the N tokens into a set of “yes” tokens 𝒯_ y, “no” tokens 𝒯_ n, and others. And computes probability ratio of “yes” and “no” as s = p_ y/(p_ y + p_ n). Here p_ y and p_ n is sum of probability which token is “yes” and “no”. They can be computed as follows:p_ y = ∑_t ∈𝒯_ y p(t).§.§ LLM-based Context-Similarity AlgorithmWe also employed LLM to measure the Context-Similarity between a set of labels and a set of descriptions. Figure <ref> shows the typical prompt example. A scoring logic is same to the logic mentioned in <ref>.§ EXPERIMENTAL SETUPAs we described in Section 1, our experiment is organized to verify the hypothesis that (1) there are many descriptive labels that cannot be mapped to corresponding descriptions by commonly used text similarity models due to cryptic words and (2) our label enrichment and contextualization methods help the underlying STS models work better for such problematic labels. In this section, we first describe how we prepared the datasets based on the publicly available data sources. We then describe what metrics and why we used and how we compared the different combinations of the STS models, the label enrichment method, and the contextualization method.§.§ Dataset and BenchmarkOur experiment is performed on the two datasets derived from the Kaggle webpages and the financial industry business ontology (FIBO). 
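A minimal sketch of this scoring step with a seq2seq model from the transformers library is shown below; the checkpoint, the prompt wording and the "yes"/"no" token sets are illustrative assumptions and not the exact ones used in our implementation.

import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-large")

def yes_no_score(prompt, top_n=20):
    """Return p_yes / (p_yes + p_no) computed from the top tokens of the first generated position."""
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=1,
                         output_scores=True, return_dict_in_generate=True)
    probs = torch.softmax(out.scores[0][0], dim=-1)       # distribution over the first answer token
    top_p, top_ids = probs.topk(top_n)
    p_yes = p_no = 0.0
    for prob, tok_id in zip(top_p.tolist(), top_ids.tolist()):
        token = tokenizer.decode([tok_id]).strip().lower()
        if token in {"yes", "true"}:
            p_yes += prob
        elif token in {"no", "false"}:
            p_no += prob
    # When neither a "yes" nor a "no" token appears in the top tokens, fall back to 0.5 (a choice of this sketch).
    return p_yes / (p_yes + p_no) if (p_yes + p_no) > 0 else 0.5

label, description = "acs_dt", "The date when the record was last accessed."
prompt = f"Does the column name '{label}' refer to the following description? {description} Answer yes or no."
print(yes_no_score(prompt))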
§ EXPERIMENTAL SETUP
As described in Section 1, our experiment is organized to verify the hypotheses that (1) there are many descriptive labels that cannot be mapped to their corresponding descriptions by commonly used text similarity models due to cryptic words, and (2) our label enrichment and contextualization methods help the underlying STS models work better for such problematic labels. In this section, we first describe how we prepared the datasets from publicly available data sources. We then describe which metrics we used and why, and how we compared the different combinations of the STS models, the label enrichment method, and the contextualization method.
§.§ Dataset and Benchmark
Our experiment is performed on two datasets derived from Kaggle webpages and the Financial Industry Business Ontology (FIBO). Table <ref> shows the statistics of these datasets. The Kaggle dataset consists of 85 semantic groups that include 1347 descriptive labels and corresponding descriptions in total, where we treat the tables extracted from the Kaggle webpages as semantic groups and their column names as descriptive labels. The smallest semantic group contains only 10 descriptive labels, while the largest contains 36. We further investigated how many semantic groups include each descriptive label, in order to check whether the dataset is suitable for evaluating the effectiveness of the contextualization method. Table <ref> summarizes the descriptive labels and the numbers of corresponding semantic groups, referred to as the frequencies of the descriptive labels, where we list only the descriptive labels whose frequency is 7 or more. For example, the descriptive label “type” appears in 7 semantic groups and has different meanings such as “type of wine”, “media type of animation film”, and “flag if company is private or public”. FIBO defines concepts and relations used in the financial domain, using the Web Ontology Language (OWL) <cit.>. It consists of 2086 named entities, each of which has its label and description specified by particular XML tags. We collected these labels and descriptions as the descriptive labels and descriptions of the dataset. We then partitioned the pairs of descriptive labels and descriptions into 44 semantic groups based on the IRIs of the corresponding named entities. Unlike the Kaggle dataset, no descriptive label is included in multiple semantic groups. From these two datasets, we created 4 problems, Kaggle-10-choice, Kaggle-50-choice, FIBO-10-choice, and FIBO-50-choice, as mentioned in Section <ref>, and performed the experiment on these 4 problems.
§.§ Evaluation Method
In our evaluation, we compare the 12 combinations of models based on the 3 underlying STS models (TFIDF, PromCSE, and LLM-based) and the presence or absence of the label semantics enrichment (LSE) and the set-based collective contextualization (SCC), represented by the following naming rule: {T, P, L}-{ϕ, LSE}-{ϕ, SCC}, where T, P, and L represent TFIDF, PromCSE, and the LLM-based STS. For example, `T-LSE-SCC' represents the combination of TFIDF, LSE, and SCC, and `L' represents the LLM-based STS model not augmented with LSE or SCC. To measure the success of matching descriptive labels with descriptions, we employ the Mean Reciprocal Rank (MRR) and Hits@k as performance metrics, which are commonly used for evaluating search engines and recommendation systems. This is because the DLD problem can be seen as a task of ranking descriptions corresponding to a given descriptive label.
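For reference, both metrics reduce to simple functions of the rank of the correct description for each descriptive label; a minimal sketch is shown below.

```python
# Sketch of the evaluation metrics. `ranks` holds the 1-based position of the
# correct description in the sorted candidate list, one entry per descriptive label.
from typing import List


def mean_reciprocal_rank(ranks: List[int]) -> float:
    return sum(1.0 / r for r in ranks) / len(ranks)


def hits_at_k(ranks: List[int], k: int) -> float:
    return sum(1 for r in ranks if r <= k) / len(ranks)
```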
§ EXPERIMENTAL RESULTS
We present our comprehensive results on the 10-choice problems of the Kaggle and FIBO datasets in Table <ref> and on the 50-choice problems in Table <ref>. Overall, regarding the first and second research questions, we found that the label semantics enrichment (LSE) and the set-based collective contextualization (SCC) help the underlying STS models produce better scores in both MRR and Hits@k, except that LSE often had a negative impact on PromCSE and the LLM-based STS model. Regarding the third research question, we observed that reasonably many labels that were not correctly mapped by the underlying STS models alone were correctly mapped by the STS model augmented with LSE and SCC. In particular, SCC successfully disambiguated generic words and helped the underlying STS model correctly map out-of-vocabulary labels that include highly cryptic words. In addition, the L-SCC combination achieved over 99% MRR on the 10-choice problem; the descriptive labels it failed on were difficult to map correctly even for humans. We describe the details in the following sections.
§.§ Effectiveness of LSE
We observed that LSE was effective only for the TFIDF models, as shown in Table <ref>, but its negative impact on the other models was very limited. We believe that the pre-trained models, PromCSE and the LLM, already have capabilities better than or equivalent to LSE. Table <ref> shows some examples of descriptive labels whose rankings were improved or degraded by LSE in the results of the Kaggle-10-choice problem. It shows that LSE properly addresses the out-of-vocabulary problem encountered by the TFIDF model, thereby successfully enriching cryptic or domain-specific words. Additionally, Table <ref> provides some examples of descriptive labels from the Kaggle results that were accurately ranked as top 1 by the LSE-enhanced model (T-LSE) but not by the baseline model (T). On the other hand, certain descriptive labels were successfully ranked by the baseline model (T) but not by the LSE-enhanced model (T-LSE); Table <ref> shows some examples. For more general words, the incorporation of LSE may introduce noise and hinder the accurate identification of the corresponding glossary description.
§.§ Effectiveness of SCC
Table <ref> shows the improvement of the MRR scores by SCC, where SCC improved the MRR score in all cases. In particular, it contributed an improvement of over 10% for the PromCSE models. We analyzed the descriptive labels that were ranked as top 1 as a result of the improvement by SCC (P-LSE-SCC) but not by the baseline model (P-LSE). We found that these descriptive labels can be classified into two groups: general words and highly cryptic words. For general words, PromCSE retrieved several descriptions from different glossaries with high confidence, and SCC worked as a screening indicator in this situation. Table <ref> shows some examples of general labels whose rankings were improved by SCC. In the case of highly cryptic words, PromCSE often struggled to find similar descriptions among the candidates, because highly cryptic words are out-of-vocabulary even for PromCSE. Consequently, the Text-Similarity Scores of the top-ranked descriptions were relatively low, whereas the Context-Similarity Score of the corresponding glossary was significantly high. SCC elevated the confidence of these correct but low-ranked descriptions, pushing them to the top of the list. Table <ref> also shows some examples of cryptic labels improved by SCC.
§ CONCLUDING REMARKS
We formulated the DLD problem as an important and practical task and identified its technical issues: the out-of-vocabulary issue and the ambiguity issue of cryptic words. We proposed a framework to address these issues: to solve the out-of-vocabulary issue, we proposed the Label Semantics Enrichment (LSE) method, which leverages external knowledge, and to solve the ambiguity issue, we proposed the Set-based Collective Contextualization (SCC) method, which leverages a Large Language Model. We provided two practical benchmark datasets, Kaggle and FIBO, that exhibit these issues, and designed N-choice problems on the datasets.
In our experiments, we demonstrated how effectively our approach solves these issues. We plan to release the benchmark datasets under a reasonable license to support future advances in this technical area.
§ LIMITATIONS
The empirical evaluation of our methods is mainly done on datasets derived from publicly available data sources, while we used pre-trained Flan-T5 and PromCSE models in the evaluation. Therefore, there might be overlapping data sources, and hence a risk of data leakage. Even so, our evaluation showed that both the label enrichment and the contextualization contributed to the improvement of the TFIDF-based STS, which itself does not rely on any external data sources. In our experiments, the LLM-based STS model outperformed the other models in all cases. However, its inference time must be taken into account in practical settings, since the estimated total inference time for completing the experiment was roughly 150 hours. The TFIDF and PromCSE models were considerably more efficient than the LLM-based STS model: their total inference times, based on our observations, were 1.6 hours and 25 hours, respectively.
http://arxiv.org/abs/2310.18385v1
{ "authors": [ "Toshihiro Takahashi", "Takaaki Tateishi", "Michiaki Tatsubori" ], "categories": [ "cs.CL", "cs.AI", "cs.SE" ], "primary_category": "cs.CL", "published": "20231027070904", "title": "Matching of Descriptive Labels to Glossary Descriptions" }
Stability and Accuracy analysis of the θ Method and 3-Point Time filterThe research was partially supported by NSF grant DMS-2110379.Nicholas HurlDepartment of Mathematics, Duquesne University, Pittsbugh, PA-15282 ([email protected]). Farjana SiddiquaDepartment of Mathematics, University of Pittsburgh, Pittsburgh, PA-15260([email protected] ). Shuxian XuDepartment of Mathematics, University of Pittsburgh ([email protected]).January 14, 2024 =========================================================================================================================================================================================================================================================================================================== We explore the idea of aligning an AI assistant by inverting a model of users' (unknown)preferences from observed interactions. To validate our proposal, we run proof-of-concept simulations in the economic ultimatum game,formalizing user preferences as policies that guide the actions of simulated players.We find that the AI assistant accurately aligns its behavior to match standard policies from the economic literature (e.g., selfish, altruistic). However, the assistant’s learned policies lack robustness and exhibit limited generalization in an out-of-distribution setting when confronted with a currency (e.g., grams of medicine) that was not included in the assistant's training distribution. Additionally, we find that when there is inconsistency in the relationship between language use and an unknown policy (e.g., an altruistic policy combined with rude language), the assistant's learning of the policy is slowed. Overall, our preliminary results suggest that developing simulation frameworks in which AI assistants need to infer preferences from diverse users can provide a valuable approach for studying practical alignment questions.[https://github.com/janphilippfranken/scai/tree/releaseCode and Prompts] § INTRODUCTION Developing scalable methods for effectively steering AI systems is a key challenge for alignment research <cit.>. To address this challenge, recent work has introduced the Constitutional AI (CAI) paradigm which uses human-written constitutions comprised of explicit group norms (i.e., “do not be hateful”) as guiding principles for AI assistants [see <ref>a; ]. While these methods provide effective means to align AI assistants, they also face challenges.For example, assessing the robustness of a constitutional principle can be challenging in real-world applications of language models, especially when a user's request is consistent with more than one task <cit.>, or when the user requests the assistant to perform a task that is outside of the assistant's training distribution <cit.>. Furthermore, constitutional principles may reflect an inadvertent bias towards the creator's preferences, which can lead to systematic inequalities in the assistant's behavior <cit.>.Given the inherent ambiguity and diversity in real-world applications of language models, it is desirable to have an AI assistant capable of dynamically adapting its local governing principles to align with varying group norms or preferences <cit.>. Motivated by this observation, we explore Social Contract AI (SCAI): a method for aligning AI assistants with implicit group norms (<ref>b). Unlike CAI, which operates on a set of fixed, formal rules or constitutional principles, SCAI aims to infer group norms from observed interactions among users. 
As such, the only fixed principle in SCAI is the meta-principle of finding out what the group norms or preferences are in order to align the AI assistant's behavior with users. To evaluate the potential of SCAI, we conduct proof-of-concept simulations using the ultimatum game[Due both to its simplicity and its ability to capture much of the psychology of negotiation, the ultimatum game has been a mainstay of cooperative game theory since at least the mid-twentieth century <cit.>] (see <ref>), formalizing group norms (i.e., user preferences) as policies that guide the actions of simulated players. We ground SCAI in the context of Bayesian (inverse) reinforcement learning <cit.> and introduce a verbal reinforcement learning algorithm <cit.> which uses game interactions to revise the AI assistant's policy. Overall, our contributions are as follows: (1) We introduce Social Contract AI (SCAI), a method for aligning AI assistants with implicit group norms; (2) we present a simulator for implementing SCAI using verbal reinforcement; and (3) we validate SCAI by comparing the alignment between the shares offered by the AI assistant and those proposed by simulated users in the ultimatum game.§ RELATED WORK Social Simulation. Large Language Models (LLMs) are increasingly used in simulation-based research and social games <cit.>. For example, <cit.> introduced a sandbox environment inhabited by generative agents that simulate daily human activities, allowing for the study of emergent social behaviors. Such simulation-based approaches provide a useful framework for side-stepping issues related with reinforcement learning from human feedback (RLHF) <cit.> such as reward misspecification <cit.> or reward hacking <cit.> by shifting the responsibility of supervising AI to simulated human agents whose capabilities and incentives are defined within the simulation. Moreover, simulation-based approaches can generate synthetic datasets which can be leveraged for downstream fine-tuning of models. For example, <cit.> introduced StableAlign, an algorithm which is trained on data generated through a sandbox environment where simulated language agents are tasked with providing preference ratings when discussing controversial societal questions sourced from https://github.com/anthropics/hh-rlhfHH-RLHF. This approach has resulted in competitive performance on alignment benchmarks such as helpful, honest, and harmless (HHH) <cit.>. Our work builds on these findings and uses simulated social interactions to study the alignment of an AI assistant.Social Contracts and Virtual Bargaining. Much of human interaction is guided by implicit norms or informal agreements (i.e., social contracts) rather than a set of fixed, formal rules or constitutional principles <cit.>. Recent work has formalized some of these observations within the context of virtual bargaining, a process in which implicit agreements are revised in ways similar to actual bargaining between people <cit.>. Specifically, rather than having a predefined set of preferences or agreement, people construct their agreements and preferences dynamically based on the context and actions of others. This involves mental simulations that consider not only individual preferences but also those of other parties, facilitating a form of “virtual” negotiation even before any actual interaction occurs. 
Building on this idea, <cit.> proposed that humans construct their preferences by inverting a model of agreement, that is, inferring environmental conditions and other people's preferences from observed or simulated interactions <cit.>. Motivating SCAI as a form of inversion of agreement, we explore the possibility of aligning an AI assistant with a group by inverting a model of users' preferences from observed game interactions. § ALIGNING AI ASSISTANTS WITH IMPLICIT GROUP NORMS Preliminaries.To empirically explore the potential of SCAI, we developed a simulator that uses verbal reinforcement (“metaprompt”) <cit.> to dynamically rewrite the AI assistant's local governing principles to align with users' preferences. We ground this inference problem in the context of Bayesian (inverse) reinforcement learning<cit.>, where the environment is provided by the task at hand—here, a modified version of the ultimatum game (see <ref>).We represent users' preferences (i.e., the shared group norm(s)) as a shared policy, such as “be selfish when making offers” or “be altruistic when making offers”. Each user is instantiated as a separate language model whose actions are determined by the shared policy. The AI assistant's goal is to learn this shared policy from observed game interactions. Unlike users,whose policy is set at the beginning of the game and remains fixed across training epochs, the AI assistant is seeded with a random policy and refines its policy after each training epoch to meet the meta-principle's objective. See <ref>, for technical details. Evaluation MetricsWe run simulations with three standard policies from economics and evolutionary game theory <cit.>: selfish, altruistic, and fair. Our primary evaluation metric is the offered share[We also collected data on accept/reject behaviors and computed the overall utility for both users and the AI assistant. We will present these evaluation metrics in further extensions of the present work.], measured as a percentage of the total amount that an agent (user, AI assistant), acting as player 1 (the proposer), offers to share with player 2 (the decider). Using this metric, we can first assess whether a policy such as “be selfish when making offers” results in selfish offers that benefit the proposer more than the responder (e.g., a 9:1 split of $10) by observing the offers made by users. This sanity check is important for determining whether users' observed offers align with the (latent) policy the assistant aims to learn. Further, we can use the assistant's offered shares to explore the following research questions: (1) alignment: Can the AI assistant learn a policy from observed game interactions that results in offers matching the offers made by users? (2) generalization: Does the AI assistant's learned policy generalize to an out-of-distribution (OOD) setting in which the assistant is exposed to a potentially controversial currency not present during training (e.g., grams of medicine instead of dollars)? (3) inconsistency: Does inconsistent use of language (e.g., an altruistic policy combined with rude language) affect the assistant's learning of users' shared policy?Simulation Setup. We ran 20 independent simulations using<cit.> with a temperature of 0 for each of the unique settings explored below. Each simulation ran for five training epochs. 
We varied the number of user and assistant interactions within each run of the ultimatum game and present results from simulations with 8 user–user interactions and 2 assistant–user interactions (i.e., one interaction in which the assistant is the proposer, and one interaction in which the assistant is the responder) in <ref> (<ref> includes an additional example of 8 assistant–assistant and 2 assistant–user interactions). Unless otherwise specified, we vary currencies and amounts randomly between simulations.§.§ Simulation Results Sanity Checks. We find that the shares offered by users correspond to the expected behavior under a given policy. For instance, users following a selfish policy consistently make offers in which they propose to share nothing (i.e., 0%) of the total amount, while altruistic users show the opposite behavior, proposing to share 100% (see <ref>a, left panel). We note that the lack of variation in users' offers can be attributed to a temperature of 0 which lead to deterministic actions across users. This choice was intentional to control for potential effects of simulation noise on the assistant's ability to learn the latent policy. We will explore the impact of noise in users' actions in future extensions of our work. Alignment. To examine whether the assistant's offered shares align with the offers of users, we explored settings with both one (i.e., every user has the same policy) and mixed group norms (i.e., proportions of selfish versus altruistic norms varied between users). For the one-group norm setting (<ref>a, left panel), we observe that the assistant's offered shares closely align with users' offers after just one revision of the assistant’s initial (random) policy. An example of a learned policy that represents an altruistic group norm is displayed in the right panel of <ref>a. [The AI assistant’s offered shares start close to fair due to the random seed combined with GPT-4's tendency to default towards fair offers unless explicitly prompted otherwise.]Overall, findings from our first simulation suggests that, in the present setting, the AI assistant accurately learns the latent policy guiding users' interactions. The results from our mixed-group norm showed that the assistant's offered shares converged to the distribution of offers expected from the distribution of policies present in the group. Specifically, we find that for a group with 80% selfish and 20% altruistic norms, approximately 80% of runs yield selfish policies, while 20% result in altruistic policies for the AI assistant (<ref>a, middle panel; see right panel for example policies learned in two of the 20 runs). We observe a similar convergence pattern for groups with 20% selfish and 80% altruistic norms, as well as 50% selfish and 50% altruistic norms. These findings suggest that the assistant can learn a distribution over policies (across simulation runs) that aligns with the distribution of policies observed in the user group. An important extension could be to prompt the assistant to learn multiple policies within a given run (instead of learning a single policy) to see if the assistant can recover the distribution of user policies within a run rather than only matching the distribution across runs. Generalization. 
Next, we investigated if the AI assistant's learned policies generalize to out-of-distribution (OOD) scenarios in which the assistant is exposed to a potentially controversial currency not present during training (in the example shown in <ref>b, we train on dollars and test on grams of medicine).[We further explored whether varying out-of-distribution amounts (e.g., training with amounts < 1,000 and testing with amounts such as 2. Billion) affected generalization behavior and found similar effects on offered-shares. For exploratory purposes, we also ran a condition in which we asked the assistant to provide a reason for its offered shares, both in in-distribution and out-of-distribution test runs; see <ref>, for an example.] The left panel in <ref>b shows that testing a selfish policy results in selfish offers in-distribution (i.e., testing on dollars), whereas OOD offers were strongly influenced by the assistant's prior, which we here arbitrarily set to altruistic. This finding is interesting because the only difference in the assistant's prompts between in-distribution and OOD runs was the use of a different currency not present during training (i.e., grams of medicine instead of dollars). Inconsistency. To examine the effect of inconsistency, we explored two specific cases of inconsistent use of language (<ref>c). Here, we observed that when the manner in which users communicate their proposals (e.g., rude) conflicts with the expectations set by a given policy (e.g., altruistic), the assistant still learns a policy that results in similar offers to those of users; however, convergence is slower and fails to fully match the offered shares of users within five training epochs (<ref>c, left panel). Changing from rude to sycophantic manners and setting users' policies to selfish had a similar effect on the assistant's learning of the selfish policy (<ref>c, right panel). § DISCUSSION In this paper, we proposed Social Contract AI (SCAI), a method that combines simulation <cit.> with verbal reinforcement techniques <cit.> to align an AI assistant with user preferences. By grounding our work within the formal context of the ultimatum game <cit.>, we formalized preferences (i.e., the shared group norm(s)) as policies that guide the actions of simulated players and measured alignment through the shares offered by the proposing player. Through our proof-of-concept simulations, we showed that the AI assistant can accurately learn policies to align its behavior with users.Additionally, we showed that the assistant’s learned policies lack robustness and exhibit limited generalization in an out-of-distribution setting when confronted with a currency that was not included in the assistant's training distribution; moreover, learning from users using inconsistent (or contradictory) language slowed learning of the group's policy.Social Impacts Statement. While our work is at an early stage, we believe that SCAI addresses an important non-technical alignment challenge highlighted in previous work: “figuring out what the group preferences are” <cit.>. Specifically, rather than having a team of researchers write a model's content policy or constitution, we propose to have an AI assistant learn group norms and preferences through observation and active participation in interactions with simulated users. 
This approach allows for (1) the study of the kinds of group norms that emerge under varying conditions; (2) assessing the flexibility of learning such group norms across potentially inconsistent (or ambiguous) tasks; and (3) studying the robustness of group norms as guiding principles for the AI assistant in out-of-distribution settings. More generally, scaling up simulation frameworks—where an AI assistant must infer the (unknown) preferences of diverse users—may provide insights into designing more democratic and representative guiding norms for AI assistants <cit.>.plainnat§ PROBLEM FORMULATION As this paper focuses on the use of LLM-based assistants to help uncover implicit user/group norms in tasks via natural language dialogue, we expect states and actions of the corresponding decision-making problem to represent natural language prompts/queries and responses. For simplicity, if V denotes a fixed, finite vocabulary of tokens, then L = V^+ denotes the space of all possible natural language utterances consisting of at least one token in V that may be consumed as input or produced as output to the LLM. Consequently, the state space and action space of any user task are both in terms of natural language: S, A⊆L. While singular tasks have traditionally been studied in the reinforcement-learning literature <cit.> and formalized via the classic Markov Decision Process (MDP) <cit.>, the notion of agents striving to achieve success across multiple tasks or goals is also well-studied <cit.> and is traditionally captured by the Contextual MDP (CMDP) formalism <cit.>. Specifically, a CMDP is given by M = ⟨C, χ, S, A, R, T, μ, γ⟩ where each possible goal or task of interest is characterized by a context c ∈C which is sampled at the start of each episode according to the distribution χ∈Δ(C); it may be helpful to think of C⊆L×^n such that a context c ∈C can be interpreted as some natural language description coupled with numerical features about the task and users. Naturally, one expects the nature of the task and the behavior of the user(s) interacting with the agent to influence its experiences. Formally, this is captured by context-sensitive variant of the traditional MDP components, allowing context to create variation in rewards R: C×S×A, transitions T: C×S×AΔ(S), and initial states μ: CΔ(S). Within a single episode where a context c ∼χ is randomly sampled, it may be easier to simply think in terms of the resulting MDP the agent interacts with for the duration of the episode: M_c ≜⟨S, A, R_c, T_c, μ_c, γ⟩. An agent's interaction within MDP M_c unfolds as described above with the caveat that the agent itself employs a contextual policy π: S×CΔ(A) where action selections depend on both the current context and state. Denoting the class of all contextual policies as Π≜{S×CΔ(A)}, the learning objective within a CMDP is to identify an optimal policy π^⋆∈Π which achieves maximal returns: sup_π∈Π𝔼[∑_t=0^∞γ^t ℛ_c(s_t,a_t)], where the expectation integrates over randomness in the context c ∼χ, initial state s_1 ∼μ, action selections a_t ∼π(·| s_t, c) and transition dynamics s_t+1∼𝒯_c(·| s_t, a_t).Before delving into the details of an agent interacting online with users to incrementally synthesize group norms and preferences, we first entertain a simpler offline setting wherein an agent takes no action but instead aims to derive users' norms or preferences solely through passive observation of human gameplay. 
Such a scenario naturally lends itself to the inverse reinforcement learning (IRL) problem <cit.> which inverts the traditional reinforcement learning setting by consuming a partially-specified decision-making problem and expert demonstrations as input in order to recover the underlying reward function that encodes the agent's preferences over behaviors <cit.>. For the ultimatum game studied in this work, the corresponding reward function captures shared group norms about how to behave (selfishly, altruistically, or fairly) when issuing or deciding upon an ultimatum. A common practice is to iteratively interleave steps of IRL and traditional reinforcement learning to compute an optimal policy for the inferred reward function, a process widely known as apprenticeship learning <cit.>. As the previous section outlines, the ultimatum game is defined as a CMDP where the context differentiates between the task of issuing an ultimatum versus deciding on an ultimatum already issued. It then follows that the so-called inversion of agreement <cit.> proceeds by performing IRL within this CMDP <cit.>.A naive approach to designing an online agent for synthesizing group preferences would simply consist of letting each user within the group interact with a version or copy of the LLM and engage in a dialogue to elicit responses consistent with the individual's preferences. Unfortunately, this methodology runs counter to the goal of distilling group-level preferences and norms that maximally benefit the community at large. In order to promote helpfulness and harmlessness for the overall population of users, we utilize two LLMs: a MetaLM (whose objective is defined in the meta-principle) and an Actor/AssistantLM (whose objective is defined by the policy generated by the MetaLM). Specifically, the first MetaLM is given a meta prompt, which articulates the overall goal of synthesizing shared preferences, as well as the history of user-assistant interactions generated thus far. Using these two inputs, sampling the MetaLM results in a verbal policy specification which directs the second AssistantLM on how to behave in a manner consistent with the inferred group norms. This is sufficient to intialize and prime the AssistantLM for interaction with a single user or group of users via a standard dialogue interaction; as any single directive from the MetaLM can strongly influence the nature of how the AssistantLM interacts with users, the AssistantLM can itself be interpreted as a mapping π_assist: LΠ from directives (natural language) to contextual behaviors (an element of the contextual policy class). Meanwhile, if the set of all possible user-assistant histories is denoted as H (formally, this is set of all possible sequences of CMDP trajectories), the MetaLM can analogously be viewed as a policy π_meta: L×HΔ(L). While standard reinforcement-learning algorithms rely on incremental and parametric updates of policies or value functions in order to drive learning <cit.>, we recognize the richness of knowledge already present within pre-trained LLMs and instead situate SCAI in the context of Bayesian reinforcement learning <cit.>. Briefly, Bayesian reinforcement learning methods for a single-task MDP M proceed over K ∈ episodes and begin with a prior p(M| H_1) that reflects an agent's preliminary beliefs about the underlying environment based on the initial null history H_1 = ∅. 
In each episode k ∈ [K] ≜{1,2,…,K}, the agent uses current beliefs about the world p(M| H_k) to compute a policy, resulting in a trajectory of ground-truth data sampled from the true environment M which then induces a posterior distribution p(M| H_k+1) via Bayes' rule. For the purposes of this paper, it suffices to think of the transition function (encoding, for instance, the dynamics of the ultimatum game) as being already known so that only epistemic uncertainty <cit.> in the underlying reward function that encodes group preferences remains. One concrete and provably-efficient algorithm for converting current environmental beliefs p(M| H_k) to a policy for execution in the current episode is through Posterior Sampling for Reinforcement Learning (PSRL) <cit.> which, in essence, employs Thompson Sampling <cit.> by drawing one statistically-plausible MDP M_k ∼ p(M| H_k) and acting optimally with respect to this sample via the optimal policy of M_k, π^⋆_M_k. While, in principle, each step of ground-truth experience sampled from M could enable a posterior update and, consequently, a change in the behavior policy used within the episode, such switching leads to volatility that slows learning <cit.>.Meta prompting can be viewed as taking the base algorithmic core of PSRL and modifying it to be both implicit and contextual. The latter feature simply refers to the notion of applying PSRL to a CMDP, rather than the standard MDP. For clarity, we provide the pseudocode for such a contextual version of PSRL as Algorithm <ref>, which also appears in prior work on meta reinforcement learning <cit.>. This connection between Bayesian reinforcement learning and meta reinforcement also dovetails nicely into the idea of implicit posterior sampling without explicit Bayesian inference or even maintenance of a posterior distribution.Unlike the standard PSRL algorithm for tabular MDPs whose provably-efficient learning guarantees rely on precise distributional assumptions and explicit probabilistic models of the underlying MDP <cit.>, an implicit posterior-sampling approach recognizes the two minimum needs of (1) being able to draw samples from the posterior distribution given the history of all interactions thus far and (2) the ability to act optimally with respect to these samples. Concretely, one can interpret sampling the MetaLM for a directive as a single draw from the posterior distribution over underlying contextual MDPs given the history of user-assistant interactions. Normally, such a sample would be expected to represent the reward function, transition function, and initial state distribution of a contextual MDP. Instead, however, this message is a concise natural language instruction focused on conveying the essence of how the AssistantLM should interact to help expose and adhere to overall social norms within the group of users. Prior work has already established generalizations of PSRL which operate based on lossy compression of the underlying MDP, rather than fully specifying every detail of the reward structure and transition dynamics <cit.>. Meta prompting follows suit with recent work that explores the versatile role that natural language may play in the context of Bayesian reinforcement-learning algorithms <cit.>; rather than acting as a summary of the ever expanding history of agent-environment interactions, this work instead treats the constitution as a sufficient statistic for inducing the optimal policy of some statistically-plausible hypothesis for the underlying contextual MDP. 
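Before turning to the formal pseudocode, a rough Python sketch conveys this verbal-reinforcement loop; the game interface and the MetaLM call are abstracted into caller-supplied functions, and all names below are illustrative rather than our actual implementation.

```python
# Hypothetical sketch of one SCAI run with verbal reinforcement.
from typing import Callable, List, Tuple


def scai_training(
    meta_prompt: str,
    play_epoch: Callable[[str], List[str]],  # runs one epoch of ultimatum games under the given assistant policy
    meta_lm: Callable[[str], str],           # MetaLM: maps a prompt to a revised policy directive
    seed_policy: str = "Make offers however you like.",
    n_epochs: int = 5,
) -> Tuple[str, List[str]]:
    """Alternate game play and verbal policy revision for n_epochs."""
    history: List[str] = []
    policy = seed_policy
    for _ in range(n_epochs):
        # The AssistantLM interacts with simulated users whose shared policy is hidden from it.
        history.extend(play_epoch(policy))
        # The MetaLM revises the directive from the full interaction history --
        # conceptually one implicit sample from the posterior over group norms.
        policy = meta_lm(f"{meta_prompt}\n\nInteractions so far:\n" + "\n".join(history))
    return policy, history
```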
We provide pseudocode for our SCAI as Algorithm <ref>.Naturally, the AssistantLM then becomes the key linchpin for acting optimally with respect to a directive sampled from the implicit MetaLM posterior. This implementation of posterior sampling via memory-based meta learning has been established in prior work  <cit.>, with the interpretation that the MetaLM adaptively filters the history of past user-assistant interactions according to Bayes' rule <cit.> and, in the context of LLMs, essentially produces a verbal policy from the overall posterior predictive distribution over optimal policies <cit.>. Finally, we note that the SCAI system likely interacts with several users or groups of users in parallel, potentially playing different roles of either issuing or deciding on ultimatums through differing context samples. Such concurrent reinforcement learning has been established not only as an effective practical heuristic <cit.> for accelerating learning speed but also as a provably-efficient exploration technique <cit.>, particularly when used in conjunction with PSRL <cit.>. Our approach extends this latter line of work to incorporate contextual MDPs as well as considerations for natural language based tasks with LLMs.0.43 0.550.55 § ADDITIONAL SIMULATION RESULTS§ PROMPT ILLUSTARTION
http://arxiv.org/abs/2310.17769v2
{ "authors": [ "Jan-Philipp Fränken", "Sam Kwok", "Peixuan Ye", "Kanishk Gandhi", "Dilip Arumugam", "Jared Moore", "Alex Tamkin", "Tobias Gerstenberg", "Noah D. Goodman" ], "categories": [ "cs.CL", "cs.AI" ], "primary_category": "cs.CL", "published": "20231026202703", "title": "Social Contract AI: Aligning AI Assistants with Implicit Group Norms" }
arabic Graph Convolutional Networks for Complex Traffic Scenario Classification Tobias Hoek^1,2 Holger Caesar^1^1TU Delft Andreas Falkovén^2^2Kognic Tommy Johansson^2January 14, 2024 =============================================================================================== A scenario-based testing approach can reduce the time required to obtain statistically significant evidence of the safety of Automated Driving Systems (ADS). Identifying these scenarios in an automated manner is a challenging task. Most methods on scenario classification do not work for complex scenarios with diverse environments (highways, urban) and interaction with other traffic agents. This is mirrored in their approaches which model an individual vehicle in relation to its environment, but neglect the interaction between multiple vehicles (e.g. cut-ins, stationary lead vehicle). Furthermore, existing datasets lack diversity and do not have per-frame annotations to accurately learn the start and end time of a scenario. We propose a method for complex traffic scenario classification that is able to model the interaction of a vehicle with the environment, as well as other agents. We use Graph Convolutional Networks to model spatial and temporal aspects of these scenarios. Expanding the nuScenes and Argoverse 2 driving datasets, we introduce a scenario-labeled dataset, which covers different driving environments and is annotated per frame. Training our method on this dataset, we present a promising baseline for future research on per-frame complex scenario classification. § INTRODUCTION Self-driving or autonomous vehicles (AVs) have received significant attention in recent years due to their potential to revolutionize transportation. These vehicles offer a promising solution to many of the drawbacks associated with traditional commuting methods.AVs have the potential to enhance the commuting experience in terms of comfort and productivity during the ride, while also addressing societal challenges such as emissions reduction <cit.>, traffic congestion resolution <cit.>, and lower travel costs <cit.>. However, one of the most significant advantages of AVs is their potential to improve overall road safety for all traffic participants.The existing simpler automation systems for vehicles known as Advanced driver-assistance systems (ADAS) show promise in reducing traffic incidents <cit.> already. Ongoing research and car development aims to enhance traffic safety through higher-level Automated Driving Systems (ADS).To ensure superior performance of ADS compared to human drivers, proper development and testing are crucial. However, conducting test drives in real traffic poses safety risks and requires an impractical amount of driving miles to gather statistically significant evidence. According to <cit.>, obtaining such evidence would require 275 million failure-free miles, given the rarity of critical situations in regular traffic scenarios. This timeframe is unfeasible for the production of AVs using regular driving speeds. An alternative solution involves conducting smaller test drives where critical situations are simulated. By leveraging these simulated scenarios, it is possible to obtain the same statistical evidence of ADS performance in critical situations within a more manageable timeframe <cit.>. To keep pace with the rapid development of these systems, multiple countries are updating their legislation for the acceptance of AVs. 
A clear example of such a change in legislation is the regulation that is proposed by the EU (EU2019/2144 <cit.>).This regulation establishes type-approval requirements for vehicles and components, emphasizing safety for all, including occupants and vulnerable road users. While existing regulations covered ADAS and ADS evaluations, advancements towards SAE level 4 self-driving vehicles and effective scenario-based testing led to adding critical scenarios to mandatory acceptance tests. These scenarios play a vital role in gathering the required statistical evidence to validate the safety performance necessary for regulatory approval. Consequently, it becomes crucial to determine if your system is ready for these particular scenarios. Detecting such scenarios within your dataset not only reveals insights about dataset quality but also streamlines both validation and training processes. This could be done with the use of a classification algorithm for scenarios. However, this gives rise to two issues.Firstly, the most current scenario classification methods target simpler situations. These situations involve either a single vehicle or the vehicle's interaction with its surroundings. However, the few existing approaches dealing with complex scenarios perform classification per-agent instead of per frame (where each frame is a snapshot at a certain interval in the time dimension). Secondly, no comprehensive publicly available dataset exists that provides per-frame labeling of scenarios. To address these issues we present the following contributions:0em * we Designed a supervised scenario classification approach that is able to classify complex ego-centered scenarios, that are not constrained to specific environments (e.g. highways) and that requires modeling the relation between agents, and agents and the environment based on their position, direction, and velocity.* We extend Graph Convolutional Networks to incorporate the latest advances for representing agent-agent and agent-environment interaction. For the temporal aggregation CNN over the temporal dimension is used.* We created a scenario classification dataset by hand-selecting scenarios and annotating every frame. This is made as an extension of the publicly available datasets nuScenes <cit.> and Argoverse 2 <cit.>.* We evaluate our method and related works on our dataset and compare it against baselines, creating a reference approach for future work on our dataset.§ RELATED WORK The literature on scenario classification is limited. Additionally looking into closely related tasks such as maneuver detection or trajectory prediction can be insightful. These methodologies have in common that there are challenges in the spatial and in the temporal aspect. The existing works will be elaborated accordingly. §.§ Scenario classificationFew existing works focus on scenario classification.A method by <cit.> is based on rules and detects lane changes of surrounding vehicles. It relies on distance measurements between the ego vehicle, other vehicles and environmental features like lane markings. However, this rule-based method will show its limitations when trying to classify more complex scenarios involving multiple cars or strong variations within a specific scenario.<cit.> proposed a method that also uses the sensor measurements taken from various sensors of the car, such as the inertial measurement unit (IMU) or distance measures to lanes or other cars. These raw measurements are used as input channels for their CNN. 
This approach shows more promise in terms of scalability compared to the rule-based approach, but it still has limitations as it does not consider the presence of other vehicles.In addition to the sensor measurements, <cit.> uses dashcam footage within their pipeline. This footage is merged into one feature block with the help of intermediate object detection steps.This work is limited because it is solely based on the detection of the cars and does not use information such as lane markings.Spatial aggregation. There are also models that use a more comprehensive spatial aggregation. These works use a form of intermediate representation. Methods that employ a grid as an intermediate representation are proposed by <cit.> and <cit.>. The former suggests a grid representation that incorporates occupancy and velocity. The latter also developed a grid representation, but in contrast, this grid is based on polar coordinates and the velocity relative to the ego vehicle. However, the main limitation of both these methods is their inability to incorporate environment information (e.g. road markings or centerlines of driveable road). In general, grids have limitations. Low resolutions cause rasterization artifacts and hinder the accurate shape depiction. Raising resolution may alleviate these issues but will enlarge the grid with excessive and unnecessary information.Maintaining high resolution but a small input size, which has a positive effect on computational efficiency, is addressed via graphs in <cit.>, which is an approach designed for vehicle behavior classification. In this work, the graph encodes agent locations and points sampled on lane markings as vertices. Using a Graph Convolutional Network <cit.> (GCN), the relation between all these vertices is processed for further steps. This model can classify scenarios involving actor-environment relationships and relationships among various actors. However, it lacks critical information about agent direction and velocity, which is essential for distinguishing scenarios in diverse situations, such as the contrast between highway and urban driving settings. Additionally, their method focuses on actions of all agents, rather than actions involving or around the ego vehicle.Several works use GCNs for trajectory prediction models.In <cit.>, a comparable graph input approach is employed. <cit.> differs from the other two because the edge weights are normalized based on the distance between agents. The convolution in LaneGCN <cit.> differs from these three works because it uses dilated convolution <cit.> between the graph layers for a larger receptive field. LaneGCN uses multiple different GCNs based on the directional relations of the selected waypoints. Our work differs from LaneGCN in terms of how the GCN is used.They apply GCN solely to static map data, excluding agents.Their temporal focus is on initial agent trajectories, ignoring map evolution and agent-map relationships using GCN. Our approach integrates both aspects, leveraging the temporal evolution of the map and usage of GCN for agent-map and agent-agent interactions. Temporal aggregation. Several methods are used in literature to encode the temporal aspect of the scenario. Some methods use Recurrent Neural Networks, e.g. LSTMs <cit.> or GRUs <cit.>. These methods can suffer from training inefficiency, slow computational speed,and are prone to overfitting for small datasets due to their large number of parameters <cit.>. A newer approach is the use of attention mechanisms <cit.>. 
This is used in <cit.> or in combination with LSTMs in <cit.>. These attention models show very promising results on temporal data, although they also increase complexity significantly and require large amounts of data. A simpler alternative involves applying a conventional CNN across the temporal dimension. On smaller datasets used for scenario classification tasks, this shows good results either by performing this convolution on crafted or learned features <cit.> or merged deeper within the model where the consecutive CNNS are alternately on the spatial and the temporal aspect <cit.>. §.§ Datasets Numerous datasets have been proposed for autonomous vehicle perception <cit.>, prediction <cit.> and planning <cit.>. Unfortunately, the situation differs for the scenario classification task.For this task, real-world traffic scenarios are categorized into predefined classes per interval of frequency f. Existing datasets for scenario classification use either simulated data <cit.> or data obtained in limited environments, e.g. only highway data <cit.>. Furthermore, many datasets are not publicly available <cit.>.While <cit.> offers information about scenarios, they label entire sequences as scenarios, which is not suitable for precise scenario classification. Instead, we label individual frames. Furthermore, their work is auto-labeled and manually reviewed to guarantee high precision. In contrast, we manually reviewed two datasets to also guarantee a high recall.This enables us to phrase scenario classification as a multi-class classification problem. § DATASET In order to develop and assess a scenario classification technique, a corresponding dataset is essential. Given the absence of an existing or accessible one, we generated our own dataset. The process for creating this dataset is outlined in this section. §.§ Scenario definition According to <cit.>, a scenario defines as follows:A scenario depicts the temporal evolution between scenes within a sequence, starting with an initial scene and covering a specified duration. Here a scene is defined as: A snapshot of the environment, encompassing scenery, dynamic elements, actors' self-representations, and entity relationships. To create a list of relevant scenarios, we start from the scenarios proposed in the EU type-approval regulation <cit.> and remove scenarios that cannot be detected in public datasets. Examples of these removed scenarios are collision avoidance, emergency brake scenarios, and specific scenarios such as blocking toll gates.Finally, we select 8 scenario categories (Tab. <ref>). The frequency and duration statistics in this table correspond to our dataset. Further details on this will be provided in the scenario extraction paragraph.Here a cut-in (1) represents a scenario where another vehicle changes lanes into the ego vehicles lane. Stationary vehicle in lane (2) is a variation on a cut-out scenario, where a stationary vehicle is in the ego lane, such that the ego vehicle has to either brake or perform an obstacle avoidance maneuver. 3 and 4 are ego lane changes in both directions. 5,6,7 represent the actions at crossings. No scenario (0) indicates all other driving scenarios, including lane keeping and more complex maneuvers not included in the list. This list of scenarios is mutually exclusive and complete, thus making it suitable for the scenario classification task. §.§ Dataset creationAfter defining the scenarios of interest we created the dataset based on nuScenes and Argoverse 2. 
This process involved three main phases. Initially, data was chosen and labeled. Then, a preprocessing step aligned the differing frequencies between the two datasets. Finally, to ensure a better balance and eliminate less relevant timeframes, we removed unnecessary timesteps in the dataset's final stage. Data selection and labeling. The traffic information used for this dataset is obtained from existing public driving datasets, specifically nuScenes <cit.> and Argoverse 2 <cit.>.In the selection phase, all the front-camera videos in the datasets are inspected manually. A sequence is selected for the dataset if it includes at least one of the explicitly defined scenario classes (classes 1-7) from Sec. <ref>). Meaning that sequences with only the presence of class 0 are not taken into account. This mitigates the extreme class imbalance inherent in the task, as class 0 dominates the datasets. These class 0 timeframes around labeled scenarios are taken into account resulting in a sufficient number of occurrences within the dataset.For each keyframe in the dataset, annotated with bounding boxes for each agent, we label the current scenario. This results in 312 sequences of 20 seconds obtained from nuScenes, and 253 sequences of 15 seconds from Argoverse, or a total of 565 sequences. Frequency alignment. We use nuScenes and Argoverse 2, which are annotated at 2Hz and 10Hz respectively.We use linear interpolation to bring both datasets to the same frequency (4Hz). For Argoverse, this means that we interpolate between every 2nd and 3rd keyframe. This enables us to train the same scenario classification model on both.Scenario extraction. Instead of using complete sequences from the original datasets, we extract shorter sequences for each scenario. Our interest extends beyond classification; we also need to determine precise scenario start and end times, which requires temporal context. We obtain this by cutting out all scenarios (except no scenario) with a random amount of timesteps before and after each scenario. This is limited to a maximum of 8, if available in the original sequence, and a minimum of zero. This procedure has the advantage that it further reduces class imbalance since most of the frames in the full sequences are labeled as no scenario. This results in 652 sequences of varying lengths, since the full sequences may contain multiple scenarios. Sequence durations range from 2 to 23 seconds, with an average of 8 seconds. All data is sampled at 4Hz. The distribution between classes can be seen in Tab. <ref>. We notice that the more complex scenarios (1,2,3,4) occur less often than the crossing related scenarios. The standard deviation is notably significant compared to the mean duration. This is unsurprising, as scenarios can be executed at different speeds, leading to a broad range of durations. § METHOD Our method utilizes Graph Convolutional Networks to classify complex traffic scenarios, capturing agent-agent and agent-environment interactions. Shown in Fig. 
<ref>, our model takes graph inputs, comprising three core components: spatial aggregation, temporal aggregation, and a classification head producing frame-wise class probabilities.§.§ Graph construction We represent each frame of the traffic scenario by a graph.This graph is given by G_t = { V_t, A_t }, where t is the frame index with t ∈{1, ..., T } and T represents the sequence length.For each frame, V_t denotes the graph's vertices with V ∈ℝ^N × c.Here is N the number of vertices present in the graph and c represents the feature channels.In this case V_i = (x,y,ϕ, v) and c = 4. Here x,y represents the bird's eye view location of the vertex, and ϕ is the heading angle both in an ego-centered frame. v is the velocity in m/s of the agent represented by the i^th index.See Fig. <ref> for a visual explanation.For our method, the nuScenes x,y, and ϕ had to be transformed to the ego frame. In Argoverse 2 this was already the case. Waypoints are added to the graph similarly, at 3-meter intervals along the centerlines of the driveable road. These vertices have zero velocity such that V_i = (x,y,ϕ, 0). ϕ is the driving direction of the road segment at this particular waypoint. The model will learn to distinguish road waypoints and vehicles in a later stage. The edges between vertices are denoted by adjacency matrices A_ti∈ℝ^N × N.Here i represents 5 different adjacency matrices employed to learn diverse relationships. The first is A_suc, which covers relations between waypoints and it successive waypoints. A_pre is the same for preceding waypoints. A_W2A is the connection between waypoints and agents (excluding ego). A_E2W covers the relation between the ego-vehicle and the waypoints, and at last A_E2A is between the ego-vehicle and the other agents. Each adjacency matrix is applied in different stages outlined in Sec. <ref> and <ref>.To illustrate, we show how the graph G_t for a scenario is constructed in Fig.<ref> and Eq.<ref>. These illustrations provide insight into the creation of V_t and an Adjacency matrix respectively. The depicted relation is A_E2A in this matrix, which includes self-connections via the addition of the identity matrix. Without self-connections, only the neighboring vertices are taken into account in the GCN, not the vertex itself.A+I =[[ 1 1 1 1 0 ⋯ 0; 1 1 0 0 0 ⋯ 0; 1 0 1 0 0 ⋯ 0; 1 0 0 1 0 ⋯ 0; 0 0 0 0 0 ⋯ 0; ⋮ ⋮ ⋮ ⋮ ⋮ ⋱ ⋮; 0 0 0 0 0 ⋯ 0 ]]_N× N§.§ Spatial aggregation Spatial feature extraction occurs across three stages, each stage is based on a different relation, such that the model can learn whether a vertex is a waypoint or an agent. The first one learns the spatial aspect of the map data and their relation, the second one does the same for all the other agents. The last stage is where the relation between the environmental and agent features and the ego vehicle is learned.Every stage is built upon a GCN <cit.>. The matrix operations required to compute the new hidden layer are given by the following equation:H^(l+1)=σ(D̃^-1/2ÃD̃^-1/2 H^(l) W^(l))To include self-loops, Ã_t is obtained by adding the identity matrix I to A_t.Here A is the Adjacency corresponding to the stage. The diagonal degree matrix D̃ is employed to calculate the average of neighboring vertices. H denotes the feature matrix prior to convolution, and W represents the trained weights. Finally, σ represents an activation function, ReLu <cit.> in this case.§.§.§ Environment representation We represent the centerline of each drivable lane on the road as a static vertex in the graph. 
§.§.§ Environment representation We represent the centerline of each drivable lane on the road as a static vertex in the graph. Its ϕ is the direction pointing towards its successive waypoint, and since a waypoint is static, v = 0, such that V_i = (x, y, ϕ, 0). The spatial dependencies of these waypoints are extracted in two parallel convolution blocks; see the environment representation part of Fig. <ref>. The objective of these two blocks is to learn the directional relation between the centerline waypoints. The adjacency matrices used in these two steps are A_t,i with i ∈ {suc, pre} (successive, preceding). A_suc is obtained by using directional connections between a centerline waypoint and its successive point. Since the waypoints of a lane segment are ordered from start to end, the successor adjacency for a specific segment is obtained by shifting that segment's identity matrix one place to the right, so that the connection is between a vertex and its successive vertex. A_suc is assembled by combining these segment adjacency blocks into a single matrix. Extra connections are added between the end of one segment and the start of its succeeding segment. If a lane segment has two or more successive lane segments (e.g., a fork crossing), connections to the first points of both segments are added. A_pre is constructed likewise, but in the opposite direction of A_suc. Each of the parallel graph convolution blocks consists of 4 layers of graph convolution followed by a linear layer. The outcomes of both blocks are summed together and fed through a fully connected layer before passing to the next step. This approach is inspired by the MapNet part of LaneGCN <cit.>. For simplicity, we do not use the relations between the waypoints and their left and right neighbors, which is the case in LaneGCN.

§.§.§ Agent representation As mentioned earlier, agents are represented as graph vertices V_i = (x, y, ϕ, v). The relationships among all agents and the ego vehicle are encoded in the Agent-Environment fusion part of Fig. <ref>. A graph convolution using A_W2A (waypoints to agents) precedes this step. A_W2A captures relations between agents (excluding the ego vehicle) and the environment features from Sec. <ref> if they are within the distance threshold of d=30m, limiting this first GCN layer to cars in the direct environment. The purpose of this operation is to "update" the spatial information of the other vehicles according to their relation with the environment, such that the relation between the agents and the environment is taken into account when modeling the relation between both separate parts and the ego vehicle. Next, a GCN facilitated by A_E2A is applied between the ego vehicle and the features of the other vehicles obtained in the previous step. A_E2A is produced by connecting the ego vehicle to all other vehicles within d=30m. These connections are unweighted such that the model can learn the importance weights of the connections themselves. This block consists of two GCN layers, fewer than in the environment representation, as the lower density of vertices (there are fewer agents than waypoints) makes the required receptive field achievable after just two layers.
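The sketch below illustrates, under simplifying assumptions, how two of the relations described above could be assembled: a directed waypoint-to-successor adjacency (A_suc) built per ordered lane segment, and an unweighted ego-agent adjacency (A_E2A) restricted to a 30 m radius. The helper names are hypothetical; the cross-segment fork connections and A_pre (the preceding-waypoint counterpart, essentially the transpose construction) are omitted for brevity.

```python
import numpy as np

def successor_adjacency(segment_lengths):
    """Directed waypoint-to-successor adjacency (A_suc) for ordered lane segments.
    segment_lengths: number of waypoints in each lane segment, in graph order.
    Connections between segments (e.g., at forks) would be added on top of this."""
    n = sum(segment_lengths)
    A_suc = np.zeros((n, n))
    start = 0
    for length in segment_lengths:
        for i in range(start, start + length - 1):
            A_suc[i, i + 1] = 1.0      # waypoint -> its successor within the segment
        start += length
    return A_suc

def ego_agent_adjacency(positions, ego_index=0, radius=30.0):
    """Unweighted ego-to-agent connections (A_E2A) within a 30 m radius."""
    n = len(positions)
    A = np.zeros((n, n))
    dists = np.linalg.norm(positions - positions[ego_index], axis=1)
    for j in range(n):
        if j != ego_index and dists[j] <= radius:
            A[ego_index, j] = A[j, ego_index] = 1.0
    return A

A_suc = successor_adjacency([4, 3])   # two lane segments with 4 and 3 waypoints
A_e2a = ego_agent_adjacency(np.array([[0., 0.], [5., 2.], [40., 0.]]))  # third agent is out of range
```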
§.§.§ Agent environment fusion First, the features of the environment and the agents are merged. This is done in the block running in parallel with the agent representation block mentioned in Sec. <ref>. This parallel structure allows the model to learn features simultaneously using both environmental and agent information, while still preserving their distinctiveness. The relationship between the ego vehicle and the environmental features from Sec. <ref> is established using A_E2W (ego to waypoints). A_E2W encapsulates unidirectional connections from the ego vehicle to waypoints within d=30m. These connections remain unweighted, enabling the model to learn their individual importance. The output of the GCN blocks is fed through a fully connected layer separately before the second stage of the fusion process. In the second part of the feature fusion process, the outputs are summed and fed through a fully connected layer to generate the final spatial encoding.

§.§ Temporal aggregation As defined in Sec. <ref>, scenarios describe the temporal development of a scene. Thus, scenario classification cannot depend solely on spatial data; the evolution of the graph over time must be considered. Since we are interested in short-term scenarios (8s on average), we use CNNs for temporal aggregation. All the spatial information is captured by the aforementioned blocks. The input of the CNN has shape H ∈ ℝ^N × F × T. Here, F represents the number of channels of the features learned from the previous step, T denotes the number of frames, and N the number of vertices in the graph. A convolutional kernel with dimensions 1 × F × Q is applied to slide over this input along the T dimension to learn the temporal dependencies. We use dilated convolution <cit.> in the temporal dimension for a larger receptive field without using too many layers. The input is padded to maintain the same output size. It is important to note that because the kernel convolves over multiple timeframes, including future timeframes, the model is restricted to performing offline predictions exclusively.

§.§ Classification head The last stage of the model consists of a fully connected layer that outputs the class probability logits for every frame of the observed temporal window, in the shape T × n_classes (= 8 in our case). A softmax function is used to obtain class probabilities. The class with the highest probability is selected as the final prediction at frame t, which gives a set of predictions Y = (c_1, ..., c_T), where T is the sequence length, as output.

§.§ Loss function The network is trained by minimizing the common cross-entropy loss for n classes:

L_CE = -∑_i=1^n y_i log ŷ_i

where ŷ_i is the softmax probability for the i^th class and y_i is 1 if class label i is the correct ground-truth label and 0 otherwise.
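To tie the temporal aggregation, classification head, and loss together, here is a minimal sketch assuming per-frame spatial encodings of shape (N, F, T). The text does not spell out how per-vertex features are pooled into a single per-frame vector before the fully connected layer, so the sketch simply averages over vertices; the module name, the two-layer depth, and the dilation values are illustrative rather than the exact configuration.

```python
import torch
import torch.nn as nn

class TemporalHead(nn.Module):
    """Illustrative temporal aggregation + per-frame classification.
    Input: spatial encodings of shape (N, F, T); output: (T, n_classes) logits."""

    def __init__(self, in_channels=128, hidden=16, n_classes=8):
        super().__init__()
        # 1 x Q kernels slide along the time axis only; dilation widens the
        # receptive field, and padding keeps the sequence length T unchanged.
        self.temporal = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=(1, 3), padding=(0, 1), dilation=(1, 1)),
            nn.SELU(),
            nn.Conv2d(hidden, hidden, kernel_size=(1, 3), padding=(0, 2), dilation=(1, 2)),
            nn.SELU(),
        )
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, h):
        # h: (N, F, T) -> treat vertices as the "height" dimension of a 2-D convolution.
        h = h.permute(1, 0, 2).unsqueeze(0)            # (1, F, N, T)
        h = self.temporal(h)                            # (1, hidden, N, T)
        h = h.mean(dim=2).squeeze(0).transpose(0, 1)    # pool over vertices -> (T, hidden)
        return self.classifier(h)                       # (T, n_classes) logits

head = TemporalHead()
logits = head(torch.randn(20, 128, 12))                             # 20 vertices, 12 frames
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 8, (12,)))    # frame-wise labels
```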
§ IMPLEMENTATION DETAILS The model primarily uses PyTorch and PyTorch Geometric (PYG) for GCN implementation and efficient graph handling. Training occurs on an NVIDIA Titan RTX GPU. To enhance computational efficiency, adjacency matrices are stored in sparse form, using two indices per connection rather than dense matrices.

§.§ Spatial feature extractor The block with 4 graph convolution layers detailed in Sec. <ref> uses output sizes of 16, 64, 128, and 128 channels. The agent representation block contains two graph convolution layers with an output feature dimension of 128. The Agent-environment fusion block, with two graph convolution layers, retains 128 dimensions. In all three parts, Layer Normalization <cit.> and a Rectified Linear Unit (ReLU) <cit.> are applied after every GCN layer.

§.§ Temporal Aggregation The temporal feature extractor is composed of four CNN layers. The first layer reduces the feature dimensions from 128 to 16. The next two layers maintain 16 feature channels, and each uses a 1 × 3 kernel. In the first three layers, asymmetrical padding of 1, 2, and 4 is applied in the time dimension, respectively. Zero padding is used in the vertex dimension to preserve the same dimensions. The last convolutional layer uses a kernel size of 1 × 7 with a padding of 3 in the time dimension only; this is done to smooth the predictions. After each convolutional layer, a Scaled Exponential Linear Unit (SELU) <cit.> is applied.

§.§ Training process The model is trained for 25 epochs using the Adam optimizer <cit.>. The learning rate is initiated at 1×10^-4 and decays with a factor of 0.1 after epochs 8, 14, and 18. Class weights are used to counteract the effects of class imbalance on the classification output. The class weights are as follows:

W_i = N_samples / (n_classes × n_samples,i)

where W_i is the weight for a specific class i, N_samples is the total number of samples, n_classes the total number of classes, and n_samples,i the number of samples labeled as i.

§ EXPERIMENTS The proposed model's performance is assessed in three steps. First, we compare it to simpler scenario classification models to understand the impact of our model's elements. Then, we perform an error analysis with an Error Distribution Diagram for the top-performing model. Finally, we assess per-class performance.

§.§ Ablation study The metric used to compare different versions of the models is the area under the precision-recall curve (PR-AUC). This metric is advantageous because it focuses on identifying positives, rather than attempting to balance negatives, without the need to fine-tune a decision threshold. PR-AUC is also well suited for use on imbalanced datasets. The average PR-AUC is calculated by first computing the PR-AUC of every class using a one-versus-all strategy and then averaging over classes. We conducted an ablation study to assess the significance of each component of the model. The findings are summarized in Tab. <ref>. The full model (as in Fig. <ref>) outperforms all other variations. Adding residual connections over the main blocks of Fig. <ref> performs worse. Introducing weighted adjacency matrices, where connections reflect the reciprocal of the distance (d^-1) so that closer vehicles receive larger weights, also suppresses performance compared to unweighted adjacency. The importance of map data becomes evident when examining the results of the experiment in which the map data is removed. Substituting the temporal aggregation method with an LSTM instead of a convolution results in poorer performance compared to the full model, although the inclusion of residual connections enhances performance in this case. A model with the same spatial encoding as the full model, but without any temporal aggregation, is shown as "No temp. aggregation"; its low PR-AUC shows the importance of temporal aggregation. Furthermore, the removal of the ego convolution from Sec. <ref> results in a substantial performance drop, although it still performs better than the model lacking both the ego convolution and the map convolution. The worst model is the baseline, comprising a single GCN applied across all available vertices, followed by a CNN in the temporal dimension. This architecture fails to capture the distinctions between waypoints and agents, leading to a significant performance decline.
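The average PR-AUC used above can be estimated as sketched below, treating each class one-versus-all and using average precision as a standard estimator of the area under the precision-recall curve. The exact estimator used in the paper is not specified, and the helper name is illustrative.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def mean_pr_auc(frame_probs, frame_labels, n_classes=8):
    """Average PR-AUC over classes, one-versus-all, from frame-wise predictions.
    frame_probs:  (T, n_classes) predicted class probabilities.
    frame_labels: (T,) integer ground-truth class per frame."""
    scores = []
    for c in range(n_classes):
        y_true = (frame_labels == c).astype(int)
        if y_true.sum() == 0:      # class absent in this split; skip it
            continue
        scores.append(average_precision_score(y_true, frame_probs[:, c]))
    return float(np.mean(scores))

# Toy example with 6 frames and 3 classes.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1],
                  [0.2, 0.7, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.3, 0.3, 0.4],
                  [0.2, 0.2, 0.6]])
labels = np.array([0, 0, 1, 1, 2, 2])
print(mean_pr_auc(probs, labels, n_classes=3))
```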
§.§ Error analysis Continuous sequences present various challenges, such as varying sequence lengths, potential merging or fragmentation of scenarios, and fuzzy scenario boundaries that are difficult to determine even for humans. To gain a better understanding of the model's predictions, we conducted more comprehensive testing on the model that exhibited the best results in the previous section. The first step involved generating the Error Distribution Diagram (EDD) <cit.>; see Fig. <ref>. The EDD breaks down False Positives (FP) and False Negatives (FN) into multiple categories <cit.>. For FP, we consider three subcategories:

* Overfill: The prediction extends beyond the ground-truth boundary of the scenario.
* Merge: The prediction combines two separate scenarios of the same class into one.
* Insertion: The model predicts a scenario where no scenario is actually happening.

For FN, we have three subcategories:

* Underfill: The prediction does not cover the entire ground truth of a scenario.
* Fragmenting: The model splits one scenario into multiple smaller scenarios.
* Deletion: The prediction fails to detect a scenario.

For multi-class classification, we distinguish between different cases when an FN classification occurs <cit.>. Underfill is divided into two categories: substitute underfill, where the underfill error is replaced by another class, and normal underfill, where it is replaced by 0. Similarly, fragmentation has substitute fragmentation and normal fragmentation. When a boundary lies between two non-zero scenarios, the underfill-overfill error option also comes into play. The occurrence percentages of these error subcategories are visualized in the EDD. Overfill, underfill, and underfill-overfill are placed above the serious error line because these mistakes are inevitable considering the fuzzy boundaries of the scenarios' beginnings and ends.
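For intuition, the sketch below computes a heavily simplified, frame-level version of this breakdown for a single class: FP frames are split into overfill versus insertion depending on whether the predicted segment overlaps any ground truth, and FN frames into underfill versus deletion. Merge, fragmentation, and the substitute variants from the multi-class case are deliberately ignored, so this is only an approximation of the full EDD, and the function names are illustrative.

```python
import numpy as np

def _segments(mask):
    """(start, end) pairs of contiguous True runs, end exclusive."""
    runs, start = [], None
    for i, v in enumerate(list(mask) + [False]):
        if v and start is None:
            start = i
        elif not v and start is not None:
            runs.append((start, i))
            start = None
    return runs

def simple_edd_counts(gt, pred):
    """Simplified frame-level FP/FN breakdown for one scenario class (binary masks).
    Overfill:  FP frames in a predicted segment that overlaps ground truth.
    Insertion: FP frames in a predicted segment with no ground-truth overlap.
    Underfill: FN frames in a ground-truth segment that is partially detected.
    Deletion:  FN frames in a ground-truth segment that is entirely missed."""
    gt, pred = np.asarray(gt, bool), np.asarray(pred, bool)
    counts = {"overfill": 0, "insertion": 0, "underfill": 0, "deletion": 0}
    for s, e in _segments(pred):
        fp = int((~gt[s:e]).sum())
        counts["overfill" if gt[s:e].any() else "insertion"] += fp
    for s, e in _segments(gt):
        fn = int((~pred[s:e]).sum())
        counts["underfill" if pred[s:e].any() else "deletion"] += fn
    return counts

# Prediction starts two frames early (overfill) and misses a short event (deletion).
gt   = [0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0]
pred = [0, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
print(simple_edd_counts(gt, pred))   # {'overfill': 2, 'insertion': 0, 'underfill': 0, 'deletion': 2}
```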
§.§ Per-class performance In addition to comparing different model setups using PR-AUC, we offer a more intuitive metric for each class. Tab. <ref> shows the class prediction accuracy per dataset and the class distribution in the training data. Classes 1 and 2 show lower accuracy on both sets, which can be attributed to two reasons. Firstly, their occurrences are fewer compared to the other classes. Secondly, these scenarios are more complex: they require information from the relations between agents as well as between agents and the environment. The accuracy on class 0 is also high because it is present in every sequence and therefore dominant in the training set. Another positive insight is that the overall accuracy is similar per class for both datasets. Despite some scenarios being underrepresented in one dataset (like class 2), the model generalizes well across traffic scenarios, not just specific datasets. In summary, the model is trained with relatively few instances of each class, which is particularly noticeable when compared to previous scenario classification studies; nevertheless, it continues to exhibit satisfactory performance.

§.§ State-of-the-art comparison The literature presents various works on scenario classification; however, these works often struggle with complex scenarios. <cit.> is capable of classifying more complex scenarios, so to compare performance we conducted comparative tests using their model on our dataset. Implementation-wise, there are notable distinctions between the models. Firstly, our spatial encoding relies on Cartesian coordinates, while theirs is based on the quadrant in which a vehicle or object is situated relative to another. Secondly, their work performs classification per graph vertex instead of per timestep. This means that for a given observation window of length T with N detected objects, it outputs N class predictions instead of T, as in our work. To make testing on our dataset possible, some alterations had to be made to the model of Mylavarapu et al. <cit.>. Details on these alterations are described in Appendix <ref>. The comparative results of these experiments can be found in Tab. <ref>. In the table, we differentiate between ego and non-ego actions. Ego actions are solely related to the ego vehicle's actions and their relationship with the environment, indicated by a checkmark. Non-ego actions involve interactions between several agents and the ego vehicle and their relation to the environment. We can conclude from Tab. <ref> that the overall accuracy is very comparable. Our model outperforms theirs on the ego actions, specifically on the lane changes. The explanation for this lies in the fact that our model is ego-centered: in our GCN part, several layers are focused solely on the relation between the ego vehicle and the environment or actors, whereas in their work the GCN is based on the relation between all present vertices. Furthermore, our model's better performance in predicting ego actions can be attributed to the lower sensitivity of the quadrant-based approach to minor changes, such as those occurring during lane changes, as it only detects differences when a vertex moves to another quadrant. In addition to this accuracy comparison, we compared the average required training time per epoch for both models. These results are also shown in Tab. <ref>; they cover training only, without validation. Our model trains more than three times faster than theirs. Furthermore, in order to train their model on our dataset, we had to shrink the input size due to memory constraints. These factors collectively showcase the substantial computational efficiency advantage of our model. The explanation for this is that their temporal aggregation method, based on a combination of LSTMs and attention, is significantly more complex than ours.

§ DISCUSSION Here we put the results from the previous section in context. While our best method from Tab. <ref> achieves a PR-AUC of 58.8, a model that performs random guessing according to the class frequencies in the validation set achieves only a PR-AUC of 12.5. This shows that our method effectively identifies and detects scenarios. The per-class accuracy in Tab. <ref> showcases the model's ability to classify all trained classes. The model's generalizability extends beyond a single dataset, enabling further training with diverse sources for enhanced performance. This also makes it possible to add scenarios that are not present in the currently used datasets. Our distinguishing aspect is the per-frame classification, which also presents a challenge in terms of performance. The EDD graph in Fig. <ref> becomes insightful in this context. Even human annotators disagree about the precise beginning and end of a scenario. When we accept underfill and overfill errors to some extent, our model performs even better than at first sight. Another advantage of our method is that we support varying sequence lengths, while <cit.> only supports fixed sequence lengths.
Our method also has a significantly reduced training time, which indicates the simplicity and efficiency of the selected components for spatial and temporal aggregation. In conclusion, we developed a competitive method with scalability and computational efficiency advantages compared to related works. Its potential is significant, especially when further refined, optimized, and supplemented with additional data.

§ CONCLUSION In this work, we discussed how scenario-based testing makes the testing of ADS very time-efficient, and how finding these scenarios streamlines this process further. We designed a scenario classification method that is able to find the beginning and the end of diverse and complex scenarios. The model uses GCNs for the spatial aspect and CNNs for the temporal aspect. This combination makes it possible to learn to classify scenarios that are based on interactions of a vehicle with the environment as well as between vehicles. We showed that the model produces a serious error in less than 30% of the frames. This is achieved through training and validating the model on a newly labeled scenario classification dataset, which extends nuScenes and Argoverse 2. We thus provide a baseline model for future works on this dataset. Future work will cover advanced network structures, such as Transformers and other attention models, implemented in various parts of the model. This could be applied for improved spatial aggregation as well as temporal aggregation. We will also investigate whether Large Language Models can be used to automate the dataset creation process and enable open-set scenario classification.

§ APPENDIX

§ COMPARISON TO MYLAVARAPU ET AL. When comparing our model to the approach used in <cit.>, we notice several differences. First, their approach takes camera footage as input and generates an activity label for each node within the graph. These labels include: 1) moving away, 2) moving towards us, 3) parked, 4) left lane change, 5) right lane change, and 6) overtaking. Nodes that represent waypoints are labeled as parked. In both models there is a clear distinction between spatial and temporal encoding. Vertices in their model are of the shape V_i = (O), where O is the object type, O = {vehicle, waypoint}; this means that a waypoint is labeled as 1 and a car as 0. In our model, the nodes are given as V_i = (x, y, ϕ, v). Their model employs multi-relation GCNs over the quadrant in which two vertices are located relative to each other, while we use multi-relation GCNs based on the object types. In their model, temporal aspects are captured using LSTMs and attention, while we use simple convolutions over the time dimension. As discussed earlier, our model focuses on the ego vehicle, because several GCN layers perform convolution only on the relation between the ego vehicle and its environment or the surrounding agents. In their model, the GCN uses the relation between all the present vertices. This means that in the rare case that two scenarios happen simultaneously within the observation window, their model does not necessarily classify the correct scenario. Our approach is ego-centered and therefore more likely to classify the correct scenario in this case. Because of these differences, certain adjustments were necessary prior to conducting a comparative test. As the published code lacks the semantic segmentation part, we do not use this component.
We create a graph for our dataset, which we use as input to their method. We obtain adjacency matrices in their format by computing the angles between vertices. These angles are then replaced by the labels (0, 1, 3, 4, 5), which stand for top-left, bottom-left, top-right, bottom-right, and self-edge, whenever the angle matches a quadrant or the edge is a self-edge. Furthermore, their model is designed for fixed sequence lengths. To make it work on our dataset, with sequences of varying lengths, the sequences are padded to a fixed length. The padding is removed after the GCN part, such that the original sequence length remains. Since every timestep is handled separately inside the GCN, this padding does not affect the other timesteps. To convert their model to perform classification over the timesteps instead of the vertices, the last average pooling and fully connected layers are applied on the vertex dimension instead of the temporal dimension, resulting in a T × n_classes output instead of N × n_classes.

§ OUTPUT VISUALIZATION To gain a better understanding of the different models' performance, we visualize a specific scenario in Fig. <ref>; an image from the front camera of this scenario is shown in Fig. <ref>. We see the agent cutting in from the right. In this visualization, only the relevant vehicles are plotted, i.e., the ego vehicle and one agent. To improve clarity, we have represented the (x, y) coordinates at intervals of 2 frames (2Hz). The green sections denote drivable road segments. We can see that in this cut-in scenario, the agent enters from the side of the road. The decreasing gap between the ego points as time progresses implies braking, while the widening gap between the agent's locations indicates that it is accelerating within the lane.

The outputs of the different models for this specific scenario are shown in Tab. <ref>. The full model, when compared to the ground truth, demonstrates good performance. However, it detects the scenario too early, leading to the predictions at frames 5 and 6 being labeled as overfill for the EDD. The significance of temporal aggregation becomes apparent when examining the "No temporal agg" model. It exhibits a high number of incorrect classifications and fails to capture the temporal aspect of any scenario; this can be seen in the frequent switches between classes over time and in scenario occurrences of only one frame, which never occur in real-world data. The LSTM model learns that a cut-in is present within this sequence, since it predicts only 1s and 0s, but it fails to find the correct beginning and end of the scenario.
http://arxiv.org/abs/2310.17773v1
{ "authors": [ "Tobias Hoek", "Holger Caesar", "Andreas Falkovén", "Tommy Johansson" ], "categories": [ "cs.CV", "cs.AI", "cs.LG", "cs.MA", "I.2; I.4; I.5" ], "primary_category": "cs.CV", "published": "20231026205124", "title": "Graph Convolutional Networks for Complex Traffic Scenario Classification" }
From Awareness to Action: Exploring End-User Empowerment Interventions for Dark Patterns in UX
(Short title: Exploring End-User Empowerment Interventions for Dark Patterns in UX)

Both authors contributed equally to this work.
University of Notre Dame, Notre Dame, IN, USA. [email protected]. 0000-0003-0845-5563
[1] Work done as a visiting researcher at the University of Notre Dame.
Cornell University, Ithaca, NY, USA. [email protected]. 0000-0003-4286-8468
[2] Cornell Tech, New York, NY, USA
Virginia Tech, Blacksburg, VA, USA. [email protected]
University of Notre Dame, Notre Dame, IN, USA. [email protected]
Toby Jia-Jun Li

The study of UX dark patterns, i.e., UI designs that seek to manipulate user behaviors, often for the benefit of online services, has drawn significant attention in the CHI and CSCW communities in recent years. To complement previous studies in addressing dark patterns from (1) the designer's perspective on education and advocacy for ethical designs and (2) the policymaker's perspective on new regulations, we propose an end-user-empowerment intervention approach that helps users (1) raise their awareness of dark patterns and understand the underlying design intents, and (2) take action to counter the effects of dark patterns using a web augmentation approach. Through a two-phase co-design study, including 5 co-design workshops (N=12) and a 2-week technology probe study (N=15), we report findings on users' needs, preferences, and challenges in handling dark patterns, and on their feedback and reactions when their awareness of and action on dark patterns are empowered in a realistic in-situ setting.

CCS Concepts: [500]Human-centered computing HCI theory, concepts and models; [300]Human-centered computing Empirical studies in interaction design

§ INTRODUCTION
Another instance is an act named “Deceptive Experiences To Online Users Reduction (DETOUR)” introduced to prohibit large online platforms from using dark patterns <cit.>.However, these efforts often fell short of fully utilizing the autonomy of end users in self-protection <cit.>.End users have strong incentives and the desire to protect themselves from online threats, but often lack the capacity and associated support <cit.>. Moreover, dark patterns are generative and shapeshifting, thus will continuously evolve, making it difficult to fully define and regulate through policies.Therefore, to complement previous efforts, we take on an end-user-empowerment orientation in this paper to explore the design of interventions for dark patterns considering the autonomy of end users.Guided by the Protection-Motivation Theory (PMT) <cit.>, we coined two types of intervention for our end-user-empowerment approach, targeting users' awareness and action. First, we enhance awareness by increasing transparency about the presence and impacts of dark patterns. Second, we enable users to take action against dark patterns, as previous studies have shown that awareness alone is not sufficient <cit.>. We employ a web augmentation approach, allowing users to select between pre-defined UI enhancements to dark patterns according to their preferences.We also propose a Design-Behavior-Outcome framework, to map out the design space for UI enhancements in user action.This framework situates individual intervention techniques (e.g., hiding, disabling, friction, etc.) from previous work <cit.> at different interaction phases between users and dark patterns. The resulting UI enhancements can change interface designs and user flows, or evoke users to reflect on the consequences caused by dark patterns. To explore the design of our end-user-empowerment intervention for UX dark patterns, we conducted a two-phase co-design study.The first phase was five exploratory co-design workshops with 12 participants. We investigated user needs, challenges, and preferences in handling UX dark patterns.Through the workshops, we found that users have the desire to actively learn about dark patterns' impact, and their perceptions and coping mechanisms of dark patterns are individualized and dynamically changing.They also expect to be able to counteract dark patterns by changing interfaces, adjusting user flows, and reflecting on behavioral outcomes.Informed by the results of the first phase workshops, we further curated and deployed a technology probe study with 15 new participants for two weeks. The probe study aims to contextualize users in their everyday experience, investigate their feedback towards our approach throughout 2 weeks, and elicit more design implications for future end-user-empowerment interventions. We materialized our awareness and action interventions as a probe named Dark Pita[Dark Pita is an acronym for Dark Pattern Intervention for Transparency, and Accountability.] in the form of a browser extension against a representative sample of dark patterns in popular online services. 
The results showed that with our end-user empowerment approach, users gained transferable knowledge about dark patterns, felt empowered with autonomy over UIs, and chose UI enhancements to act against undesired dark patterns based on their dynamic, contextualized goals on different platforms.=-1 Although the current version of Dark Pita is limited to handling a small sample of dark patterns with hand-crafted design enhancements, it exemplifies a new bottom-up end-user-empowerment approach.The study findings confirmed the effectiveness and presented useful design implications.The paper also outlines a research roadmap towards scaling up our approach with development in user behavior modeling, interface semantic understanding, and citizen science platforms. We end this paper with a discussion on how the end-user-empowerment approach connects to the ongoing efforts in policy-making and advocacy for design ethics. In summary, this paper makes the following contributions.* A novel end-user-empowerment intervention approach for counteracting dark patterns in UX by enabling the end users of interfaces to recognize, understand, and take action upon dark patterns, including raising the transparency of dark patterns' presence and impacts; and modifying dark patterns by switching between UI enhancements according to their own personal preferences and goals. * Findings and design implications from a two-phase co-design study consisting of 5 co-design workshops (N=12) to explore users' underlying needs, preferences, and challenges in handling dark patterns; and a 2-week technology probe study (N=15) to investigate users' feedback and reactions to their awareness of and action on dark patterns being empowered in an everyday setting. * An agenda for the research community to scale up this approach and deploy it in conjunction with ongoing efforts in crowd-sourced collective intelligence, citizen science, machine learning, policy making, and advocacy for design ethics.§ BACKGROUND AND RELATED WORK§.§ Studies of Dark PatternsBrignull coined the term “dark pattern” (also known as “deceptive design pattern”) which refers to “a user interface that has been carefully crafted with an understanding of human psychology to trick users into doing things that they did not intend to” <cit.>. Such dark patterns are prevalent—a previous study analyzed 240 popular mobile apps and found that 95% of them contained at least one instance of dark patterns <cit.>. They commonly come in a variety of types across the web and mobile platforms <cit.> in different cultural context <cit.>, exploiting users' attention <cit.>, time <cit.>, money <cit.>, privacy <cit.> and autonomy in outright or subtle ways <cit.>.At CHI 2023, a new SIG was formed to combat the growing issue of dark patterns in tech design through research, regulation, and interdisciplinary collaboration <cit.>.The prevalence and “dark” nature of dark patterns arose in a wide range of work published in the past years by HCI and CSCW academics. Brignull established a site[https://www.deceptive.design/] to collect examples of dark patterns and divided them into different types <cit.>. Gray et al. <cit.> introduced “dark patterns” as an ethical phenomenon in design and identified five manipulative design strategies: nagging, obstruction, sneaking, interface interference, and forced action. 
This foundational work has led researchers to uncover dark patterns on gaming <cit.>, robotics <cit.>, IoT devices <cit.>, and social platforms <cit.>, thereby establishing both generic <cit.> and domain-specific <cit.> taxonomies of dark patterns. To unify these diverse taxonomies, Mathur et al. <cit.> proposed six design attributes to characterize dark patterns at a high level of generality. They described how dark patterns modify the disclosed information and underlying choice architecture for users, helping us to disclose manipulative mechanisms and provide targeted alternatives in our study. Previous work also examined the designers' perspective regarding their intents and stakeholder values leading to "dark" designs <cit.>. Moreover, studies on user attitude and perception have also expanded the dark pattern literature <cit.>. They investigated users' accounts of felt manipulation <cit.>, unintended behaviors <cit.>, and perceived nuances between dark patterns and "asshole design <cit.>," foregrounding the need for users to have agency over their online experience <cit.>. Therefore, our work builds on previous efforts to address dark patterns (Section <ref>) and understand user awareness (Section <ref>), using co-design methods to explore a new intervention approach to empower users against online manipulation.

§.§ Efforts in Addressing Dark Patterns Previous work investigated how designers, educators, and regulators can contribute to addressing dark patterns' adverse influence on end users <cit.>. From the perspective of designers, a growing number of researchers called for the incorporation of ethics into the design process <cit.>. Chivukula et al. <cit.> revealed that designers often have dark and tacit intentions to persuade users with the business purpose of satisfying stakeholders, even with sensitivity to user values. Academics, therefore, have proposed design methods to foster better alignment with user values, such as value-centered design <cit.>. From an education standpoint, Gray et al. <cit.> encouraged UX professional organizations to build ethical education into the fabric of HCI/UX education and practice. Educators can also offer courses to deepen users' understanding of dark patterns <cit.>, train users to identify them <cit.>, and increase their resistance through long-term boosts <cit.>. In terms of policymaking, efforts to investigate how dark patterns hurt user benefits (Section <ref>) have highlighted a space for new policies to be formed. Regulators can implement economic incentives and regulatory interventions to force companies to reduce dark patterns in their services <cit.>. For example, recently published official reports from the European Union Commission <cit.>, the European Data Protection Board (EDPB) <cit.>, and the Federal Trade Commission (FTC) <cit.> specifically outline taxonomies of dark patterns, examples of violations, and opportunities for characterization and governance interventions. However, most of these efforts overlooked the end users' autonomy of self-protection <cit.>. End users have strong incentives and the desire to protect themselves from threats in their online experiences, but often lack the capacity to do so <cit.>.
Meanwhile, dark patterns are shape-shifting and continuously evolving, making it hard to completely ban them with policies.There is no one-size-fits-all solution.Previous studies have coined many intervention techniques to change individual types of dark patterns, such as enforcing consent <cit.>, hiding or disabling <cit.>, adding friction <cit.>, and using “bright patterns” <cit.>.Due to the diversity of dark patterns, these techniques can hardly be effective for all.To add an additional challenge, users' preferences for intervention techniques can change with their evolving understandings and perceptions <cit.>. Therefore, we need to better understand end users' expectations of interventions and their spontaneous approach to self-protection.In this work, inspired by previous studies of user awareness of dark patterns <cit.> and end-user web augmentation <cit.>, we take a human-centered approach to support end users, by disclosing the presence and impact of dark patterns, and empowering users to “fix” the undesired ones with pre-defined UI alternatives. §.§ User Perception of Dark PatternsSeveral researchers have conducted empirical studies to understand users' perception of dark patterns <cit.>.For example, Gray et al. <cit.> identified qualitatively supported insights to describe end users' experiences of being manipulated.They found a broad awareness from users that something is “off” or “not correct,” but still lacking the ability to precisely describe what drives the feeling of being manipulated <cit.>. Maier and Harr <cit.> suggested users' perception of dark patterns goes through four stages—impression, assessment, balance, and acceptability.They have to get an impression of dark patterns, assess their convenience and manipulation, balance the trade-off, and then accept or reject dark patterns.However, the obscurity of design intents and the abuse of cognitive biases <cit.> make it difficult for users to comprehensively understand dark patterns (impression), and therefore hinder the subsequent assessment and balance processes. Therefore, we conducted the first-phase co-design workshops, seeking to answer what information users need to make up for the lack of transparency. Based on the findings, we propose the awareness intervention, aiming to empower the end users of an interface to recognize dark patterns (impression), understand the potential effects on their choice architecture and welfare (assessment), and balance the tension between user values and manipulation (balance).Furthermore, Bongard-Blanchy et al <cit.>. discovered that the awareness of dark patterns does not necessarily lead to the ability to oppose adverse manipulative influences.A single “transparency” intervention may be insufficient to help users counteract dark patterns, implying the dual role of raising users' awareness and empowering them to take actions. Therefore, we propose another intervention called action which is based on an end user web augmentation approach.§.§ Web Augmentation Web augmentation <cit.> allows end users to customize existing web interfaces for personalized user experiences. GreaseMonkey[https://addons.mozilla.org/en-US/firefox/addon/greasemonkey/] is among the earliest browser extensions that manage user scripts to augment websites, many of which target adapting web UIs. Since GreaseMonkey requires users to write code scripts, it is mostly used by people with programming skills. 
Later on, many low-code or no-code web augmentation tools have been designed to lower the technical barrier <cit.>, allowing end users to change websites by direct interaction with UI elements <cit.> or replacing components with defined alternatives <cit.>. This interaction paradigm makes it easier for end users without programming expertise due to its naturalness <cit.>.Many of these tools adopt a community-driven approach, where users share their web augmentations to be re-used by others (e.g. GreaseSpot[https://wiki.greasespot.net/] and Arc Boosts Gallery[https://arc.net/boosts]). However, these communities and their dynamics seem to be under-studied in CSCW, with only a few papers from adjacent research communities <cit.>.Previous work has investigated the use of web augmentation to address specific dark patterns. For example, Nouwens et al. <cit.> designed a browser extension, Consent-O-Matic, that automatically responds to consent pop-ups based on the user’s preferences. Kollnig et al. <cit.> proposed an approach named GreaseDroid, enabling Android users to remove dark patterns in mobile applications with “patches”. While these two technical-centric studies extended the feasibility of dark pattern interventions through web/mobile augmentation, they did not fully investigate end users' needs for such interventions through a user-centered lens. In this work, our two-phase co-design study seeks to complement these technical UI augmentation work by trying to understand end users' needs, preferences, and expectations for interventions through UI augmentation. Our insights provide inspiration for community creators in designing alternatives to unethical design patterns in the future.§ CO-DESIGN WORKSHOPS In the first phase of our study, we conducted 5 in-person exploratory co-design workshops[The protocol of workshops has been reviewed and approved by the IRB at our institution.] to achieve the following goals: =-1 * Exploring users' perceived disruptiveness and annoyance of different dark patterns in various usage contexts;* Learning the existing measures that users have developed or adopted, consciously or subconsciously, to cope with dark patterns;* Investigating users' needs and expectations regarding dark pattern intervention techniques.§.§ ParticipantsWe recruited 12 participants (PA1–PA12; 5 men, 7 women) through word of mouth, email mailing list, and flyer distribution. Our participants represent diverse backgrounds in occupational domains (e.g., health, education, social assistance, and information services), Internet usage (ranging from 2–5 hours to 8+ hours per day), and knowledge of dark patterns (7 had heard of the concept, the remaining 5 had not). Detailed demographics can be found in Appendix <ref>. We conducted 5 in-person workshops with 2–4 participants in each session. The groups were divided based on participants' time availability. Each participant was compensated $30 for their time. §.§ Workshop ActivitiesEach workshop lasted 2 hours and started with a brief introduction to the concept and examples of dark patterns. The participants completed three activities together: a focus group discussion, a storyboard fill-in session, and a tangible website redesign activity.§.§.§ Focus Group Discussion In a focus group, participants were first introduced to the concept of dark patterns, then reflected on and shared dark patterns examples they previously encountered in everyday lives. 
During the discussion, researchers provided feedback and clarifications to help participants understand the boundary and the varied "darkness" levels of dark patterns <cit.>. The participants were then asked to rank their examples by the level of perceived annoyance and disruptiveness and provide explanations. Researchers followed up with questions to find out the current strategies adopted by participants to address the impacts of dark patterns. We designed this activity to help participants ground their understanding of dark patterns' prevalence and impact in their concrete personal experiences. Reflecting on dark patterns and the associated level of annoyance also acted as a stimulus, prompting participants to contemplate countermeasures in subsequent activities. The format was intentionally less structured, with an emphasis on encouraging participants to speak up and fostering a comfortable environment that would promote open dialogue in subsequent activities.

§.§.§ Storyboard Fill-in The second activity was a storyboard fill-in. Storyboards are commonly used HCI tools to visually communicate user experience scenarios to the audience <cit.>. In co-design workshops, storyboards help contextualize participants and prompt them to think about their needs, goals, and constraints in the scenarios described <cit.>. In our study, we adopted "fill-in-the-blanks" storyboards <cit.> to explore the users' desired solutions to dark patterns. For each storyboard, we left one or two frames blank, encouraging participants to provide insights into their understanding of dark patterns and their preferred abilities to counter these patterns[Examples of participant-filled storyboards are available in the supplemental materials.]. Specifically, we presented participants with 6 storyboards (Fig. <ref>) depicting scenarios in which a tool helped them mitigate the negative impacts of dark patterns. Of these, 3 focused on improving users' awareness of dark patterns (Fig. <ref>a), and 3 others (Fig. <ref>b) on helping users take action against dark patterns. Each storyboard contained 4 frames, representing the background of the scenario, the tool used, how the tool helped, and the desired outcome, respectively. We left the second and/or the third frames blank and asked participants to brainstorm their desired tools and their interactions.

§.§.§ Tangible Website Redesign The last activity was a tangible user interface redesign. While the storyboard fill-in focused on the design of intervention scenarios (e.g., the user flow of interventions), this activity targeted intervention at a lower, more granular level: the specific alternatives to dark patterns in UI design. We designed this activity to be tangible, in the form of paper prototyping instead of digital UI mockup modification, to encourage participants to make bold changes and think outside the box <cit.>. It also avoids the learning process for a new digital design tool. We curated 13 representative dark pattern examples on 7 websites for online shopping, flight booking, video streaming, and social media as a diverse set of scenarios. These examples were selected from previous research literature <cit.>, online discussions of dark patterns <cit.>, and the researchers' own experiences, with the goal of triggering discussions among participants. In each workshop, participants selected around 5 dark patterns that they were most concerned about to work on. For each dark pattern instance, we provided a printout of the website interface and a set of cut-out UI widgets from the same interface.
During the activity, participants edited the cut-out widgets and drew new UI widgets using a variety of provided stationery[The provided stationery included but was not limited to pencils, colored pens, scissors, highlighters, and glue sticks], and re-assembled them into a more desired design of the original website interface. The interfaces created by the participants were collected for analysis[Examples of participant-redesigned interfaces are available in the supplemental materials.]. §.§ Workshop FindingsFollowing the open coding methods <cit.>, two researchers conducted a thematic analysis of audio transcriptions for workshops and materials produced in the three activities. The researchers conducted three rounds of iterative labeling, in which descriptive labels on relevant transcript pieces were created, grouped, and generalized into higher-level themes. The analysis was discussion-based, with no necessity for inter-rater reliability due to the aim of discovering emergent themes <cit.>=-1. Comparative analysis was not the focus of the study, but a comparison of responses from tech industry users and other users showed no significant difference in their perceptions or coping mechanisms concerning dark patterns. This is in line with previous work <cit.> which showed no significant differences between end users and experts in perceptions of dark patterns.Our workshop findings (WF) are described below in response to our workshop goals. WF1: Users would like to learn more about the impact of dark patterns Participants expressed their desire to know more about the potential impacts of dark patterns. Although many were able to detect dark patterns, most participants only developed a vague assumption about the impacts, falling short of articulating the specifics. The particular mechanisms and impact remain as a “blackbox”, confirming findings in <cit.>. Participants often asked about the detailed mechanisms and impact of dark patterns and felt the knowledge was useful. Importantly, clear knowledge of dark patterns' impact can help users choose services more consciously and potentially reduce the irritation of seeing dark patterns. PA6 and PA7 mentioned that for disguised ads on Instagram, “those are annoying at first, but once you know that (its impact) and come to expect it, it's like okay (less annoying) (PA6)”. These findings complement existing research by demonstrating users' autonomy—they are not merely passive consumers of dark patterns. They are interested in actively learning, and the acquired knowledge can change their usage behavior and connections with online platforms. WF2: Users' perceptions of dark patterns are personal and dynamically changing, which are formed based on user preferences, types of dark pattern instances, and usage contexts.Despite the prevalent negative perception of manipulative UX design patterns, not all participants viewed them unfavorably. In fact, responses varied widely from negative to positive, echoing findings from studies on online behavioral advertising <cit.>. Users often perceived a persuasive pattern as helpful when it aligns user goals with stakeholder profits. PA2 liked the autoplay feature on Netflix, even when understanding it used forced continuation, because “it is useful when I am away from my mouse”. Similarly, for disguised ads on social media, PA12 expressed that “I won't block them. I usually don't engage or buy stuff... Maybe I'll see something in the future I like”. 
We summarized 3 most common perceptions from participants: disruptiveness (the user experience was disrupted by the dark pattern), indifference (the dark pattern was neither harmful nor useful), and helpfulness (the pattern was helpful in the current context).Users encountering a dark pattern typically evaluate its potential pros and cons subconsciously, influenced by factors like perceived convenience, potential consequences, and the pattern's apparent malicious intent. Accordingly, three factors—the user, the dark pattern, and the usage context—determine this perception. An example is the autoplay feature for the next episode on Netflix. PA1, PA10, and PA12 expressed their dislike of this feature, while PA2, PA5, and PA11 generally thought it was convenient. Furthermore, even the same user's perception can shift with different usage contexts and changing goals. PA11 found auto-play harmful when using Netflix during work because of the short break time, but useful when casually browsing after work just for fun. These findings are in parallel with results in <cit.> and enhance previous findings by highlighting the highly individualized and contextualized nature of users' perceptions of dark patterns.WF3: Users develop varied coping mechanisms based on their different dark pattern perceptions.This finding reveals more details regarding how end users react when facing dark patterns' perceived influences, in addition to the conclusions in <cit.>. For disruptive dark patterns, many users actively seek solutions to mitigate the impact. PA12 used a calendar to track the end of free subscription trials as a reminder to unsubscribe. On Instagram, PA6 developed a habit to avoid disguised ads when tapping through all stories. “Funnily enough, every time I watch a story, I have developed... an unconscious habit, I close out (by swiping down) and I click the next one. (PA6)” If the user can successfully find a solution, it becomes a “muscle memory” for them. For the Instagram habit, PA6 expressed that “I didn't know why I do that, but I guess that's a dark pattern and I am unconsciously adapted.” PA1 also developed the habit of reaching their mouse before an episode ends on Netflix, to wiggle the cursor in time and avoid forced continuation.For dark patterns that users feel indifferent to, the most common strategy is ignoring them. PA6 mentioned that they gradually got used to “confirmshaming” dark patterns and “just don't care anymore”. When asked about a disguised advertisement on a flight booking website during the redesign activity, PA8, PA9, and PA10 reported that they did not even notice it. They considered it “too colorful” to be relevant and therefore simply ignored it. §.§ Design ImplicationsThe thematic analysis results of our co-design workshops offered several design implications for dark pattern intervention techniques and user empowerment. DI1: Empower users with the ability to make changes on dark patterns.When encountering disruptive dark patterns, users often feel manipulated but have no ability to resist ([WF1]WF1). To help participants regain self-autonomy <cit.>, we can empower end users with the ability to change the interfaces of dark patterns. It would help users take the initiative to mitigate the negative impact. DI2: Provide information on the potential consequences of dark patternsDuring our workshops, participants expressed the need for information on the potential influences of dark patterns and envisioned a ranking of their severity. 
With such information, users can make better informed evaluations of the impact of a dark pattern on themselves ([WF2]WF2) and develop their coping mechanism accordingly ([WF3]WF3). DI3: Offer users multiple intervention options for each dark pattern.Perceptions of dark patterns may shift with users, types of dark pattern instances, and usage contexts. Even for the same dark pattern, users may act differently ([WF3]WF3). As a result, it is necessary to have multiple intervention options for users to choose from. In this way, users can have more flexibility in personalization and autonomy.DI4: Design dark pattern interventions with three strategies: interface design change, user flow adjustment, and behavioral outcome reflection.In our redesign activity, participants proposed intervention techniques for dark patterns that can be categorized into three approaches: (1) modifying interface components and layouts to eliminate malicious design, (2) adjusting user flows to prevent users from falling into behavioral traps, and (3) evoking reflection by uncovering the outcomes of dark patterns for long-term self-change. Future intervention designs can take inspiration from these strategies and apply them in appropriate scenarios.For example, on a flight booking website, while the website highlighted the more expensive first-class and main-cabin options over the basic economy, PA8, PA9, and PA10 changed them to the same size and color to pursue a fair style. PA1, PA2, and PA3 designed an agent to provide an appropriate action guide with dark patterns on Amazon to help them save money. PA6 and PA7 wanted to know, in the long term, how many times dark patterns on a certain website affected their behavior, to reflect on their relationship with the platform.We used these findings and design implications from our co-design workshop to guide our second co-design phase—a technology probe study. § TECHNOLOGY PROBE STUDY The co-design workshops (Section <ref>) served as a starting point for us to understand the existing relationship of users with dark patterns and their desired interventions. They looked at users' past daily experiences of encountering and coping with dark patterns with little or no external support. To further explore users' in-situ reactions toward “fixing” dark patterns on their own devices, we conducted a two-week deployment study of a technology probe.The technology probe method, proposed by Hutchinson et al. <cit.>, deploys “simple, flexible technologies” as probes in the real world with three goals: “the social science goal of collecting in-context information about the use and the users, the engineering goal of testing the technology, and the design goal of inspiring users and researchers to envision future technologies” <cit.>. This method is widely used to examine the influence of new technologies on the daily experience of users as part of the co-design process <cit.>. It is worth mentioning that a technology probe study, while containing an engineering goal of field-testing a probe, is not equivalent to an evaluative study for the efficacy of well-developed systems. This method does not seek to evaluate the probe's effectiveness on users' behavior change but to discover design implications and insights <cit.>. Thus we designed our technology probe, Dark Pita,with three research goals in mind:* Social science goal is to understand end-user reactions, preferences, and desires in situ on awareness of and action for dark patterns in their online experiences. 
* The engineering goal is to field-test the technical feasibility of combining awareness and action as an end-user-empowerment intervention for dark patterns in realistic contexts of use.

* The design goal is to explore the design space of techniques, strategies, and interfaces for end-user-empowering interventions, with a specific focus on trade-offs and user constraints.

§.§ Theoretical Grounding

According to the design implications from the co-design workshops, the probe should help users to (1) raise awareness of dark patterns and understand their underlying design intents ([DI2]DI2) and (2) take action to counter the effects of dark patterns through a web augmentation approach ([DI1]DI1). The awareness and action mechanisms are naturally aligned with Protection Motivation Theory (PMT) <cit.>, which has been widely applied in behavior change design <cit.>. PMT is commonly used to understand people's responses to triggers that signal a potential threat <cit.>. It suggests intervening in people's cognitive appraisal processes to motivate self-protection by articulating fear appeals. In PMT, two factors, threat appraisal (how much people consider themselves at risk) and coping appraisal (how effective people think their actions are against the risk), determine whether people will protect themselves <cit.>. This is in line with our workshop findings ([WF2]WF2 and [WF3]WF3). Thus, in our design, we used PMT as our theoretical basis and mapped our awareness and action mechanisms to threat and coping appraisal by disclosing the risk of dark patterns and guiding users to take effective measures against them (Figure <ref>).

§.§ The Probe: Dark Pita

To fulfill the research goals, we materialized our new PMT-based approach as a technology probe named Dark Pita, in the form of a browser extension that facilitates awareness and action for end users against a small, representative sample of UX dark patterns in several popular online services. In this section, we first describe an example scenario to demonstrate the user experience of interacting with Dark Pita. Then, we introduce the probe's main features in line with the five dimensions of probe design by Hutchinson et al. <cit.> and describe how we came up with UI enhancements for the sampled dark pattern instances based on a new Design-Behavior-Outcome framework. Finally, we detail the technical implementation of Dark Pita.

§.§.§ User Experience

We selected five popular online services to support in Dark Pita (Amazon[https://www.amazon.com/], YouTube[https://www.youtube.com/], Netflix[https://www.netflix.com/], Facebook[https://www.facebook.com/], and Twitter[https://twitter.com/]). These samples represent different types of online services across task domains (i.e., online shopping, video streaming, and social media) and contain diverse types of dark patterns (described in Section <ref>). They also possess substantial user bases, which facilitated our recruitment process by accommodating a larger pool of potential participants. In this section, we provide one example scenario of how a user may interact with Dark Pita.

Lisa is a frequent user of Amazon. When trying to check out on an item's page, she sees two buttons: “Buy Now” and “Add to Cart”. The “Buy Now” button reduces the friction in checking out, improving Amazon's conversion rate; however, it can potentially cause users to buy unnecessary items they regret later <cit.>. Here, Amazon designed “Buy Now” to be more visually prominent, making it easier to click on than “Add to Cart”.
Dark Pita notifies Lisa that dark patterns are detected on the page and highlights the “Buy Now” button (Fig. <ref>a and b). Then, she clicks on the highlighted area, and the awareness panel appears (Fig. <ref>c). It shows Lisa information about this dark pattern's manipulative mechanism and potential impacts, making her realize how the design can potentially trick her into directly checking out instead of adding the item to the cart. This way, Lisa will not be able to look at the total price of all items in the cart and reflect on the purchase before checking out, making her more likely to overspend on Amazon. To mitigate the effect of this dark pattern, Lisa opens the action panel (Fig. <ref>d) and chooses a UI enhancement that changes the color of the “Buy Now” button to the same as the “Add to Cart” button (from the several options available, as shown in Fig. <ref>d). Through this experience, Lisa realizes that user interfaces can be styled differently to manipulate her decision-making process. Lisa feels that Dark Pita gives her more control and autonomy over these malicious interfaces.

§.§.§ Design Features

Following the threat appraisal and coping appraisal processes of PMT (Section <ref>), Dark Pita consists of an awareness panel and an action panel (Fig. <ref>). The awareness panel draws end users' attention by disclosing the manipulative mechanism of a dark pattern and its potential impact on user behavior (Fig. <ref>). The action panel empowers end users to take action to mitigate the negative impact by choosing from the multiple UI enhancements provided (Fig. <ref>). Overall, our probe can (1) detect and highlight dark patterns on websites, (2) disclose the manipulative mechanism ([DI2]DI2), and (3) articulate the potential impact ([DI2]DI2); it also allows users to (4) select UI enhancements for dark patterns ([DI1]DI1, [DI3]DI3, and [DI4]DI4) and (5) preview the enhancement effect. Lastly, Dark Pita can (6) record participant interactions with the browser extension, the UI enhancements, and the supported dark patterns, and it allows participants to keep diary notes for study purposes.

Dark pattern detection. Dark Pita detects dark patterns when the user enters a website. Once the probe detects dark patterns (the detection technique is described in Section <ref>), a top banner appears, allowing the user to highlight all discovered dark patterns using the “show” button (Fig. <ref>b). When the user hovers their cursor over a highlighted dark pattern, a blue border indicates that it can be clicked for more information.

Manipulative mechanism disclosure. Dark Pita provides a brief explanation for each dark pattern (Fig. <ref>b) and introduces the threat susceptibility with “dark” attributes <cit.> (Fig. <ref>a). These attributes describe changes that the dark pattern imposes on the user's underlying choice architecture. Users can hover over each tag to get more details. For example, “restrictive” means that the dark pattern “eliminates certain choices that should be available to you.”

Potential impact articulation. For each dark pattern, Dark Pita explains its potential impact (threat severity) through a normative perspective of individual welfare <cit.> (Fig. <ref>c). Specifically, we consider three types of individual welfare (financial loss, invasion of privacy, and cognitive burden), based on the framework proposed by Mathur et al. <cit.>. Dark Pita identifies the theme to which the dark pattern belongs and elaborates on its impact.
For example, for the “recommended videos” dark pattern on YouTube, it says “You are likely to get distracted and watch videos that you never planned to. The automatic preview on hover grabs your attention and distracts you even further.”

User interface modification. If the user wants to take action to mitigate the potential impact of a dark pattern, they can press the “Take Action” button, and the action panel will appear on the left. Dark Pita provides 1–4 UI enhancement options for each dark pattern to empower users to take protective actions (self-efficacy) (Fig. <ref>d). The user can select their desired enhancements and save the changes for their next visit to the site. We detail the design process of the UI enhancements in Section <ref>.

UI enhancement preview. For each enhancement, Dark Pita introduces its response efficacy by explaining the effect (Fig. <ref>e) and providing a preview (Fig. <ref>f). The explanation informs the user about the intervention mechanism offered by the enhancement and describes how the enhancement helps the user avoid harm to their individual welfare. For example, the probe says “Dark Pita will disable the preview function, which can protect you from being distracted.” for a “block preview” enhancement. To demonstrate how an enhancement works, Dark Pita also displays a preview animation below the explanation.

Action logging and diary notes. The probe records interactions of participants who have explicitly given their consent. All personal information contained in log entries is removed locally on the participant's device before being sent out. The log contains fine-grained interactions with the probe, such as timestamps, site information, panel openings, and saved UI enhancement choices. Dark Pita also provides a diary note panel for users to submit their reflections and expectations about the probe, dark patterns, and interventions. Participants can also attach screenshots that capture context information.

Overall, Dark Pita's features implement the recommendations by Hutchinson et al. <cit.> that technology probes should be distinguished from regular design prototypes along five dimensions:

* Functionality: Dark Pita is simple enough to test only two key ideas: raising awareness of and taking action on dark patterns.
* Flexibility: Dark Pita allows users to access its features on five sites in three task domains. It also provides multiple UI enhancements for each dark pattern, allowing users to modify dark patterns flexibly.
* Usability: Dark Pita leverages a small sample of dark pattern instances to demonstrate its functionality, while leaving other dark patterns on these sites to evoke participants' reflections and design ideas. Usability was not a main concern in our deployment study.
* Logging: Dark Pita implements a comprehensive logging mechanism for participant interactions. It allows participants to record their situational thoughts and keep diary notes.
* Design phase: Dark Pita proposes an end-user-initiated intervention paradigm that complements existing solutions to dark patterns led by designers, educators, policymakers, and others. It is used in early design and aims to influence the future design of interventions and user empowerment.

§.§.§ UI Enhancements

We first sampled 13 instances of dark patterns across five popular sites.
Then, we designed 1–4 UI enhancements for each instance (31 in total) using intervention techniques selected from previous studies <cit.> and from our co-design workshops ([DI4]DI4), organized by a Design-Behavior-Outcome framework.

Dark pattern samples. Given that millions of dark patterns exist on the internet <cit.>, it is infeasible to cover all instances in our design probe. To build a simple and flexible probe <cit.> as a demonstration of our approach, we sampled a representative group of dark patterns. Also, to ensure that our participants could use the probe frequently during the study period, we selected 13 instances from five popular websites across different service categories (Amazon, YouTube, Netflix, Facebook, and Twitter). Based on Mathur et al.'s approach <cit.>, three researchers determined each instance's “dark” attributes individually and then discussed them to resolve conflicts and reach a consensus. Detailed category information and descriptions of the selected dark patterns are included in Appendix <ref>. The research team balanced dark pattern instances with different attributes and from different categories to form a representative sample, although some dark pattern attributes and categories are unavoidably more common than others.

Design-Behavior-Outcome framework. Previous studies have explored various intervention techniques to counteract the influence of dark patterns on user behaviors <cit.>. These techniques can act in different phases of user interaction with dark patterns. Based on [DI4]DI4 from our co-design workshops, we propose a Design-Behavior-Outcome framework that situates different intervention techniques in their corresponding interaction stages in order to design appropriate UI enhancements for end users (Fig. <ref>). This framework can inspire future intervention technique designs before, during, and after a user's interaction with dark patterns.

* Design: Design interventions change the visual style of the interfaces or the information displayed before the user interacts with the dark pattern.
* Behavior: Behavior interventions directly guide, modify, or constrain users' behavior during the interaction flow with the dark pattern.
* Outcome: Outcome interventions reveal the possible consequences of the dark pattern to provoke user reflection after the interaction.

Specifically, we first selected 7 intervention strategies from previous literature on dark patterns <cit.> and technology-mediated nudging <cit.>, including detection <cit.>, warning consequences <cit.>, hiding <cit.>, disabling <cit.>, counterfactual thinking <cit.>, friction <cit.>, and reflection <cit.>. Detection and warning consequences were excluded because we had already implemented them in the awareness features of Dark Pita. As a complement, we derived 3 participant-designed intervention strategies from our co-design workshops: fairness, information disclosure, and action guide ([DI4]DI4). Overall, we selected 8 techniques and situated them in our proposed Design-Behavior-Outcome framework (Table <ref>).

Intervention design. For each instance, we designed 1–4 UI enhancements (31 in total, shown in Table <ref>). In Fig.
<ref>, we illustrate how we designed three enhancements against Amazon's “Buy Now”. By using a more prominent color and hiding the cart item subtotal from users, the “Buy Now” button potentially promotes users' impulsive buying behavior <cit.>. To target the visual design of this dark pattern before interaction, we designed an enhancement that changes the “Buy Now” button's color to match the alternative option (the fairness strategy, Fig. <ref>a). During interaction, the enhancement using friction opens a popup alert when the user hovers over “Buy Now” (Fig. <ref>b). Another alternative enhancement after interaction, utilizing the reflection strategy, provides a summary of how much the dark patterns on Amazon have potentially led to unintentional shopping (Fig. <ref>c). A similar suite of UI enhancements can be used for similar dark patterns such as Subscribe & Save or Buy Now & Pay Later. Details of the UI enhancements can be found in Appendix <ref>.

§.§.§ Implementation

The Dark Pita probe is a Chrome browser extension. It was implemented using the Vue[https://vuejs.org/] framework and the Chrome extension API[https://developer.chrome.com/docs/extensions/reference/]. For dark pattern detection, it uses a rule-based method that matches the attributes of HTML elements against manually authored regular expressions or finds unique section title strings in element attributes. This approach allowed Dark Pita to detect all instances of the same dark patterns regardless of the content of the page. For UI enhancements, we implemented them by programming the browser extension to automatically add, remove, and/or modify the corresponding DOM elements when the target website is loaded. Once the user selects a UI enhancement for a detected dark pattern, Dark Pita immediately executes the corresponding script to modify the UI of the website. Additionally, we used the Chrome storage API[https://developer.chrome.com/docs/extensions/reference/storage/] to store user configurations of UI enhancements. Every time a user opens a new instance of the target web page, the extension automatically retrieves the user's saved configurations and applies their previous UI enhancement setup (a simplified, hypothetical sketch of this detection-and-enhancement flow is shown below).

§.§ Study Participants

After implementing the probe, we recruited participants through online advertising and word of mouth. None of the previous co-design workshop participants were included, in order to mitigate the geographic biases of in-person activity recruitment. We conducted purposive sampling to ensure the diversity of participant demographics (e.g., age, gender), technology literacy, occupation, Chrome usage, and familiarity with dark patterns. In total, 17 participants were recruited, and 15 (PB1–PB15; 9 males, 6 females) completed the study. One of the two dropouts did not find time to use our probe after the entry interview, and we were unable to get in touch with the other dropout after the entry interview. Therefore, we excluded the data from these two participants from the analysis. Detailed demographic information about the 15 participants can be found in Appendix <ref>.

§.§ Study Protocol

The two-week technology probe deployment study[The study protocol was approved by the IRB at our institution.] began with a semi-structured entry interview for study introduction and Dark Pita installation. We also discussed participants' perceptions, attitudes, and behaviors toward dark patterns in their past experiences to help them better understand the concept and contextualize our study.
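To make the rule-based detection and UI-modification flow described in the implementation subsection above concrete, the following is a minimal sketch of how such a content script could be structured. The rule identifiers, attribute patterns, element selectors, colors, and enhancement names are illustrative assumptions for exposition only; they are not Dark Pita's actual rules or code.

```typescript
// Hypothetical sketch of a content-script detection/enhancement flow.
// All identifiers, patterns, and colors below are illustrative, not Dark Pita's real rules.

interface DetectionRule {
  id: string;                                   // e.g., "amazon-buy-now" (hypothetical)
  matches: (el: Element) => boolean;            // rule-based matcher over element attributes
  enhancements: Record<string, (el: HTMLElement) => void>;
}

const buyNowRule: DetectionRule = {
  id: "amazon-buy-now",
  // Match candidate elements by testing an attribute against a manually authored regex.
  matches: (el) => /buy[-_ ]?now/i.test(el.getAttribute("id") ?? ""),
  enhancements: {
    // "Fairness": restyle the button so it is no more prominent than the alternative option.
    fairness: (el) => { el.style.backgroundColor = "#ffd814"; },
    // "Hiding": remove the element from view; the user can still check out via the cart.
    hiding: (el) => { el.style.display = "none"; },
  },
};

// Scan the page, tag detected dark patterns, and re-apply the user's saved choices
// (which would be persisted across visits, e.g., via the Chrome storage API).
function applyEnhancements(rules: DetectionRule[], saved: Record<string, string>): void {
  for (const rule of rules) {
    document.querySelectorAll<HTMLElement>("button, input, a, div").forEach((el) => {
      if (!rule.matches(el)) return;
      el.dataset.darkPattern = rule.id;         // tag it so the banner/highlight can find it
      const choice = saved[rule.id];            // e.g., { "amazon-buy-now": "fairness" }
      if (choice && rule.enhancements[choice]) rule.enhancements[choice](el);
    });
  }
}

// Example invocation once the page has loaded and saved configurations are retrieved.
applyEnhancements([buyNowRule], { "amazon-buy-now": "fairness" });
```

Keeping each rule declarative in this way would let a single matcher catch every instance of the same dark pattern regardless of page content, and adding a new enhancement amounts to registering one more function for that rule.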
During the study, participants were asked to use Dark Pita on routine websites according to their everyday habits. We obtained consent from all participants prior to the study. We understood that experiences with dark patterns can be annoying, so we ensured that each participant understood their right to leave the study at any point if they wished. During the two weeks of use, we encouraged participants to log their thoughts and feelings as they interacted with Dark Pita. We provided several heuristic questions to guide participants. For example, they could talk about a specific dark pattern, answering questions such as “How does the dark pattern affect your online experience?” and “Are there any other intervention designs that you can think of?”, or they could report any reflections or issues they encountered when using the probe. As an incentive, we offered a $2 (USD) reward for each submitted note (up to $16) and encouraged each participant to submit at least one note every two days. Our probe also recorded detailed log data, as described in Section <ref>.

In the middle of the study, a semi-structured 30-minute check-in interview was conducted. The main goals of this interview were to (1) clarify users' questions and concerns from the first week of use; (2) understand their usage behavior with our probe; and (3) remind them of the study procedures, including probe usage and diary note submission. At the end of the study, we conducted a semi-structured one-hour exit interview with each participant to (1) collect information about participants' experiences with Dark Pita and explore the rationale behind interesting user behaviors or diary notes; (2) understand changes in user behaviors, perceptions, and attitudes toward dark patterns; and (3) gather users' feedback on the interventions in Dark Pita. All interviews were conducted online through Zoom and recorded with the consent of the participants. Each participant was compensated $100 for using our probe and joining the three interviews. Additional compensation was also provided for submitted diary notes, as described above. Appendix <ref> contains the protocols of our entry, check-in, and exit interviews.

§.§ Data Analysis Methods

Three researchers conducted open coding and thematic analysis <cit.> of the interview transcripts and diary notes. Throughout the analysis, they went through three rounds of labeling and engaged in constant discussions to identify codes, merge themes, and resolve conflicts. As in the co-design workshop analysis process (Section <ref>), our goal was to discover emergent themes and the analysis process was discussion-based, so inter-rater reliability was not necessary <cit.>. The themes we derived during our data analysis are included in Appendix <ref>. Our analysis paid attention to users' reactions and changes throughout the two-week period, with the three research goals in mind (Section <ref>).

§ RESULTS

§.§ Probe Engagement

The engagement data validate the feasibility of our probe (engineering goal). In total, 48,368 (mean=3224.53, std=2836.14, min=352, max=9112) action logging entries and 115 (mean=7.67, std=4.84, min=1, max=17) diary note entries were created. The participants visited a total of 13,611 (mean=907.4, std=819.88, min=58, max=3220) distinct web pages on their browsers instrumented with Dark Pita, where our probe was triggered 4,834 (mean=322.27, std=316.65, min=28, max=1188) times.
Ten participants (66.7%) visited all three types of online services, four participants (26.7%) visited video streaming and social media platforms, and one participant visited only video streaming platforms over the course of the study. The participants set up UI enhancements for dark pattern instances 280 (mean=18.67, std=8.93, min=1, max=34) times. During the two weeks, these UI enhancements were triggered 14,355 times (mean=957, std=1178.48, min=2, max=4621) in total. Each UI enhancement was triggered 463.06 times on average. Two UI enhancements (i.e., counterfactual thinking (Appendix <ref>) for the discount price on Amazon and reflection for the remaining time on Netflix) were not successfully used due to technical problems caused by website updates on Amazon and Netflix. Fig. <ref> shows the daily engagement of participants with our probe. Overall, the logs indicate that most participants were actively engaged with our probe. They visited example websites that contain instances of dark patterns, modified the interfaces of these websites with UI enhancements to mitigate the impact of dark patterns, and submitted diary entries.

§.§ Key Insights

Our findings from the entry, mid-study, and exit interviews with 15 participants, together with their 115 diary entries, provide key insights (KIs) for understanding the reactions and needs of end users regarding awareness of and action for dark patterns (social science goal) and demonstrate the usefulness of our end-user-empowerment intervention approach (engineering goal).

KI1: Providing information about specific instances of dark patterns allows users to gain transferable knowledge. Users found the information about dark patterns presented on the awareness panel to be educational. PB12 found that the attribute tags were “great information” that helped him better understand the design rationale hidden behind dark patterns. PB16 added that Dark Pita made him know “what are the certain things that kind of triggered me into going down a hole”. Equipped with such knowledge, the participants developed new perspectives on online services. PB3, PB6, and PB12 mentioned that Dark Pita helped them more explicitly see a large number of dark pattern instances on Twitter, which reduced their level of trust in the platform. PB6 became more critical of disguised ads on some platforms, and PB10 behaved more cautiously to avoid dark patterns. By seeing the dislike count on YouTube again (enabled by a UI enhancement of Dark Pita) (Appendix <ref>), PB7 was able to gain a more comprehensive view of the videos they watched. This finding shows that improved awareness through information disclosure on dark patterns can change users' perceptions of digital platforms, extending previous survey results on end users' mistrust <cit.>.

Importantly, users were able to transfer their newly learned knowledge of dark patterns to other platforms that Dark Pita did not yet support. For example, PB12 would investigate “how the similar designs (of dark patterns) might be applied to other interfaces that I use”. PB2, PB5, PB9, and PB12 started to think about dark patterns in mobile apps and wanted to use Dark Pita on their phones. The probe even inspired PB9 to ponder “textual” dark patterns (e.g., misleading and manipulative language on interfaces).
To summarize, these findings demonstrate that providing information about dark patterns can not only raise users' awareness but also inspire them to transfer the learned knowledge to dark patterns on other platforms.

KI2: The capability to modify existing interfaces boosts users' perception of empowerment and autonomy. In our deployment study, 7 participants (46.7%) explicitly mentioned the feeling of empowerment from being able to change interfaces, as they were no longer just passive consumers of decisions made by designers. PB3 mentioned that “the most empowering was being able to highlight algorithm-recommended content on Twitter... it provided a level of consciousness (during browsing)”. Previous work on dark patterns in HCI and CSCW has primarily regarded end users as passive receivers of these deceptive practices <cit.>. These findings show the benefit of our end-user-empowerment approach, which treats users as engaged actors in mitigating dark patterns.

Notably, participants emphasized the importance of having this support from a third-party tool. PB5, PB6, and PB12 mentioned that some platforms also allow users to make interface changes; for example, on Facebook, users can remove disguised ads and select why they are not interested. While this allows the user to hide the ad, it serves the company's business interest; some users realized this and chose not to use it. PB12 mentioned that users and companies have contradictory goals: users want to see fewer targeted ads, while the company wants to make ads more personalized to generate more revenue. In contrast, a third-party tool like Dark Pita presents no conflict of interest with users, leading to enhanced user trust.

KI3: Users' dynamic goals and usage contexts when using online services determine their desired UI enhancements. Users have diverse goals online: for example, PB10 wanted to reduce their time on Twitter, while PB3 and PB5 did not mind spending more time browsing Twitter. This is in line with our workshop finding on users' perceptions of dark patterns ([WF2]WF2). In the deployment study, these differences in goals shaped users' choices of UI enhancements.

Even a single user's goals can change with different usage scenarios, which in turn modifies their choice of UI enhancements. For instance, PB2, PB3, and PB14 separately reported that they did not mind the “video autoplay on hover” feature on the YouTube homepage but disabled it on individual video-watching pages. This was because the feature was helpful for previewing content on the homepage, but distracting and time-wasting when watching individual videos. PB14 also turned the focus mode on YouTube on and off between “when I want to focus... and when I just want to have fun or just relax... I think they get changed based on what I wanted to do at the time.”

Users' goals often reflect and shape their personal relationship with a service platform. During our study, the ability to modify dark patterns reminded users of their long-term goals against impulsive behaviors. In the exit interview, PB12 mentioned that seeing so many dark patterns explicitly marked on Twitter made them want to use the platform less. They described the feeling as “someone nagging you should stop doing this”. Although sometimes annoying, they still found it helpful for their long-term goal of reducing Twitter usage.
This is also related to studies on self-control such as <cit.>. Previous discussions have mostly viewed the “darkness” level of deceptive patterns as an objective attribute <cit.>; however, this insight extends that narrative by showing the personal and dynamic nature of such “darkness”. End users' goals and usage contexts are dynamic and individualized. Therefore, a one-size-fits-all approach, or a designer- or policymaker-initiated approach, cannot fully accommodate them. By offering users the choice of multiple UI enhancements, our approach enables them to customize their intervention for a dark pattern based on individual contextual preferences.

§.§ Design Implications

Our findings offer design implications (DIs) for future techniques, strategies, and interfaces of end-user-empowerment interventions for dark patterns (design goal).

DI5: Design non-intrusive and minimally interrupting UI enhancements. Users prefer UI enhancements that are non-intrusive and interrupt them as little as possible. Visually non-intrusive UI enhancements that do not interfere with users' normal browsing experiences were greatly appreciated during our study. For example, PB15 shared that highlighting disguised ads with thick red borders (Appendix <ref>) became annoying, so he chose to directly hide the ads (Appendix <ref>) instead. In contrast, blocking the previews of recommended videos (Appendix <ref>) is a “gentle” method that removes distracting content while avoiding users' fear of missing out on information <cit.>.

DI6: Provide fine-grained control over the modification of dark patterns. Participants expected future interventions to give them more fine-grained control over dark patterns. For example, PB13 wanted to “control the quantity” of promoted content on Twitter and leave approximately one-third of it in their feed instead of removing all of it. Similarly, PB2 envisioned a filter to “control content” based on personal interests, i.e., automatically identifying information potentially beneficial to her and removing the rest.

DI7: Improve transparency and provide global control for the UI enhancements. In addition to the “design transparency” of dark patterns, our participants also wanted transparency in the design of UI enhancements. They expected to see the intentions and mechanisms of these UI enhancements, so they could understand how they work against dark patterns and select the ones that fit their needs. For example, PB5 and PB15 suggested providing explanations of how Dark Pita calculates the time or money spent for reflection (Appendix <ref>). PB15 also mentioned that they wanted a global control panel and dashboard for all active UI enhancements so that they could quickly get explanations of them, view their status, and change their configurations.

DI8: Contemplate the boundary between UI enhancements and dark patterns. UI enhancements usually involve a certain degree of persuasive design or nudging themselves, which makes them similar to dark patterns. Based on what we have learned from our studies, it is important to align the goals of the user and the goals of the intervention tools ([KI2]KI2 and [KI3]KI3) to protect the welfare of users (e.g., private data) <cit.>. In addition, according to Hansen and Jespersen's framework <cit.>, the dividing line between manipulative and beneficial nudges is transparency (i.e., whether the user can perceive the intentions and means behind the nudge) ([DI7]DI7). Such potential alignments should be clearly explained to users to help them make informed decisions about adopting UI enhancements.
With a carefully drawn boundary between UI enhancements and dark patterns, established through goal alignment and transparency, we can meaningfully prevent further manipulation against end users' will.

§ SCALING UP: A RESEARCH AGENDA

Our findings highlight the potential of an end-user-empowerment approach in helping users understand, intervene in, and make informed decisions about dark patterns based on their specific needs, goals, and contexts. By disclosing information about and enabling action against dark patterns, our probe gave users an increased sense of autonomy in their online experiences (KI1). Our proposed Design-Behavior-Outcome framework maps out design opportunities for future dark pattern interventions. Through our two-phase study, we revealed that end users desire dark pattern interventions that are non-intrusive (DI6), personalized (KI2), and dynamic (KI3). Future research needs to carefully consider the distinct preferences of the target user groups and the contexts of use of digital services (DI8).

Although our two-week technology probe study illustrated the usefulness and technical feasibility of this approach, scalability remains a challenge. Our manual process for 31 UI enhancements on 5 websites is adequate for a small-scale probe, but it cannot practically cover a significant selection of the millions of dark patterns on the web for real-world impact <cit.>. This scalability challenge is two-fold: on the one hand, new dark patterns emerge quickly, and it requires considerable maintenance overhead to stay up to date across all sites; on the other hand, designing multiple user-desired UI enhancements for each dark pattern requires significant effort (illustrated in Section <ref>). To help a large audience effectively mitigate dark patterns' impacts in real-world settings, a new approach is needed to scale up this effort. Here, we propose several possible future directions and discuss relevant efforts in adjacent research areas.

§.§ A Crowd-Sourced Collective Intelligence Approach

A crowd-sourced, collective intelligence approach can be an effective way to tackle scalability issues <cit.>. This could involve community contributions for identifying dark patterns, their impacts, and potential UI enhancements. The aggregated data can expand the capabilities of tools like Dark Pita, as well as provide training data for future machine learning (ML) models that detect dark patterns, predict user behaviors, and generate interventions, as we will discuss in Section <ref>. This crowd-sourced approach can involve multiple stakeholders:

* End users can identify dark patterns in their daily experiences, report their behaviors in response to dark patterns, and express their desired changes. Public release and wide adoption of tools such as Dark Pita can provide a platform for soliciting such information.
* Designers who are motivated to contribute can provide meta-information about the design intentions behind UX features alongside those features.
* Third-party developers can develop new detectors and UI enhancements for instances of dark patterns and contribute them to a unified repository or “community wiki” for public use.

§.§ A Citizen Science Approach

A citizen science approach <cit.> can enhance transparency by gathering data on the design processes that result in dark patterns. UX practitioners commonly use A/B tests <cit.> to examine the effect of dark patterns (e.g., discouraging subscription cancellation). This approach seeks to involve the public in contributing the hypotheses, protocols, and outcomes of these A/B tests to reveal the hidden design intents behind dark patterns.
This approach can improve design transparency, similar to how pre-registration of experiments and data transparency contribute to the open science movement <cit.>. Meanwhile, guardrails must be put in place once this information becomes public: if a study shows the business benefits of including dark patterns, its results should not be misused by other companies and designers. Implementing the citizen science approach would involve (1) a consistent format to report the relevant experiment information; (2) a community repository where the information can be aggregated, organized, and shared; and (3) optionally, a platform or a set of tools for conducting UX experiments that make it easier to share the experiment information. The citizen science approach can also engage multiple stakeholders:

* UX practitioners who are ethically minded can participate by sharing the hypotheses, protocols, and outcomes of these experiments.
* Third-party researchers can audit the shared results by replicating experiments using the information provided.
* Policymakers and community activists can mandate or advocate for companies' adoption of this approach, which we will discuss in Section <ref>.

§.§ A Machine Learning Approach

With the latest advances in computational UI understanding <cit.> and user behavior modeling <cit.>, machine learning (ML) techniques that model UX dark patterns show great promise in scaling up the effort in dark pattern intervention. Early explorations in this area, such as AidUI <cit.>, have demonstrated the impressive performance of ML models in automated dark pattern recognition. Previous efforts to automate the detection of dark patterns <cit.> in consent banners with ML <cit.> and reverse engineering <cit.> have also shown promising results in this area. Specifically, ML models have the potential to (1) identify instances of dark patterns and categorize them; (2) predict the consequences and user behaviors under the influence of these dark patterns; and (3) generate UI enhancements with different dark pattern intervention strategies.

However, the lack of large datasets on dark pattern designs, user behavior under the influence of dark patterns, and users' preferred actions against dark patterns is a major barrier to the ML approach. In addition to the ContextDP dataset proposed in AidUI <cit.>, such datasets may also be constructed from existing curated lists of dark patterns, e.g., the Deceptive Design Hall of Shame[https://www.deceptive.design/hall-of-shame/all], from website crawling, or from data collected using our proposed crowd-sourced (Section <ref>) and citizen science (Section <ref>) approaches.

It is vital to acknowledge the potential abuse of ML techniques for efficiently creating dark patterns in interfaces. We implore researchers and practitioners using ML in design to be vigilant, adhere to previous empirical results on dark patterns <cit.>, and ensure their designs align with user goals, to mitigate misuse. The development of such ML models must consider the intended users' goals and ethical values to guarantee their widespread utility and benefit <cit.>.

§.§ Coordinating Efforts with Design Ethics Advocacy and Policy Making

Our end-user-empowerment approach complements designer-centered ethical practices and policy-focused regulation against dark patterns. We propose several directions that coordinate these efforts to scale up the impact. Designer-focused efforts can strengthen the proposed crowd-sourced (Section <ref>) and citizen science (Section <ref>) approaches.
As discussed, designers play an important role in both approaches; designer education and advocacy are crucial to boosting their participation and engagement <cit.>. The citizen science method <cit.> helps with a key issue in policy making: defining dark patterns for regulation is challenging, as a comprehensive definition is currently lacking <cit.>. Citizen science promotes “design transparency” <cit.> in policy making, such as mandating the preregistration and sharing of A/B testing experiments. If a mandate is not yet practical, we can also take gradual steps, such as issuing “design transparency” or “ethical design” badges to companies or organizations that comply with the requirement.

§.§ The Power Imbalance between End Users and Designers

Our end-user-empowerment approach has implications for addressing the power imbalance between end users and designers in interface design. Today, designers usually dictate interface design. Even if users can modify interfaces, the possible configurations are often pre-defined by designers. Our design probe Dark Pita and our end-user-empowerment approach attempt to shift this power imbalance through awareness and action. Together with advancements in ML for dark patterns (Section <ref>), new community-based approaches <cit.> will further empower end users against designers' “interface dictatorship”. New communities such as Arc Boost Gallery[https://arc.net/boosts] provide great opportunities for future investigation. For example, CSCW researchers can investigate common community structures, dynamics, and member values to gain insights for sustaining web augmentation communities around dark patterns. Existing research on dark pattern Reddit communities <cit.> and CSCW research on online communities <cit.> have built solid foundations for such explorations. Meanwhile, we hope our Design-Behavior-Outcome framework sheds light on the design space of intervention techniques and can guide future community creators in coming up with useful solutions.

To address this “tug-of-war” between designers and users, we can also achieve end-user empowerment by “de-powering” designers, a more radical and aggressive approach. Research on malleable interfaces <cit.> has shown the feasibility of generating UIs automatically based on the specifications of service functionalities, user preferences, and usage context. In this way, the role of designers would be limited to describing the specifications of a system, with little power over the visual presentation of information and the interaction mechanisms. This would prevent the creation of many dark patterns in the first place.

§ LIMITATIONS AND FUTURE WORK

This work has several limitations. First, given the diverse range of dark pattern taxonomies <cit.>, it is difficult to comprehensively cover all types of dark patterns in one work. Our sampled dark pattern instances were limited to three genres of online services: online shopping, video streaming, and social media. However, dark patterns can exist in a wide variety of platforms and task domains. In addition, although we ensured diversity in the sample of dark pattern instances we used, our manual curation process allowed us to reach only a limited sample.
These limitations are in line with the scalability challenge mentioned above; we hope that future studies can scale up our efforts through the research agenda we have discussed. Also, although our probe Dark Pita supports only desktop web browsers, dark patterns also exist in mobile applications <cit.>. In fact, many participants in our deployment study expressed a desire to use Dark Pita on smartphones: they feel more vulnerable and less alert to dark patterns on mobile devices, given the often casual usage context. Future work can explore expanding our end-user-empowerment approach to mobile platforms. Previous work such as <cit.> has shown the technical feasibility of a similar approach on Android with the Accessibility API, while the stricter developer permissions on iOS remain a challenge.

The use of in-person co-design workshops, while facilitating more effective and smooth interactions, could have biased our participant pool towards individuals close to the workshop locations. This geographically constrained recruitment approach might limit the diversity of experiences and perspectives contributing to our co-design process (reflected in [appendix:workshop_demographics]Appendix A.1). To offset this potential bias, we conducted the probe study online, recruiting participants from a wider range of backgrounds (shown in [appendix:deployment_demographics]Appendix A.5). In addition, in-person interaction with participants might make them hesitant to share negative opinions, which could introduce bias to our results. Future studies can improve on this by recruiting more geographically diverse participants to explore their perspectives across different cultural and socio-economic backgrounds.

The scale and primarily qualitative nature of our two-phase study inherently limit the representativeness of our findings. Despite our best efforts to cultivate a diverse participant pool, our conclusions mainly reflect the perspectives of our specific sample group and may not fully capture the views of particular homogeneous groups (e.g., experts versus non-experts). We recommend that future research consider larger-scale studies for a more comprehensive exploration of this topic.

Despite the promising qualitative results reported in the probe deployment study, the limited duration and scale of our technology probe study did not allow us to track long-term user behaviors to quantitatively examine the efficacy of our approach in producing behavioral change. Consequently, the feedback we gathered could be affected by the novelty effect <cit.>. We plan to further develop our probe into a fully functional system and conduct larger-scale, longer-term field experiments to measure the impacts on users' individual welfare (e.g., financial loss) and autonomy <cit.>. Furthermore, to measure collective welfare <cit.> and collect community intelligence, we also plan to release Dark Pita to the general public.

§ CONCLUSION

Through a series of co-design workshops and a two-week deployment study of a technology probe, we proposed and tested an end-user-empowerment approach for dark pattern intervention. We discussed implications for the design of future interventions that support users' awareness of and actions against undesired UX dark patterns. Our approach presents opportunities for coordinating with ongoing efforts to address dark patterns from the perspectives of designers, educators, and policymakers.
We laid out a research agenda to scale up this approach by utilizing developments in crowd-sourced collective intelligence, citizen science platforms, computational UI techniques, and user behavior modeling to guide future work in this domain.

§ APPENDIX

§.§ Co-Design Workshop Participant Demographics

§.§ Themes That Emerged from the Co-Design Workshops

The top two levels of themes generated from our qualitative analysis of the co-design workshops. The level-3 codes are not included due to their large quantity.

Past Experience With Dark Patterns
* Users' experiences and perceptions of dark patterns are individualized for specific dark pattern instances.
* Users are frustrated and concerned about dark patterns, but even when they are aware and intend to change, they still feel manipulated.
* Even when users are aware of and concerned about dark patterns, they might have to put up with them because the platform service is essential to them.

Factors Making Dark Patterns Annoying
* Users perceive dark patterns as more annoying when they are not expected, involve deception, potentially cause financial loss, and are hard to solve or make them lose autonomy.
* Users are more directly affected by individual dark pattern examples; their understanding of dark patterns comes from specific interactions with these instances.
* Users subconsciously evaluate the benefits and losses caused by each dark pattern instance. This evaluation is later translated into individual coping methods.

Desired Support From Intervention
* Users desire information from our extension about the impact of dark patterns and the severity of that impact, to support their subconscious benefit/loss calculation and to develop coping strategies for dark patterns.
* Users proposed a variety of strategies to make changes to specific interface components with dark patterns.
* Users want to change the interface layout to make relevant and useful information prominent.
* Users want to change the user flow of the service against dark patterns.
* Users want to have autonomy over the general extension use experience.

§.§ Dark Pattern Instance Details

Dark pattern instances with types <cit.> and descriptions.

* Prominent “Buy Now” Button (False Hierarchy): The “Buy Now” button on the product page is designed in a more prominent orange color, making it easier to click on than the safer “Add to Cart” button, although they serve similar purposes. “Buy Now” provides a frictionless experience that accelerates customers' checkout process and encourages purchase <cit.>. It improves Amazon's conversion rate but can potentially make users buy unnecessary items they regret later.
* Disguised Ads (Hidden Information / False Hierarchy): The interface design puts the “sponsored” tag at the bottom corner of the ad in a small, gray font. It is easy for users to miss the tag and not realize the content is sponsored. In some interfaces the tag can also be positioned at the top right of the item. This lack of positional consistency makes it hard for users to catch the tag every time.
* Fake Discounts (Hidden Information): Discount information may be exaggerated or shown in a misleading way, with a small, disguised information button leading to another webpage describing the details for different types of discount information <cit.>. It tricks users into thinking the items are on sale and buying them, while the price might be the same according to third-party price tracking sources.
* Limited Time Recommendation (Limited-Time Message / False Hierarchy): The recommended items often appear on the homepage with limited-time offer information. The design also makes this part dominant, taking up the whole top section of the viewport.
* Video Autoplay (Autoplay): When users hover over recommended videos on the YouTube homepage, the video starts to play automatically. It exploits the psychological fact that users are more attracted to moving things <cit.> and tries to intrigue users so that they start watching the videos.
* Hiding Dislike Count (Hidden Information): YouTube only displays the like count, not the dislike count. This may cause bias, as a video's collective rating helps users choose videos to watch and affects their evaluations of the content. Hiding the other half of the information may inflate users' positive perception of all videos across the platform.
* Auto Recommendations (Autoplay): Video recommendations on the sidebar after a video plays are individualized to cater to users' watching preferences, and autoplay on hover makes users even more likely to watch them.
* Hiding Total Episode Time (Hidden Information): The timeline only shows the time remaining, not how long you have spent on the episode. This can prevent users from realizing how long they have spent watching.
* Automatic Preview (Autoplay / False Hierarchy): Netflix automatically plays the featured trailer for you upon arriving on the site. The featured content is also displayed disproportionately compared to other videos, which are essentially equivalent to users.
* Fake Trending Content (High-Demand Message / Pre-Selection): In the “trending” section, Twitter personalizes the content for individual users but makes it seem to be based only on the content's popularity. Many users are not aware of this and do not like this deceptive feature.
* Disguised Suggested Tweets (Hidden Information / High-Demand Message): This is content suggested by the Twitter algorithm, but it does not explicitly label itself as “sponsored” or “suggested”. Instead, it uses confusing labels such as “Popular videos”, which users often miss.
* Sneaking Short Videos Into Feed (High-Demand Message / Hidden Information): Facebook often sneaks short video content that users don't follow into their feeds in the form of a widget named “Reels” to promote short video content on the platform. It is similar to the dark pattern “sneak into basket” <cit.> for shopping websites, but instead of losing money by buying unwanted items, users lose time on content they did not originally plan to consume.
* Disguised Sponsorship (Hidden Information): Content promoted by the Facebook algorithm simply has a light-colored text label saying “Sponsored” or “Suggested for you”, which many users miss. Otherwise, it appears identical to regular posts in a user's feed.

§.§ UI Enhancement Details

UI enhancements with targeted dark patterns, intervention strategies, and descriptions.

* Prominent “Buy Now” Button
  - Hiding: Dark Pita will make the “Buy Now” button disappear. Users can still purchase the item by adding it to the shopping cart and checking out there.
  - Fairness: Dark Pita will make the “Buy Now” button the same color as the regular “Add to Cart” option.
  - Friction: Dark Pita will add an overlay as friction before proceeding to purchase when users try to click “Buy Now”.
* Disguised Ads
  - Hiding: Dark Pita will hide the disguised ads, which eliminates ads camouflaged as regular items.
  - Friction: Dark Pita will make the cursor invisible if it navigates through the ad area, which rouses users' attention.
  - Information Disclosure: Dark Pita will catch the content of the ambiguous ads and explicitly present their ad identity by adding extra text labels.
  - Counterfactual Thinking: Dark Pita will mark that the item(s) may be promoted because they paid Amazon. This may also help users avoid unnecessary browsing.
* Fake Discounts
  - Hiding: Dark Pita can hide the discount information and prevent it from influencing users' decisions.
  - Information Disclosure: Dark Pita will help users understand the rationale behind the price and the marketing jargon.
  - Counterfactual Thinking: Dark Pita will add some visual effects and remind users to give a second thought before making their purchase decisions.
  - Action Guide: When users hover over the discount price, Dark Pita will provide suggestions on actions that users can take towards this item. Those actions are in accordance with users' long-term goals.
* Limited Time Recommendation
  - Hiding: Dark Pita will hide ALL recommended-items sections on the homepage, which drastically eliminates potential distractions.
  - Counterfactual Thinking: Dark Pita will add visual effects and remind users to think twice before mindlessly browsing.
  - Reflection: Dark Pita will track and show users' extra cost on Amazon that is likely caused by dark patterns.
* Video Autoplay
  - Hiding: Dark Pita will hide ALL recommended videos on the homepage. Users can still use the search bar on top; this is helpful when users visit YouTube with a specific purpose in mind.
  - Disabling: Dark Pita will disable the preview function. This may prevent users from being distracted by the motion.
  - Reflection: Dark Pita will track and show the extra time users spend on YouTube due to dark patterns.
* Hiding Dislike Count
  - Information Disclosure: Dark Pita will show the hidden dislike counts.
* Auto Recommendations
  - Hiding: Dark Pita will hide ALL recommended videos on the sidebar. This is helpful when users want to prevent themselves from binge-watching.
  - Disabling: Dark Pita will disable the preview function. This may prevent users from being distracted and spending too much time on YouTube.
  - Reflection: Dark Pita will track and show the time users spend on YouTube and let users know how much of it is due to the dark patterns on YouTube interfaces.
* Hiding Total Episode Time
  - Reflection: Dark Pita tracks and shows the time users spend on Netflix. This may prevent binge-watching.
* Automatic Preview
  - Disabling: Dark Pita will disable the background preview on the Netflix homepage. This may help prevent users from being distracted by the featured content.
* Fake Trending Content
  - Hiding: Dark Pita will hide this section to help you focus on your feed. Users can always use the “Explore” tab on the left sidebar. This may help users reduce Twitter consumption.
* Disguised Suggested Tweets
  - Information Disclosure: Dark Pita will detect this type of tweet and explicitly mark it as promoted for users.
  - Friction: Dark Pita will detect this type of tweet and replace it with an overlay. If users still want to see the tweet, they can click on the reveal button.
* Sneaking Short Videos Into Feed
  - Hiding: Dark Pita will hide the Reels.
  - Counterfactual Thinking: Dark Pita will prompt users to think about the mechanism behind the selected content.
  - Friction: Dark Pita will add an overlay to the Reels to prevent users from immediately being distracted by them. If users still want to view them, they can click on the reveal button.
* Disguised Sponsorship
  - Hiding: Dark Pita will hide the selected suggested content for users.
  - Information Disclosure: Dark Pita will explicitly label it as suggested content.

§.§ Deployment Participant Demographics

§.§ Deployment Study Interview Protocols

We used the following protocols to conduct our three stages of interviews in our deployment study. Follow-up questions were asked whenever the interviewer(s) saw fit.

§.§.§ Entry Interview (1 hour)
* Gather participants' perceptions of and experiences with dark patterns in their daily lives
* Give a general introduction to dark patterns
* Ask participants about:
  * websites they usually go to and the dark patterns there (Netflix, YouTube; Amazon; Twitter, Facebook)
  * the negative emotions, felt manipulation, loss of self-autonomy, and likelihood of being influenced when seeing these dark patterns
* Show participants some dark patterns and how to use Dark Pita to change them
* Tell participants that in our study we encourage them to:
  * change a few dark patterns on the websites they use every day
  * come up with more UI design enhancement strategies they desire
  * send at least one diary note every 2 days
  * come up with more dark patterns they encounter on websites we don't yet support
* Give participants our manual, containing information and Q&A on using our extension, and let them know how to contact us or ask questions

§.§.§ Check-in Interview (30 minutes)
* Check in with participants on their questions and issues from the past week of using our extension
* Before the interview, check for any outliers in the participant's user log or diary notes, and make clarifications if needed
* Ask participants about the dark patterns they used our extension to change and ask their thoughts on:
  * how do they think the change impacted their behavior on these websites?
  * in the long term, which changes do you want to keep? Which ones do you not? Why?
  * do you have any alternative enhancements for this dark pattern that you desire?
* Did the participant find dark patterns on other websites?
* Remind participants to send diary notes regularly
* Schedule a third interview session with the participant

§.§.§ Exit Interview (1 hour)
* Understand and clarify any outliers in user log data and diary notes
* Have the participant talk about their experiences with Dark Pita during the past 2 weeks, with them referencing the websites for more context
* More specifically, what dark patterns did they use Dark Pita to change?
  * Why did the participant change it? How did the change impact their online experience?
  * How did the change make the participant feel emotionally?
  * Which UI enhancement was the participant's favorite? Why?
* What dark patterns did the participant not use Dark Pita to change?
  * Why? Is it because of the dark pattern or the intervention?
  * If it is the intervention, what intervention does the participant desire?
* Questions on educational value
  * Did the participant learn anything new about dark patterns? If so, what?
  * Did the participant do anything else to learn more about dark patterns besides our study?
What motivated them to learn more?
  * What experience did they have, or which feature in Dark Pita made them want to learn more about dark patterns?
* What new thoughts on dark patterns and intervention techniques did the participant have?
* Would the participant like to continue using the tool in the long term? Why?
* What changes does the participant want to make in the future?
* Other general feedback?

§.§ Themes Emerged from the Deployment Study

Qualitative analysis themes for the deployment study interviews. Themes from our second round of labeling were already comprehensive, and our third round of labeling yielded only limited improvement over the second round. As a result, we simply used our second-round themes as level-1.

Level-1 theme: Dark Pita was able to help users understand the concept of dark patterns, discover dark patterns on the current platform, and transfer the knowledge to other platforms.
* Our probe increased their knowledge about dark patterns.
* Dark Pita made the user realize the large number of dark patterns that exist on the website.
* Users are inspired to generalize dark-pattern-related knowledge to their daily usage of other platforms/interfaces.
* Dark Pita helped participants see dark patterns more explicitly, some of which they may have felt annoyed by before.
* The intervention provides a new perspective on online services.
* Users want to apply the probe further to other sites, services, and scenarios.

Level-1 theme: Users are concerned that if the platform provides the ability to change the interface, it will be used to collect preference data and still benefit the companies.
* Websites may provide some options for users to change dark patterns.
* While companies and users may aim at the same techniques, their goals could diverge and even be against each other.
* Users think the options provided by the services are not good or even annoying because they are still driven by the company's own benefit, namely to understand users' preferences.

Level-1 theme: Users expect a neutral community to provide such tools rather than the service providers, because of the misalignment between users' and companies' goals.
* Future expectations about user empowerment.
* Websites may provide some options for users to change dark patterns.
* While companies and users may aim at the same techniques, their goals could diverge and even be against each other.
* Users think the options provided by the services are not good or even annoying because they are still driven by the company's own benefit, namely to understand users' preferences.

Level-1 theme: Dark Pita enables users to change dark patterns, giving users positive feelings and self-autonomy.
* Users feel empowered by the actions of our probe.
* Good feelings about being empowered to change dark patterns.
* Users think many other people would like to have the ability to change dark pattern interfaces.

Level-1 theme: Design implications for interventions: less intrusive (visually and in experience), more controllable, more straightforward, less FOMO.
* Visually clear and less intrusive interventions give a better perception.
* Users request more customizability and autonomy over their interface.
* Do NOT interrupt the normal experience.
* Hiding elements on the website may give users FOMO and create backlash (users want to see more).
http://arxiv.org/abs/2310.17846v1
{ "authors": [ "Yuwen Lu", "Chao Zhang", "Yuewen Yang", "Yaxing Yao", "Toby Jia-Jun Li" ], "categories": [ "cs.HC" ], "primary_category": "cs.HC", "published": "20231027015454", "title": "From Awareness to Action: Exploring End-User Empowerment Interventions for Dark Patterns in UX" }
ViCLEVR: A Visual Reasoning Dataset and Hybrid Multimodal Fusion Model for Visual Question Answering in Vietnamese

Khiem Vinh Tran (ORCID 0000-0001-7511-2910), University of Information Technology and Vietnam National University, Ho Chi Minh City, Vietnam. Email: [email protected]
Hao Phu Phan (ORCID 0009-0000-3962-0117), HUTECH University, Ho Chi Minh City, Vietnam. Email: [email protected]
Kiet Van Nguyen (ORCID 0000-0002-8456-2742), University of Information Technology and Vietnam National University, Ho Chi Minh City, Vietnam. Email: [email protected]
Ngan Luu Thuy Nguyen (ORCID 0000-0003-3931-849X), University of Information Technology and Vietnam National University, Ho Chi Minh City, Vietnam

Corresponding author: [email protected]

Author contributions: Khiem Vinh Tran: Methodology, Data curation, Software, Writing (original draft). Hao Phu Phan: Methodology, Software. Kiet Van Nguyen: Writing (review and editing). Ngan Luu Thuy Nguyen: Supervision.

In recent years, Visual Question Answering (VQA) has gained significant attention for its diverse applications, including intelligent car assistance, aiding visually impaired individuals, and document image information retrieval using natural language queries. VQA requires effective integration of information from questions and images to generate accurate answers. Neural models for VQA have made remarkable progress on large-scale datasets, with a primary focus on resource-rich languages like English. To address this, we introduce the ViCLEVR dataset, a pioneering collection for evaluating various visual reasoning capabilities in Vietnamese while mitigating biases. The dataset comprises over 26,000 images and 30,000 question-answer pairs (QAs), each question annotated to specify the type of reasoning involved. Leveraging this dataset, we conduct a comprehensive analysis of contemporary visual reasoning systems, offering valuable insights into their strengths and limitations. Furthermore, we present PhoViT, a comprehensive multimodal fusion model that identifies objects in images based on questions. The architecture effectively employs transformers to enable simultaneous reasoning over textual and visual data, merging both modalities at an early model stage. The experimental findings demonstrate that our proposed model achieves state-of-the-art performance across four evaluation metrics. The accompanying code and dataset have been made publicly accessible at <https://github.com/kvt0012/ViCLEVR>. This provision seeks to stimulate advancements within the research community, fostering the development of more multimodal fusion algorithms, specifically tailored to address the nuances of low-resource languages, exemplified by Vietnamese.

Keywords: Visual Question Answering, Low-resource language, Vision-Language, Multimodal Fusion, Data Fusion, Visual Reasoning

Accepted: 8 August 2023

§ INTRODUCTION

Visual question answering (VQA) has emerged as a highly challenging task within the field of Artificial Intelligence (AI), attracting significant attention from researchers. The objective of VQA is to predict an answer based on an input image and a corresponding question. It serves as a fundamental component for various complex AI applications, encompassing the automatic understanding of both offline and real-time video streams.
Examples of such applications include assistive technologies for visually impaired individuals, collaborative robotics, and embodied intellectual assistants.VQA presents a multimodal challenge that necessitates the seamless integration of fine-grained image analysis techniques with advanced natural language models. One of the primary hurdles in VQA <cit.> lies in addressing the symbol grounding problem, which remains an unresolved issue in AI. In essence, this problem revolves around establishing meaningful connections between symbols within an AI model and real-life objects and situations. In the context of VQA <cit.>, the challenge involves mapping the symbols employed in natural language processing models, which interpret questions, to the objects and situations depicted in visual scenes processed by computer vision models.The field of VQA predominantly focuses on a select number of high-resource languages, with English being the primary focus, which overlooks the diverse linguistic landscape represented by billions of speakers <cit.>. Data-intensive deep learning systems have led to significant advancements in VQA performance for high-resource languages. However, the lack of extensive datasets for low-resource languages presents a formidable challenge in their NLP processing <cit.>. Consequently, addressing VQA in low-resource scenarios has become one of the foremost open challenges in the field of VQA today.The challenges faced by languages with limited linguistic resources, such as Vietnamese, should indeed be recognized. Despite being the national language of Vietnam and spoken by nearly 100 million people, making it the 15th most widely spoken native language globally, Vietnamese still encounters resource scarcity, which poses obstacles in various artificial intelligence domains, including visual question answering research and development [https://www.worldometers.info/world-population/vietnam-population/].Research on Visual Question Answering (VQA) has predominantly focused on well-resourced languages, especially English, since its emergence around 2015 <cit.>. Nevertheless, there exists a significant research lacuna concerning VQA in languages with limited resources, such as Vietnamese. Addressing this void, Tran et al. (2021) <cit.> unveiled the ViVQA dataset, pioneering the development of a VQA dataset specifically for the Vietnamese language. However, it's imperative to acknowledge that the ViVQA dataset encapsulates only a fragment of the VQAv2 dataset since it was constructed utilizing a machine translation approach.Taking into consideration the conceivable complications associated with validation and the intrinsic constraints of outcomes derived from machine translation, the efficacy of the semi-automated approach introduced in <cit.> may not ascertain performance at a human level in the translation of an English VQA dataset into Vietnamese. As a result, the credibility of the ViVQA dataset as a benchmark may be compromised for executing experimental procedures, assessing VQA systems, or fostering advancements in Vietnamese VQA research.Therefore, there is a clear and urgent need for a new semi-automatically annotated VQA dataset that can serve as a robust benchmark for VQA research in Vietnamese, addressing the limitations and potential issues associated with the existing ViVQA dataset. This effort would contribute significantly to the development and advancement of VQA in low-resource languages like Vietnamese. 
Additionally, our research endeavors encompass the execution of a series of experiments, predicated on the methodologies and frameworks outlined in existing scholarly works. These experiments are conducted utilizing contemporary approaches, with a view to assessing and corroborating the validity and efficacy of current theories and models. Subsequent to this rigorous examination, we advocate for a novel hybrid multimodal fusion strategy, a conceptual framework conceived to synthesize existing approaches, with the aspiration of realizing superior empirical outcomes. The newly proposed paradigm serves as a substantial and robust baseline, instrumental for advancing subsequent inquiries and scholarly explorations, specifically within the context of our distinctive dataset and, more broadly, within the realms of Visual Question Answering (VQA) and Visual Reasoning tasks. The methodological innovation proffered by this research holds significant implications for the proliferation of knowledge and the inception of novel investigative trajectories within the interdisciplinary domain of visual cognition and computational reasoning, hence contributing to the cumulative progression of the academic discourse in this scientific field. Our contributions are as follows: * Firstly, we introduce the ViCLEVR dataset, which serves as a data source for Vietnamese Visual Question Answering (VQA). This benchmark encompasses four independent metrics, allowing for a comprehensive evaluation and providing insights into the performance of VQA systems. Additionally, the ViCLEVR benchmark can be utilized to assess part-based reasoning capabilities.* Secondly, we introduce a novel visual reasoning methodology incorporating a hybrid multimodal fusion mechanism, which integrates elements of the Vietnamese language to bolster reasoning capabilities in tasks related to visual understanding. This innovative approach exploits the distinctive attributes inherent to the Vietnamese language, aiming to augment the precision and efficacy of visual reasoning processes. The integration of linguistic elements serves to enhance the nuanced understanding of visual stimuli, facilitating advanced interpretative and analytical insights, thereby contributing to the evolving discourse in the field of computational visual cognition and reasoning.* Thirdly, we conduct a detailed analysis of five existing methods, alongside our novel approach, to investigate the impact of different model designs on the performance of VQA specifically for Vietnamese. Through this analysis, we gain valuable insights into the strengths and limitations of these methods, as well as our own proposed approaches, in the context of Vietnamese VQA.This manuscript is structured in the subsequent manner. In <ref>, we delve into the existing methodologies prevalent in the field. In <ref>, the paper's principal contributions, specifically the development of a Vietnamese reasoning dataset and its analytical aspects, are accentuated. Next,<ref> delineates the theoretical underpinnings germane to the proposed methodology. A comparative assessment of the proposed solution vis-à-vis other extant baselines using the ViCLEVR dataset is furnished in <ref>. Finally, the study concludes with reflective observations in <ref>.§ RELATED WORK VQA datasets VQA-v1.0 <cit.> is a well-established Visual Question Answering (VQA) dataset that utilizes the COCO dataset <cit.>. 
It comprises two distinct subsets: VQA-v1-real, featuring real photographs, and VQA-v1.0-abstract, incorporating artificially generated cartoon images. The training phase of VQAv1-real utilizes 123,287 images, while the testing phase employs 81,434 images selected from the COCO dataset.VQA v2.0 <cit.> represents an updated iteration of the VQA dataset, designed to address previous concerns and biases. The training set consists of 443,757 image pairs, the validation set includes 214,354 image pairs, and the test set encompasses 447,793 image pairs. Remarkably, this revised version is twice the size of its predecessor. The dataset comprises 1.1 million pairs of images and questions, accompanied by 13 million associated answers, all of which have been annotated arbitrarily.Visual Genome <cit.> is a dataset that aims to enhance cognitive abilities, specifically spatial reasoning, through the practice of genome visualization. The dataset comprises a large collection of over 108,000 photos, each featuring an average of 35 distinct items. These items are associated with 26 characteristics, and there exist 21 pairwise interactions between objects within the photo. The primary objective of Visual Genome is to facilitate improved performance in cognitive tasks involving spatial connection thinking.The Visual 7W dataset <cit.> is derived from the comprehensive Visual Genome dataset <cit.> and focuses on a subset of the available data. It consists of 327,939 question-answer pairs and 47,300 images sourced from MS COCO <cit.>. The dataset includes 1,311,756 multiple-choice questions, carefully categorized into seven distinct question types: what, where, when, who, why, how, and which, collectively forming the "7W" classification.VizWiz <cit.> is a groundbreaking vision dataset, obtained from individuals who are blind, making it the first widely accessible dataset of its kind. It presents an intriguing challenge within the field of Visual Question Answering (VQA) <cit.> by focusing on the prediction of whether a given visual question can be answered. The dataset builds upon previous research <cit.> and encompasses a collection of 72,205 visual questions, accumulated over a span of four years. These questions were collected through the VizWiz mobile application, available on both iPhone and Android platforms.The KVQA (Knowledge-aware VQA) dataset <cit.> was curated with a specific focus on questions that necessitate external knowledge for accurate answers. It contains a total of 183,000 question-answer pairs, with approximately 18,000 individuals captured within 24,000 images. Answering the questions in this dataset requires employing multi-entity, multi-relation, and multi-hop reasoning over a Knowledge Graph (KG) <cit.>. Another distinctive aspect of this dataset is the presence of inquiries that extend beyond KG entities as ground-truth answers.OKVQA dataset <cit.>is the most extensive knowledge-based Visual Question Answering (VQA) dataset available, featuring comprehensive annotations, including questions, answers, and knowledge categories. This dataset comprises 14,031 images accompanied by 14,055 diverse questions covering a wide array of topics, such as travel, materials, sports, cooking, geography, plants, animals, science, weather, and many others. Visual reasoning in VQA The CLEVR dataset <cit.> is designed as a diagnostic tool for the evaluation of explicit visual reasoning abilities <cit.> within the context of Visual Question Answering (VQA). 
The dataset comprises an extensive collection of 100,000 images, accompanied by 864,968 associated questions. Ground-truth annotations are provided for the photographs, encompassing essential item properties such as size, shape, material, color, and spatial coordinates.Berkeley's SHAPES dataset <cit.> presents a valuable resource for the investigation of explicit reasoning in visual question-answering tasks. It consists of synthetic images featuring 2D abstract shapes. The dataset includes 15,616 synthetic pictures, each encompassing diverse sizes and spatial placements, alongside a set of 244 binary questions requiring responses in a yes or no format.The GQA dataset, developed by Stanford <cit.>, is a recent addition to their collection, specifically designed to facilitate scene comprehension and reasoning tasks. It comprises a comprehensive set of 113,018 real-world images sourced from the Visual Genome dataset, accompanied by their corresponding scene graphs. The scene graphs are subjected to extensive normalization and rectification processes to ensure precise annotations and generate high-quality queries. In addition to the images and scene graphs, the GQA dataset includes a vast collection of 22,669,678 multi-step questions. These questions are generated using a sophisticated question engine that leverages the rich information extracted from the scene graphs. Moreover, the dataset encompasses 524 structural patterns, with 250 patterns manually constructed and an additional 274 patterns retrieved from the VQA 1.0 dataset. These structural patterns serve as valuable resources for guiding the generation of diverse and challenging questions within the GQA dataset.The NLVR dataset <cit.> is a multimodal dataset that combines human language descriptions with synthetic visuals. The images feature various objects, such as triangles, circles, and squares, arranged in different sizes and positions within the image. The dataset includes hand-written descriptions for each image, provided by crowd workers. The NLVR2 dataset <cit.> was designed to overcome language bias and improve upon the limitations of the original NLVR dataset, which was synthetic in nature. The NLVR2 dataset, also known as Natural Language for Visual Reasoning, includes pairs of visuals along with corresponding grounded natural language descriptions, similar to NLVR. By incorporating real-world images and more diverse language, NLVR2 aims to address issues such as restricted expressivity and semantic diversity encountered in the synthetic NLVR dataset.In accordance with CLEVR, a proliferation of variant datasets has been generated, eliciting heightened interest and participation from scholars in the field. CLEVR-HUMAN <cit.> is tailored to the collection of human-generated free-form natural language queries concerning CLEVR images. CLEVR-Hans <cit.> represents an innovative visual scene dataset characterized by its intricate portrayal of complex compositions involving diverse objects. This dataset further categorizes CLEVR images into multiple discrete classes, facilitating granular investigations. CLEVR-Math <cit.> introduces a multimodal math word problems dataset, encompassing straightforward mathematical word problems primarily involving addition and subtraction. These problems are elucidated through a hybrid representation comprising textual descriptions and complementary images, effectively illustrating the contextual scenario. 
CLEVR-X <cit.> extends the foundational CLEVR dataset with the incorporation of natural language explanations <cit.>, enhancing the dataset's interpretability and overall utility. Super-CLEVR <cit.> emerges as a comprehensive initiative, systematically addressing diverse facets within the domain of Visual Question Answering (VQA) <cit.>. This dataset introduces four pivotal factors for examination, encompassing visual complexity, question redundancy, concept distribution, and concept compositionality, with the aim of advancing the collective understanding of these critical dimensions.Nonetheless, despite the availability of these invaluable resources, a noticeable gap is discernible within the realm of visual reasoning datasets akin to CLEVR, particularly within the context of widely spoken yet low-resource languages, exemplified by the Vietnamese language. This conspicuous lacuna within the research landscape serves as a compelling catalyst propelling our ongoing efforts to construct a novel CLEVR-style dataset, meticulously tailored to prevalent low-resource languages, with Vietnamese serving as a prominent exemplar, thus redressing this unmet need.Visual Question Answering Datasets in Vietnamese While there are myriad benchmarks for the Visual Question Answering (VQA) task in English <cit.>, languages with scarce linguistic resources, such as Vietnamese, face a notable dearth of such resources. In a significant stride in 2021, Tran et al. <cit.> launched the ViVQA dataset, marking the inception of a dedicated VQA dataset for Vietnamese. To craft this dataset, machine translation techniques were harnessed to transcribe questions and answers from a segment of the VQAv2 dataset into Vietnamese, followed by an exhaustive verification to ensure the accuracy and fluency of the translations. Building on these foundational steps, Nguyen et al. in 2022 <cit.> broke new ground by unveiling a multilingual dataset through a shared task. Notably, this dataset incorporates the Vietnamese language, broadening the scope of VQA research to delve into the Vietnamese linguistic setting. This seminal work signifies a monumental leap in VQA research, especially tailored to the unique linguistic nuances and demands of Vietnamese.Moreover, in a subsequent study, Nguyen et al. <cit.> presented the OpenViVQA (Open-domain Vietnamese Visual Question Answering) dataset. This extensive compilation is curated for VQA tasks that necessitate open-ended responses in Vietnamese and comprises over 11,000 images paired with more than 37,000 question-answer sets.To the best of our knowledge, there remains an unfulfilled need for a dataset in Vietnamese that zeroes in on visual reasoning. Motivated by this void, our research aims to bridge this lacuna and augment the domain of visual reasoning tailored to the Vietnamese linguistic context.§ VICLEVRDATASET ViCLEVR provides a dataset posing challenges that necessitate advanced reasoning skills for effective resolution. It acts as a pivotal tool for performing extensive diagnostic studies, focusing on discerning the depth of visual reasoning proficiencies inherent in Visual Question Answering (VQA) systems. For meticulous management and integrity of the dataset, it employs synthetic images and auto-generated questions.Every image in the dataset is paired with accurate object locations and attributes, offering exact and dependable referential data. 
Questions contained within the dataset are also rendered in a format that is machine-readable, enabling methodical analysis and assessment. The presence of these ground-truth configurations permits diverse analytical approaches, including evaluations based on the type of question, the topology of the question (examining chain versus tree structures), the length of the question, and varied object relationships. Such in-depth examinations aid in acquiring a holistic comprehension of the competencies and performance levels of VQA models.§.§ OverviewOur dataset consists of 26,000 rendered images from CLEVR dataset <cit.> and 30,000 semi-auto-annotated questions. The images are rendered from a synthetic scene with fixed objects and materials. The questions are generated using a grammar that allows for a wide range of compositional queries. Motivated by CLEVR <cit.>, the questions in our dataset are divided into six categories: * Counting: These questions ask how many objects of a particular type exist in the image.* Color: These questions ask which color of the particular object exists in the image.* Comparison: These questions ask to compare two objects in the image.* Size: These questions ask for the size of an object in the image.* Material: These questions ask about the material of an object in the image.* Shape: These questions ask about the shape of an object in the image.To enhance the evaluative process, we provide meticulous annotations for each question, offering explicit descriptions of the reasoning processes and approaches necessary to arrive at the correct answer. These annotations serve as an integral component in evaluating the performance of visual question answering (VQA) models. §.§ Question-Answer Pair CreationThe CLEVR dataset formulates questions using a template-driven methodology, leveraging what are termed as "question families" to automate question generation. Within CLEVR, there exist 90 distinct question families. Each of these families is characterized by a singular program template and, on average, contains four text templates. The derivation of these text templates is twofold: initial templates are manually crafted with one or two templates allocated per family, and the remainder are sourced through crowdsourced rephrasing of questions. To amplify linguistic variability, alternative terms describing shape, color, and material are integrated. Remarkably, by employing templates that house up to 19 variables, these limited question families are capacitated to produce an extensive array of distinct questions.However, it should be noted that the aforementioned approach is not directly applicable to generating Vietnamese questions due to the grammatical differences in the structure of the Vietnamese language. Consequently, an alternative approach is required for creating Vietnamese questions within the CLEVR framework, drawing inspiration from the construction process employed in the CLEVR <cit.> and GQA datasets <cit.>.In order to complete this phase, we enlist a team of proficient Vietnamese crowd workers. By harnessing the advantages of crowdsourcing, our goal is to amass a substantial volume of data with the desired variations that ensure linguistic diversity in the generation of authentic questions and answers, along with a comprehensive vocabulary. The dataset's quality is of utmost significance, and to uphold it, we have established meticulous guidelines for monitoring and maintaining the dataset's integrity. 
Crowd workers are furnished with a series of protocols, delineated in Table <ref>, with which compliance is mandatory. To preserve the exploratory nature of the questions and answers, questions are expected to predominantly elicit information rather than take a binary or multiple-choice form. Similarly, answers are expected to be extensive, going beyond single-word replies. Additionally, careful attention is paid to regulating queries that concentrate on quantities, colors, and orientations. Although pivotal for discerning and differentiating objects, these elements are prone to linguistic divergences and inconsistencies during the crowdsourcing phase. Illustrations embodying these norms are depicted in Figure <ref>.

Crowd workers are responsible for formulating question-answer (QA) pairs for an assigned set of images. When the cumulative number of QAs associated with an image falls below the minimum threshold stipulated in the guidelines, crowd workers may generate QAs with semantics analogous to the existing ones, mainly for images of limited complexity. Conversely, when an image lacks specific and recognizable details, it may be excluded from the procedure. The generation of questions and answers continues until all predetermined subsets are completed.

§.§ Dataset Validation

In order to maintain a high-quality and consistent dataset, we subject it to a rigorous validation process, which is depicted as one of the steps of the pipeline illustrated in <ref>. Initially, a skilled crowd worker is assigned a portion of the dataset's subsets and tasked with identifying and rectifying any spelling or syntax errors they encounter. This process aims to enhance the overall accuracy and linguistic integrity of the dataset.

To prepare for the subsequent training phase, the dataset's question-answer (QA) pairs undergo preprocessing. This entails transforming the text into lowercase and introducing suitable whitespace between words and punctuation. Such preprocessing fosters uniformity and consistency in the textual information, guaranteeing its alignment with the training models and algorithms.
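To make this normalization step concrete, the following minimal sketch illustrates one way such preprocessing could be implemented. The exact rule set used for ViCLEVR is not specified here, so the regular expressions and the function name are illustrative assumptions rather than the released evaluation or preparation code.

```python
import re

def normalize_qa_text(text: str) -> str:
    """Illustrative normalization: lowercase the text and put single spaces
    around punctuation so that tokens become whitespace-separable."""
    text = text.lower()
    # Separate common punctuation marks from adjacent words.
    text = re.sub(r"([?.!,;:])", r" \1 ", text)
    # Collapse the repeated whitespace introduced by the previous step.
    text = re.sub(r"\s+", " ", text)
    return text.strip()

# Example usage on a Vietnamese question from the dataset.
print(normalize_qa_text("Có bao nhiêu vật trụ màu xanh?"))
# -> "có bao nhiêu vật trụ màu xanh ?"
```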
§.§ Data Analysis

The ViCLEVR dataset encompasses a total of 26,000 images, accompanied by 30,000 question-answer pairs that pertain to the visual content of the images. To ensure unbiased evaluation, we randomly partition the dataset into training, test, and validation sets at a ratio of 7:2:1, respectively.

The distribution of question lengths within the ViCLEVR dataset is depicted in <ref>, showcasing the dataset's diversity and the intricate nature of its questions. Notably, a considerable proportion of the questions falls within the length range of 15 to 25 words.

A statistical analysis of ViCLEVR, as shown in <ref>, reveals that a substantial portion of the questions exhibit distinct characteristics, while only a limited number of questions from the validation and test sets are present in the training set.

Within ViCLEVR, the linguistic attributes of the dataset cover several statistical dimensions, such as the frequency of questions and answers and the semantic dependencies manifested within both. The height of the semantic tree, structured based on these semantic dependencies, is also included, as detailed in Table <ref>.

The Linguistic Complexity Specification (LCS) methodology, as presented by <cit.>, serves to evaluate the intricacy of linguistic constructs within sentences. It probes the statistical interplay among the tokens of a given sentence, drawing on the output of the dependency parser for the language in question. Leveraging the insights from dependency analysis, LCS builds semantic structures and then gauges their depths. Higher dependency counts coupled with deeper semantic structures indicate greater sentence complexity.

Concurrently, the Linguistic Level Specification (LLS) method <cit.> is used to classify the given texts into individual words, compound phrases, or full sentences, based on the same dependency parser. The underlying idea is that texts consisting of a single token (segmented by word for Vietnamese or demarcated by spaces for English) are labeled as words. Texts that feature a central token functioning as a verb, complemented by another token serving as its subject, are labeled as sentences. The remaining texts are categorized as phrases. Applying the LLS method reveals the linguistic level that humans most commonly choose when answering questions, underscoring the inherent spontaneity and breadth of the dataset's answers.

§ OUR PROPOSED MODEL

The method we propose is architecturally segmented into four principal components: the Image Embedding module for assimilating visual information, the Question Embedding module for textual integration, the Multimodal Fusion module for amalgamating the extracted features, and the Classifier layer. The latter is instrumental in forecasting the corresponding answers; a schematic of the model is illustrated in <ref>. The defining structural attributes of this model unfold as follows.

§.§ PhoViT

§.§.§ Question embedding

For each individual instance, during both training and testing, the input comprises a textual question and an associated image. The question undergoes tokenization, wherein it is segmented into constituent words using spaces and punctuation marks. Numerical values and number-based words are also treated as words in this context. Every word is transformed into a vector representation through a look-up table. The entries of this table are 300-dimensional vectors, which are learned jointly with the other training parameters but are initialized with pre-trained values. In our study, we employ PhoW2V embeddings <cit.>, which are 300-dimensional word representations generated with the Word2Vec skip-gram model <cit.>, specifically adapted for the Vietnamese language and derived from an extensive 20GB corpus of Vietnamese text.

The initial step involves tokenizing the input question into individual words, truncated to a maximum length of 44 words. Each word within the question is then converted into a vector using the 300-dimensional PhoW2V word embeddings pre-trained on the corpus mentioned above. This process yields a sequence of word vectors, constituting a matrix of dimensions n × 300, where n ∈ [1, 44] denotes the number of words in the question. The resulting question features Q for all words are retained and collectively form a question feature matrix denoted as Y ∈ R^n× d.
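As a concrete illustration of this embedding step, the sketch below builds the n × 300 question matrix from pre-trained vectors. The whitespace tokenizer, the toy vocabulary, and the handling of unknown tokens are assumptions made for illustration, not the exact PhoViT implementation.

```python
import numpy as np

MAX_LEN, EMB_DIM = 44, 300

def embed_question(question: str, word2vec: dict) -> np.ndarray:
    """Map a normalized Vietnamese question to an n x 300 matrix of
    PhoW2V-style vectors, truncating to MAX_LEN tokens. `word2vec` is
    assumed to map a token to a 300-d vector; unknown tokens fall back
    to a zero vector here (one simple choice among several)."""
    tokens = question.split()[:MAX_LEN]
    vectors = [word2vec.get(tok, np.zeros(EMB_DIM, dtype=np.float32))
               for tok in tokens]
    return np.stack(vectors)  # shape: (n, 300), with n in [1, 44]

# Example with a toy vocabulary standing in for the real PhoW2V table.
toy_w2v = {"màu": np.ones(EMB_DIM, dtype=np.float32)}
Q = embed_question("màu sắc của khối lập phương là gì ?", toy_w2v)
print(Q.shape)  # (9, 300)
```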
§.§.§ Image embedding

The input image is processed by a standard Transformer architecture, drawing inspiration from the Vision Transformer (ViT) described by <cit.>. The image, represented as e_x ∈ ℝ^H × W × C, where H and W delineate its resolution and C indicates the channel count, is restructured into a sequence of flattened 2D patches, expressed as x_p ∈ ℝ^N × (P^2 · C). The patch dimensions are denoted by (P, P), and the total number of patches is computed as N = HW/P^2, which also defines the Transformer's input sequence length. Throughout the Transformer's layers, a consistent latent vector size D is retained. The patches are linearly projected into D-dimensional vectors, yielding the patch embeddings, and a learnable embedding is placed at the start of the sequence. The final state of this embedding after Transformer encoding becomes the image representation y.

For both pre-training and subsequent fine-tuning, a classification head is attached to the Transformer encoder. In the pre-training phase, this head is a multi-layer perceptron (MLP) with one hidden layer, while during fine-tuning a single linear layer is employed. To maintain positional context, position embeddings are added to the patch embeddings. Our model opts for conventional 1D position embeddings, since only negligible performance gains were observed with more intricate 2D-aware embeddings. The processed sequence of embedding vectors then serves as the input to the Transformer encoder, which follows the structure detailed by <cit.>. This encoder consists of successive layers of multi-headed self-attention and MLP modules, and its output is mapped onto the image features I.

§.§.§ Multimodal fusion

The Multimodal Fusion module comprises a deep stacked attention module, akin to the approach presented in <cit.>. Using the question features Q and the image features I introduced above, we perform deep co-attention learning by passing the input features through a deep co-attention model comprising K deep attention (DA) layers arranged in a cascaded manner (denoted as DA_1, DA_2, ..., DA_K). Denoting the input features of the K-th DA layer as Q_K-1 and I_K-1, their output features are Q_K and I_K, respectively. These subsequently become the inputs of the next DA layer, following a recursive progression.

[Q_K, I_K] = DA_K([Q_K-1, I_K-1])

The deep stacked fusion model aligns the K deep attention (DA) layers <cit.> in depth, so that Q_K and I_K are the final attended features of the question and the image at the last layer. The resulting fused embedding delineates the representations of both the image and the question. Subsequently, this fused embedding is streamlined and channeled towards the designated output classifier.
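The following minimal PyTorch sketch illustrates the cascaded deep-attention idea expressed by [Q_K, I_K] = DA_K([Q_K-1, I_K-1]). The internal layout of each DA layer (self-attention within each modality followed by question-guided attention over the image features, in the spirit of co-attention networks), the dimensions, and all hyperparameters are assumptions for illustration rather than the exact PhoViT configuration.

```python
import torch
import torch.nn as nn

class DALayer(nn.Module):
    """One deep attention (DA) layer: self-attention within each modality,
    then question-guided attention over the image features (assumed layout)."""
    def __init__(self, d_model: int = 512, n_heads: int = 8):
        super().__init__()
        self.q_self = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.i_self = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.i_guided = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_q = nn.LayerNorm(d_model)
        self.norm_i1 = nn.LayerNorm(d_model)
        self.norm_i2 = nn.LayerNorm(d_model)

    def forward(self, Q, I):
        Q = self.norm_q(Q + self.q_self(Q, Q, Q)[0])      # question self-attention
        I = self.norm_i1(I + self.i_self(I, I, I)[0])     # image self-attention
        I = self.norm_i2(I + self.i_guided(I, Q, Q)[0])   # question-guided attention
        return Q, I

class DeepStackedFusion(nn.Module):
    """Cascade of K DA layers: [Q_k, I_k] = DA_k([Q_{k-1}, I_{k-1}])."""
    def __init__(self, d_model: int = 512, n_heads: int = 8, K: int = 6):
        super().__init__()
        self.layers = nn.ModuleList([DALayer(d_model, n_heads) for _ in range(K)])

    def forward(self, Q, I):
        for layer in self.layers:
            Q, I = layer(Q, I)
        return Q, I  # attended question and image features from the last layer

# Toy usage: a batch of 2 questions (44 token vectors) and images (196 patches),
# both already projected to a shared 512-dimensional space.
Q = torch.randn(2, 44, 512)
I = torch.randn(2, 196, 512)
Qk, Ik = DeepStackedFusion()(Q, I)
print(Qk.shape, Ik.shape)  # torch.Size([2, 44, 512]) torch.Size([2, 196, 512])
```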
§.§.§ Output classifier

Following the deep stacking phase of multimodal fusion, the fused features carry rich information about the attention distribution over the words of the question, denoted Q_w, and the regions of the image, denoted I_w. Accordingly, an attentional reduction model is constructed, integrating a multi-layer multi-head self-attention (MSA) mechanism to obtain the reduced features Q_f and I_f. To illustrate, considering Q_w, the attended feature Q_f is derived as:

α = softmax(MSA(Q_w))

Q_f = ∑_i=1^n α_i × Q_w_i

Herein, α = [α_1, α_2, ..., α_n] ∈ R^n are the learned attention weights over the n attended word features; the computation for I_w is analogous. Using the computed Q_f and I_f, a linear multimodal fusion function is defined as:

z = LayerNorm(W^T_x × Q_f + W^T_y × I_f)

Here, W_x, W_y ∈ R^d×d_f are two linear projection matrices, with d_f denoting the common dimensionality of the fused feature. The fused feature z is subsequently projected into a vector s ∈ R^N, where N is the number of most frequent answers within the training set, followed by the application of a sigmoid function.

In the following steps, the final pooled outputs corresponding to the question and the image are consolidated into a concatenated representation. This representation is then channeled into a classifier layer, implemented via the Vision-Language Feed-forward Network (VL-FFN) as a fully connected layer, and used to forecast the associated answer.

In alignment with the findings of <cit.>, leveraging an assortment of modality experts augments the model's ability to capture a broader array of modality-specific information. The shared self-attention module excels at discerning alignments between modalities, enabling tighter integration for tasks with multimodal characteristics, such as those involving vision and language. This methodology supports a careful amalgamation of the nuanced elements of each modality, leading to a more robust harmonization of information across modalities.

For simplicity, PhoViT is trained with a binary cross-entropy (BCE) loss <cit.>. BCE is employed as the loss function to train an N-way classifier constructed atop the fused feature z. Throughout the inference phase, caption tokens are generated sequentially in an autoregressive fashion, enabling a coherent and contextually aware generation of caption components. This method ensures the holistic incorporation of information in the generation process, providing a sequentially refined output.

L = -∑_i=1^M ∑_j=1^N [ z_ij log(ẑ_ij) + (1 - z_ij) log(1 - ẑ_ij) ]

In this context, the index i traverses the M training questions and the index j the N candidate answers, where z_ij denotes the ground-truth label and ẑ_ij the predicted score after the sigmoid, so that the loss is evaluated over the entire set of training questions and candidate answers.
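A compact sketch of this attentional reduction, linear fusion, and sigmoid classifier trained with BCE is given below. The use of a single linear scoring head inside the reduction (in place of the multi-layer MSA), the dimensionalities, and the variable names are illustrative assumptions rather than the exact PhoViT settings.

```python
import torch
import torch.nn as nn

class AttentionalReduction(nn.Module):
    """Collapse a sequence of features (n, d) into one vector (d) with
    learned softmax weights, as in Q_f = sum_i alpha_i * Q_w_i."""
    def __init__(self, d_model: int = 512):
        super().__init__()
        self.score = nn.Linear(d_model, 1)  # illustrative scoring head

    def forward(self, X):                    # X: (B, n, d)
        alpha = torch.softmax(self.score(X), dim=1)  # (B, n, 1)
        return (alpha * X).sum(dim=1)                # (B, d)

class FusionClassifier(nn.Module):
    """z = LayerNorm(W_x^T Q_f + W_y^T I_f), then project to answer scores."""
    def __init__(self, d_model: int = 512, d_fused: int = 1024, n_answers: int = 1000):
        super().__init__()
        self.reduce_q = AttentionalReduction(d_model)
        self.reduce_i = AttentionalReduction(d_model)
        self.proj_q = nn.Linear(d_model, d_fused, bias=False)
        self.proj_i = nn.Linear(d_model, d_fused, bias=False)
        self.norm = nn.LayerNorm(d_fused)
        self.classifier = nn.Linear(d_fused, n_answers)

    def forward(self, Qw, Iw):
        z = self.norm(self.proj_q(self.reduce_q(Qw)) + self.proj_i(self.reduce_i(Iw)))
        return self.classifier(z)  # raw scores s; sigmoid is applied inside the loss

# BCE over the N candidate answers, mirroring the loss above.
head = FusionClassifier()
scores = head(torch.randn(2, 44, 512), torch.randn(2, 196, 512))
targets = torch.zeros(2, 1000)
targets[0, 3] = 1.0  # toy ground-truth labels
loss = nn.BCEWithLogitsLoss()(scores, targets)
print(loss.item())
```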
§ EXPERIMENTS

§.§ Comparative baselines

Conducting experiments encompassing the entirety of available methodologies presents logistical complexities. Consequently, we have undertaken the replication of a judiciously selected subset of methods. This subset includes a baseline model that relies exclusively on textual information (LSTM-Q <cit.>). Additionally, we have implemented a straightforward baseline model that combines LSTM with CNN representations, a configuration known to approximate state-of-the-art performance (LSTM+CNN <cit.>). Our subset also encompasses both historical and contemporary state-of-the-art approaches on the ViVQA dataset (ViHieCoAtt <cit.> and BARTPhoBEiT <cit.>). Finally, we have incorporated a novel methodology specifically developed for the Vietnamese language on a recently introduced dataset (ViMCAN <cit.>). Within the framework of the current experiment, the employed methodologies are categorized into four distinct types: Traditional Neural Network, Co-attention, Transformer, and Hybrid. Each typology is characterized by distinctive techniques and methodologies specifically devised for addressing tasks related to Visual Question Answering (VQA).

The Traditional Neural Network typology relies on conventional deep learning methodologies designed for the complexities inherent in VQA tasks. It serves as a foundational approach, focusing on traditional computational models to interpret and process visual and contextual information.

The Co-attention typology, on the other hand, is structured around a co-attention mechanism, establishing a framework that allows parallel focusing and alignment on various segments of the input data, creating a cohesive interplay between the visual and linguistic components.

The Transformer typology is centered around the transformer model, a well-established approach known for its self-attention mechanism, offering a refined method for processing sequential data and enabling the model to prioritize different segments of the input based on contextual relevance.

Lastly, the Hybrid typology represents a novel approach proposed in this study, integrating the salient features of the aforementioned typologies. This integrative method is envisioned to leverage the combined attributes of Traditional Neural Networks, Co-attention mechanisms, and Transformers, aiming to explore potential synergies and thereby improve performance on VQA tasks. The amalgamation of the characteristics of the distinct types is intended to foster a comprehensive understanding and enhanced interpretative capability in VQA applications.

Further detailed expositions of the stated methodologies and their individual characteristics, components, and implementations are provided in the ensuing sections, elucidating the mechanisms and theoretical underpinnings of each approach.

§.§.§ Traditional neural network approach

LSTM-Q <cit.> The LSTM-Q model, interestingly, delivers commendable results on VQA datasets <cit.> and CLEVR <cit.> even in the absence of image-based input. This model interprets the question through learned word embeddings and subsequently employs a word-level LSTM <cit.> for processing. The final hidden state of the LSTM is channeled into a multi-layer perceptron (MLP), which then estimates a probability distribution over potential answers. Given its exclusive dependence on question data, the model can only accommodate biases that are conditional on the question.

CNN+LSTM <cit.> The model accompanying the dataset utilizes a combination of CNN-based image embedding and LSTM-based question embedding.
The embeddings obtained from these two components are merged using point-wise multiplication, and the resulting embeddings are subsequently fed into a multi-layer perceptron classifier to predict the probability distribution of the answer.§.§.§ Co-attention approachViHieCoAtt <cit.>The Alternating Co-attention mechanism functions through an iterative procedure, concentrating selectively on either the question or image features, guided reciprocally by the attributes of the image or question. This recurrent operation permits dynamic and adjustable allocation of attention, aiding in the amalgamation of pertinent information from both modalities. The ViHieCoAtt approach employed PhoW2V embeddings <cit.>, generated through pre-training both 100-dimensional and 300-dimensional syllable embeddings alongside 100-dimensional and 300-dimensional word embeddings, using the Word2Vec skip-gram model <cit.>. This initial training phase was undertaken on expansive Vietnamese text corpora, encompassing both syllable and word levels, amounting to 20GB in total.ViMCAN <cit.> ViMCAN stands as the Vietnamese iteration of Modular Co-attention Networks <cit.>. MCAN employs a layer comprised of MCA. The MCA layer represents a modular amalgamation of two fundamental attention units: the self-attention (SA) unit and the guided-attention (GA) unit, which draw inspiration from the scaled dot-product attention mechanism. MCAN endeavors to concurrently investigate inter-modality and intra-modality relations, yielding commendable outcomes. It introduces a profound Modular Co-Attention Network constructed of Modular Co-Attention (MCA) layers organized in a cascaded manner in depth, allowing for the exploration of nuanced attention dynamics and interactions within and across modalities. §.§.§ Transformer approachBARTPhoBEiT <cit.> BARTPhoBEiT is a our previous novel integration of the BARTPho <cit.> and BEiT-3 <cit.> models, specifically tailored for the Vietnamese language. This innovative model incorporates pre-trained Sequence-to-Sequence and bidirectional encoder representations derived from Image Transformers. The BARTPhoBEiT model's performance is comprehensively evaluated using the ViVQA datasets <cit.>, achieving state-of-the-art (SOTA) results. This evaluation offers valuable insights into the model's effectiveness and suitability for Visual Question Answering (VQA) tasks in the context of the Vietnamese language. In this paper, we extend its capabilities to handle visual reasoning tasks with some minor improvements.§.§.§ Hybrid approachPhoViT In Section <ref>, we delineate the conceptualization and construction of PhoViT, an innovative model we have developed, embodying a synergy of neural network methodologies, co-attention mechanisms, and a framework grounded in transformer-based methodologies. The essence of our approach is the utilization of PhoW2V, a specialized construct of a neural network, designed for the embedding of questions, and the employment of Vision Transformer (ViT) for the intricate embedding of images. Subsequently, our approach incorporates a sophisticated multimodal fusion methodology <cit.>, leveraging stacking attention—commonly referred to as co-attention, to effectually amalgamate informational constituents derived from both images and questions. This meticulous integration is pivotal, enabling the coherent synthesis of multiform information modalities, and contributing to the efficacious convergence of visual and textual elements. 
This novel approach exhibits substantial promise, and its potential is demonstrated through systematic experimental investigations in the context of Visual Question Answering (VQA) and visual reasoning tasks.

§.§ Evaluation metrics

Consistent with the preceding research conducted by <cit.>, it is important to describe the evaluation metrics used to gauge the model's efficacy before analyzing the experimental outcomes. The appraisal in this research encompasses four pivotal performance metrics: F1 score, Precision, Recall, and Accuracy. The F1 score and Accuracy of each individual response are derived from the tokenized forms of the anticipated answer (AA) and the standard answer (SA). Subsequently, the cumulative F1 score (F1_Overall) is obtained by averaging the F1 scores of all queries within a specific subset.

Precision (P) = (SA ∩ AA) / AA

Recall (R) = (SA ∩ AA) / SA

F1 = 2 × P × R / (P + R)

F1_Overall = (1/N) ∑_i=1^N F1_i

Accuracy = AA / SA
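A token-level computation of these quantities can be sketched as follows. Treating answers as multisets of tokens, the fallback choices, and the function names are assumptions made for illustration; this is not the evaluation script released with ViCLEVR.

```python
from collections import Counter

def token_prf(predicted: str, gold: str):
    """Token-level precision, recall, and F1 between an anticipated
    answer (AA) and a standard answer (SA), both already normalized."""
    aa, sa = predicted.split(), gold.split()
    overlap = sum((Counter(aa) & Counter(sa)).values())  # |SA ∩ AA| as multisets
    p = overlap / len(aa) if aa else 0.0
    r = overlap / len(sa) if sa else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1

def overall_f1(pairs):
    """Average F1 over all (predicted, gold) pairs in a subset."""
    scores = [token_prf(pred, gold)[2] for pred, gold in pairs]
    return sum(scores) / len(scores) if scores else 0.0

print(token_prf("hình trụ màu xanh", "hình trụ xanh"))  # (0.75, 1.0, ~0.857)
```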
Drawing inspiration from the work of Nguyen et al. <cit.>, this study also incorporates several evaluative metrics, notably BLEU <cit.>, ROUGE-L <cit.>, and METEOR <cit.>. These metrics have been carefully selected to ensure a rigorous and comprehensive assessment in our experimental framework.

BLEU <cit.> BLEU (Bilingual Evaluation Understudy) is a widely acclaimed metric in the realm of natural language processing and computational linguistics, employed to assess the correspondence between machine-generated text and a set of reference texts. This metric is crucial as it quantitatively evaluates the coherency, relevance, and alignment of the generated text in comparison to the reference, providing an objective measure of the model's performance in generating linguistically and contextually accurate text. The predominant emphasis of this metric is on the precision component of the measurement. The BLEU metric is rooted in a pair of pivotal observations: first, the frequency of an n-gram within a hypothesis (hypo) ought not to surpass its frequency within the reference (ref); second, any hypo that is shorter than its corresponding ref should be subjected to a diminishing weighting factor (designated as a penalty weight). In more precise terms, the score of a hypo, given its associated ref, can be represented as:

score_token = Count_clip(token) / Count(token)

Subsequently, using this equation, the cumulative score over all hypos within a dataset is expressed as:

p_n = ∑_h ∈ hypothesis ∑_token ∈ h Count_clip(token) / ∑_h ∈ hypothesis ∑_token ∈ h Count(token)

Herein, n signifies the order of the selected n-gram. While the equation for p_n inherently addresses scenarios where the length of a hypo exceeds that of its ref, there remains the contingency wherein the hypo's length is inferior to its ref counterpart. Addressing this, if c represents the collective length of all hypos within the dataset and r denotes the total length of all refs in the dataset, then the penalty weight, applied to hypos shorter than their refs, is formulated as:

BP = e^(1 - r/c)

It is self-evident that BP equates to 1 when c > r. Conclusively, the resultant BLEU score is obtained through the equation:

log BLEU = min(1 - r/c, 0) + ∑_n=1^N w_n log p_n

ROUGE-L <cit.> ROUGE (Recall-Oriented Understudy for Gisting Evaluation) is a set of evaluation metrics commonly used to measure the quality of summaries by comparing them to reference summaries. Among the several variants of the ROUGE metric, ROUGE-L (Longest Common Subsequence) is one of the most frequently employed. Ganesan et al. <cit.> delineated the measures of recall, denoted as R, and precision, denoted as P, employing the Longest Common Subsequence (LCS) between two entities: hypo and ref. The recall, denoted R_LCS, is formulated as the ratio of the LCS of hypo and ref to m, which signifies the length of ref:

R_LCS = LCS(hypo, ref) / m

Conversely, the precision, denoted P_LCS, is given by the proportion of the LCS of hypo and ref to n, where n characterizes the length of hypo:

P_LCS = LCS(hypo, ref) / n

Subsequently, the ROUGE-L metric is articulated as a combination of the aforementioned recall and precision values:

ROUGE-L = (1 + β^2) R_LCS P_LCS / (R_LCS + β^2 P_LCS)

METEOR <cit.> In the realm of machine translation evaluation, both BLEU and ROUGE employ n-gram tokens as their foundational components for token definition. Contrarily, METEOR (Metric for Evaluation of Translation with Explicit ORdering) posits that there exist instances wherein interchanging the positions of n-gram tokens doesn't necessarily alter the overall semantic essence of the sentence. Nonetheless, such configurations tend to be penalized with diminished scores under the BLEU and ROUGE metrics. Addressing this predicament, Banerjee et al. introduced the notion of alignment between the hypothesis (hypo) and the reference (ref). In this context, alignments are construed as a collection of mappings, with each mapping representing a distinct association between tokens present in both the hypo and the ref. It is pivotal to underscore that, within this framework, a token is characterized as a 1-gram entity. The metric considers various linguistic phenomena, including word-to-word matches, stemming, synonymy, paraphrasing, and word order. The METEOR score is computed using precision, recall, and a harmonic mean of these, with a penalty factor for word order differences. For any given hypothesis (hypo) and reference (ref), numerous alignments might be discerned. With this context established, the precision P derived from the hypo and ref based on their corresponding alignment, where m is the number of mapped unigrams and w_h the number of unigrams in the hypo, is articulated as:

P = m / w_h

Meanwhile, the recall R in relation to the hypo and ref, with w_r the number of unigrams in the ref, is formulated as:

R = m / w_r

Subsequently, their combination is expressed by the F-measure:

F_mean = 10 P R / (R + 9 P)

Drawing parallels to the BLEU metric, METEOR introduces a penalty factor for a hypo whose word order deviates from that of its corresponding ref, based on the mutual tokens present in both hypo and ref. In this regard, the penalty weight p is delineated as:

p = 0.5 × (c / um)^3

Here, c denotes the number of chunks (contiguous runs of mapped unigrams) shared between the hypo and its ref, while um signifies the total number of mapped unigrams across the hypo and ref.
Integrating the penalty weight with the combination of precision P and recall R, the METEOR score of a hypo with respect to its ref is defined as:

M = F_mean × (1 - p)

§.§ Experimental Results

The results presented in <ref> show that our proposed model surpasses all baseline models in performance. The baselines consist of two models: a "blind" LSTM model that only has access to the questions, and an LSTM+CNN model employing a neural network approach. These baselines achieve relatively low results, ranging from 20% to 21.7%. Specifically, the LSTM model exhibits a success rate of only 20.7% for open query questions and performs only slightly above chance for binary question types. In addition to the baseline models, our evaluation encompasses two contemporary models, ViHieCoAtt and ViMCAN, which leverage co-attention mechanisms and exhibit proficiency on the ViVQA dataset. Furthermore, we present a novel model, denoted as PhoViT, which constitutes a pioneering visual reasoning approach tailored specifically for the Vietnamese language. Detailed elaboration on PhoViT can be found in <ref>.

In order to discern the underlying factors contributing to the superior performance of our proposed model compared to others, a comprehensive analysis is conducted through a series of ablation studies. These studies draw upon the findings presented in <ref>, <ref>, and <ref> of the following subsections as a foundation for further investigation.

§.§ Analysis by question category

We can employ the programmatic representation of questions to assess the model's performance across various forms of reasoning. Initially, we assess the model's proficiency in handling each distinct question type, delineated by the outermost function in the program. The results are visualized in <ref>, and a more in-depth analysis of these outcomes is provided in the subsequent discussion.

Count Counting questions inquire about the quantity of objects that meet specific criteria (e.g., "Có bao nhiêu vật trụ màu xanh?" ("How many blue cylinders are there?")). Given that images contain varying numbers of objects ranging from three to ten, and counting questions pertain to subsets of these objects, achieving a uniform distribution of answers is a formidable challenge. In the context of counting questions, both the "blind" LSTM and the LSTM+CNN models achieve accuracies that are nearly 0%. This outcome arises from the fact that these models primarily produce responses of "Yes" and "No". This pattern of results suggests that the dataset exhibits minimal question-conditional bias for counting-related inquiries. Our rejection sampler approach strives to encourage a more uniform answer distribution for these questions without imposing it as a strict requirement. Consequently, this approach introduces a bias that is contingent on the nature of the question, as evidenced by the accuracy rates of 28% and 38% achieved by ViHieCoAtt and ViMCAN. Intriguingly, our two newly proposed models, BARTPhoBEiT and PhoViT, perform only comparably to the existing ones, indicating that the Transformer features contribute limited information relevant to the counting task. The BARTPhoBEiT model exhibits a slight improvement in performance, but its absolute accuracy remains modest at 41%.

Color Questions related to color seek to acquire information regarding the specific object's chromatic characteristics (e.g., "Màu sắc của khối lập phương phía trên vật tròn màu xanh da trời là gì?"
("What is the color of the cubic object above the sky-blue circular one?")). The LSTM, LSTM+CNN, and VieHieCoAtt models exhibit a notable decline in performance, yielding an approximate accuracy of 0% when confronted with this specific question type. This observation can be attributed to the inherent complexity of these questions within our dataset, often intertwined with other question types, necessitating a more intricate chain of logical reasoning for accurate prediction.In contrast, our BARTPhoBEiT and PhoViT models consistently outperform other methodologies. This superior performance can be attributed to the image embedding capabilities of these models, which rely on the Vision Transformer (ViT) architecture, thereby enhancing their capacity to extract and discern object colors with superior precision compared to alternative techniques. However, despite this notable achievement, the overall accuracy of these models remains marginally below the 25% threshold, underscoring the formidable challenge presented by our dataset, even when employing state-of-the-art Transformer-based approaches.Comparison Comparison queries seek to ascertain whether two entities possess an equivalent quantitative magnitude with regard to a particular characteristic (e.g., "Có phải vật hình trụ lớn hơn vật màu xanh không?" ("Is the cylinder larger than the blue object?")). The exclusively acceptable responses to such inquiries are confined to the binary options of "yes" and "no." In the domain of this particular question type, LSTM-based models exhibit superior performance compared to alternative techniques, primarily owing to the inherent strengths of LSTM models in providing precise "Yes" and "No" responses. Nevertheless, our proposed methodologies yield results slightly lower, with only a 1% discrepancy. This outcome serves as a testament to the general competence of our proposed model in effectively addressing this specific question category.Size Questions regarding size focus on ascertaining whether an object possesses greater or lesser dimensions (e.g., "Có một khối trụ phía trên vật hình vuông bên trái vật hình tròn, kích thước của nó như thế nào?" ("There is a cylindrical object above the square-shaped object to the left of the circular object; what are its size?")). Within the category of this particular question type, our BARTPhoBEiT and PhoViT models exhibit superior performance, achieving an impressive accuracy rate of nearly 55%. It's worth noting that questions of this type in our dataset often involve a blend of attributes, such as "material" and "color," necessitating a more intricate chain of reasoning compared to questions in other datasets. Consequently, the transformer-based techniques manifest their effectiveness in addressing these multifaceted questions, surpassing traditional approaches.Furthermore, the utilization of a multimodal fusion layer in conjunction with deep stacked and multiway transformers underscores its effectiveness in enhancing the overall performance and accuracy of our models in handling such complex question types.Material Material inquiries seek information pertaining to the substance or composition of an object, such as whether it is constructed from specific materials (e.g., "Chất liệu của vật màu xanh lớn kế bên vật trụ màu vàng là gì?" ("What is the material of the large green object next to the yellow cylinder?")). The outcomes for this specific question type closely resemble those of questions related to size. 
In this category, our proposed models consistently attain state-of-the-art performance. ViMCAN, while displaying commendable results in this context, owes its success to the integration of a deep fusion layer for answer prediction. Nevertheless, it falls short of surpassing our results; the distinguishing factor lies in the classifier layer. Notably, our models employ a vision-language Feed-Forward Network (FFN), a method superior to the traditional FFN, thereby yielding more favorable outcomes. Shape Shape questions inquire into the geometric configuration of an object, such as whether it conforms to specific shapes (e.g., "Hình dạng của vật kim loại màu tím nhỏ kế bên vật tròn màu xanh lá cây là gì?" ("What is the shape of the small purple metal object next to the green circular one?")). Within this particular question category, our PhoViT model demonstrates a significant and notable advantage over all other models. This enhanced performance can be attributed to the innovative fusion of an attention mechanism with the Transformer-based technique, which empowers PhoViT to excel in addressing these questions effectively. It is worth noting that other models that incorporate attention mechanisms also exhibit improved performance in this context. This improvement can be attributed to the attention mechanism's inherent capacity to facilitate a focused examination of the target object, thereby aiding in the identification of its specific shape. However, our PhoViT model, with its unique combination of attention mechanisms and Transformer-based techniques, stands out as the top-performing solution in this question category. §.§ Analysis by linguistic question type In the context of our dataset, the data can be categorized into four primary types: "What," "How," "Yes/No," and "Other," driven by the inherent characteristics of the associated images and data. In the course of this experimentation, we embark on a comprehensive research analysis using six distinct VQA methodologies applied to these categories. The objective is to gain a more profound comprehension of the implications and impacts inherent in this categorization. <ref> illustrates our investigation into this particular question category. Within our dataset, questions beginning with "What" exhibit a noteworthy diversity in terms of question phrasing and tend to be longer, resulting in heightened complexity for these question types. Similarly, questions commencing with "How" pose a considerable challenge, necessitating the model's ability to comprehensively grasp the textual context in order to furnish precise responses. This category encompasses variations such as "how many" and "how much," which introduce additional intricacies in the Vietnamese language due to the diverse linguistic structures employed. Conversely, "Yes/No" questions inherently offer limited information for answer retrieval, and as a consequence, most models demonstrate adept performance when tackling queries of this nature. §.§ Analysis by question size For the sake of facilitating a comprehensive analysis, we partition questions into distinct categories determined by their length, specifically the overall count of words they encompass. To be precise, the categorization of questions is delineated in <ref>, and a minimal sketch of this bucketing procedure is given below.
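As an illustration of the length-based grouping just described, the following Python sketch buckets questions by word count and reports per-bucket accuracy. The bucket boundaries and the dictionary keys are placeholders chosen for this example; the actual grouping used in our analysis is the one defined in the referenced table.

```python
from collections import defaultdict

def bucket_by_length(word_count, boundaries=(6, 10, 14)):
    # Hypothetical boundaries: <=6, 7-10, 11-14, >14 words
    for i, upper in enumerate(boundaries):
        if word_count <= upper:
            return i
    return len(boundaries)

def accuracy_per_bucket(samples):
    # samples: iterable of dicts with "question", "predicted", "answer" keys (assumed format)
    correct, total = defaultdict(int), defaultdict(int)
    for s in samples:
        b = bucket_by_length(len(s["question"].split()))
        total[b] += 1
        correct[b] += int(s["predicted"] == s["answer"])
    return {b: correct[b] / total[b] for b in sorted(total)}
```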
Subsequently, we subject both baseline models and our novel proposed methods to evaluation within each respective question group, thereby extending this assessment to encompass both questions and their corresponding answers as per this categorization. In its initial phase, our investigation entails a comprehensive examination of the outcomes achieved by the baseline models and the proposed methodologies across the aforementioned question groups. The particulars of this evaluation are expounded upon in <ref>.From a conceptual perspective, it is reasonable to anticipate that longer inquiries would entail higher levels of complexity, as they inherently involve a more extensive sequence of logical steps. Nevertheless, it is of significance to underscore that a substantial proportion of questions exhibit a noteworthy ability to yield accurate responses, even in instances where specific subtasks remain unresolved. This phenomenon finds illustration in Figure <ref>, where the successful resolution of the posed question does not hinge on a precise identification of the particular large blue cylinder. This observation is grounded in the established understanding that objects prominently positioned to the left of a cylinder inherently belong to the category of cylinders.Notably, the question depicted in Figure <ref> defies classification as degenerate, given that its entire construct plays a vital role in disambiguating references to distinct objects—namely, two blue cylinders and two rubber cylinders. Despite this intricate nature, the question manifests a relatively compact effective size due to its capacity to be addressed with precision without the necessity to fully unravel the mentioned references.A discernible trend emerges, wherein the error rate exhibited by all models exhibits an upward trajectory concurrent with an expansion in question size. This discernible pattern underscores the challenges encountered by models when grappling with intricate reasoning pathways intrinsic to more extended questions. § CONCLUSION AND FUTURE WORK In this paper, we introduced the ViCLEVR dataset, designed for visual reasoning and visual question answering in Vietnamese. The dataset generation process was elaborated, and baseline experiments were conducted, along with the introduction of new measures to gain deeper insights into the behavior and performance of models. We believe that this benchmark will serve as a valuable resource for advancing research in Vietnamese VQA by promoting more integrated approaches that intertwine visual reasoning and visual question answering, two thriving fields that have often been explored independently.Moreover, we presented PhoViT, a new hybrid multimodal model, and demonstrated its robust performance on multimodal reasoning tasks using the proposed dataset, showcasing its potential in the Vietnamese language. We firmly believe that the ViCLEVR dataset and PhoViT will inspire and facilitate the development of more compositional, interpretable, and effective reasoning models, thus propelling research in Vietnamese scene understanding and visual question answering to new heights.Furthermore, it is worth noting that the ViCLEVR dataset, despite being the inaugural visual reasoning dataset in the Vietnamese language, is relatively modest in size when juxtaposed with its English-language visual reasoning counterparts. In our forthcoming research endeavors, we intend to augment the ViCLEVR dataset by incorporating a larger volume of images and question-answer pairs. 
Additionally, we plan to enrich the dataset by introducing a more diverse set of answers and increasing the answer length for questions. This expansion will enable a more comprehensive evaluation of visual reasoning responses. Moreover, we have strategic plans to extend the ViCLEVR dataset into a multilingual, low-resource language Visual Question Answering (VQA) dataset. This initiative is aimed at providing a valuable resource for research in multilingual VQA, encompassing the Vietnamese language as well. § SEVERAL INSTANCES OF VICLEVR ON QUESTION CATEGORY. In <ref>, we present several exemplar instances from the ViCLEVR dataset, judiciously selected to illustrate the manner in which our queries harness question category factors pertinent to visible objects. Concurrently, we exhibit the corresponding responses, which encompass terminologies representative of such distinctive factors. These instances delineate the intricate interplay between categorical elements and their visible counterparts, serving to elucidate the multidimensional nature of the interrelations underpinning the queries and their respective resolutions within the framework of visual reasoning. § SEVERAL INSTANCES OF VICLEVR ON LINGUISTIC QUESTION TYPE. Within <ref>, multiple exemplary instances are displayed from the ViCLEVR dataset. These have been meticulously chosen to delineate the methodologies whereby our queries employ linguistic question types, associating them with factors intrinsic to visible entities. Simultaneously, the allied responses are showcased, incorporating terminologies that are indicative of such unique factors. These exemplifications illuminate the sophisticated synergy between categorical constructs and their perceptible equivalents, elucidating the multi-faceted relational dynamics inherent to the inquiries and their concomitant resolutions within the paradigm of visual reasoning. Such instances afford insight into the nuanced interdependencies that characterize the interface between linguistic articulation and visual perception, reflecting the complex, integrative nature of cognitive processing within this domain.
§ INTRODUCTION §.§ Future of Moore's Law In 1965, Gordon Moore, one of Intel's co-founders, formulated in <cit.> his now famous law that states that the number of transistors per integrated circuit (IC) doubles every 18 to 24 months. This miracle has been made possible over the last 50+ years thanks to an aggressive scaling of the dimensions of silicon-based metal-oxide-semiconductor field-effect transistors (MOSFETs), as reviewed in <cit.>. Till 2003, this miniaturization followed Dennard's scaling law (<cit.>), which consisted in reducing the spatial dimensions (width, gate length, oxide thickness) and power supply of every new MOSFET generation by 30%. In other words, these quantities were multiplied by a factor of 0.7× from one generation to the other. As a consequence, the power density of ICs stayed constant over the years, while their performance kept increasing, driven by the shortening of the transistor gate length. Dennard's scaling stopped however in 2003 at the so-called 130 nm technology node (TN) because the supply voltage of transistors could no longer be reduced at the same pace as their dimensions. The sub-threshold slope (SS), which indicates how rapidly the electrical current of logic switches can be increased between their OFF and ON states, explains this phenomenon. In conventional MOSFETs, it is limited to 60 mV/dec at room temperature: the gate voltage must be swept by at least 60 mV to vary the current by one order of magnitude. The impossibility to push SS below this limit in MOSFETs forced the semiconductor industry to maintain relatively large supply voltages (above 1 V), thus leading to significant increases of the power and heat dissipation of electronic devices, see <cit.>. At the circuit level, the end of Dennard's law could be partly compensated by decreasing the clock frequency of ICs to reduce the power dissipation and by combining multiple cores together with shared memory to augment the computational capabilities. This was the beginning of the “multicore crisis”, an on-going era of energy-efficient, parallel, but sequentially slower multicore computers compared to the beginning of the 2000's. At the device level, from 2003 onwards, it was observed that “simply” scaling the size of transistors was no longer sufficient to enhance their operation, in particular their switching speed. Since then, different technology boosters have therefore been introduced to ensure that the performance improvements historically brought by Moore's scaling law could continue, as summarized in <cit.>: * Strain engineering as in <cit.>: from the 90 nm TN, strain has been used in Si MOSFETs to alter the bandstructure of electrons and holes, with an increase of their channel mobility as a result; * High-κ oxide layers as in <cit.>: at the 45 nm TN, SiO_2, the native oxide of Si, was replaced by high-κ dielectric layers such as HfO_2 that can provide larger gate capacitances together with lower leakage currents; * 3-D FinFETs as in <cit.>: till the 32 nm TN, transistors were 2-D planar structures with a single-gate contact. In 2011, they became 3-D FinFETs with a triple-gate configuration, offering a higher immunity against short-channel effects, in particular source-to-drain tunneling. The benefits of multicore architectures, strain, high-κ dielectrics, and FinFETs are best visible in the electronic products that we use on a daily basis, be they cell phones, tablets, or laptops.
Quantitatively, these benefits can be assessed by considering the Top500 list of supercomputers available in <cit.>. This list ranks all participating machines based on their performance when running the LINPACK benchmark, see <cit.>. What is measured is the number of floating point operations (Flop) that are processed per second (Flop/s) to solve a linear system of equations on the hardware of interest. Back in 1993 (first Top 500 list), the largest supercomputer in the world achieved 60×10^9 Flop/s, i.e. 60 GFlop/s. Today's cell phones, with a peak performance larger than 10^12 Flop/s (1 TFlop/s), are about 15 to 20 times more powerful than this machine. For its part, the current largest supercomputer in the world, as of November 2020, reaches 440×10^15 Flop/s (440 PFlop/s), according to <cit.>. While Si FinFETs have established themselves as the transistors of reference since 2011, they might lose this position in the future, when their gate length shrinks below 15 nm. One of the main challenges associated with the scaling of such devices is illustrated in Fig. <ref> using a simplified device structure with realistic dimensions, as schematized in sub-plot (a). At a gate length L_G=50 nm, a fin width of 10 nm together with a fin height of 40 nm yield a subthreshold slope SS=60.8 mV/dec at room temperature, close to the theoretical limit (sub-plot (b)). If the cross section dimensions are kept the same, but the gate length reduced from 50 to 15 nm, SS rapidly explodes, reaching 160 mV/dec for the shortest considered L_G, as can be seen in sub-plot (c). The observed deterioration is caused by a loss of electrostatic control, which can be related to a parameter called “natural channel length” λ_N and defined in <cit.> as λ_N=√(ϵ_Si T_Si T_ox/(N ϵ_ox)), where ϵ_Si, ϵ_ox, N, T_Si, and T_ox are the relative permittivity of Si, that of the oxide layer, the number of gate contacts, the thickness of the Si channel, and that of the oxide surrounding it, respectively. Short-channel effects can be prevented if the gate length L_G is at least 6× larger than λ_N. Hence, λ_N should be made as small as possible to conveniently scale transistors (a short numerical illustration of this criterion is given at the end of this subsection). In FinFETs, the number of gate contacts, N, is equal to 3. If Si remains the channel material and HfO_2 the dielectric of choice, only T_Si and T_ox can be made thinner. Because decreasing T_ox would ultimately lead to high OFF-state gate leakage currents, the only viable solution to scale the L_G of FinFETs appears to be a reduction of T_Si, which corresponds to the fin width w. This is what has been done in Fig. <ref>(d). When pushing w down to 1 nm, a SS value of ≃60 mV/dec can be achieved, which would readily allow to push the gate length of FinFETs (well) below 15 nm, while maintaining a good electrostatic integrity. However, reliably fabricating fin structures with a width of 1 nm only is extremely demanding. Surface roughness, single impurities, or interface traps are all expected to play a non-negligible role at this scale and to negatively impact the Si channel mobility, see <cit.>. Besides these effects, device-to-device variability might become a real issue for ultra-scaled FinFET sizes.
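To make the scale-length criterion above more tangible, the short Python sketch below evaluates λ_N and the corresponding minimum gate length L_G ≈ 6λ_N for a few representative geometries. The numerical values (HfO_2 with ϵ_R=20, a 3 nm oxide, and the chosen channel thicknesses) are illustrative assumptions rather than the exact parameters of the devices simulated in this Chapter.

```python
def natural_length(eps_ch, eps_ox, n_gates, t_ch, t_ox):
    """Natural channel length lambda_N = sqrt(eps_ch * t_ch * t_ox / (N * eps_ox))."""
    return (eps_ch * t_ch * t_ox / (n_gates * eps_ox)) ** 0.5

# Illustrative examples: Si channel (eps_r ~ 11.7) with HfO2 (eps_r ~ 20), 3 nm oxide
for n_gates, t_ch in [(1, 10e-9), (3, 10e-9), (3, 1e-9)]:
    lam = natural_length(11.7, 20.0, n_gates, t_ch, 3e-9)
    print(f"N={n_gates}, T_ch={t_ch*1e9:.0f} nm -> lambda_N={lam*1e9:.2f} nm, "
          f"min L_G ~ {6*lam*1e9:.1f} nm")
```

Consistent with the discussion above, only the triple-gate geometry with an ultra-thin (1 nm) channel brings the minimum gate length well below 15 nm in this rough estimate.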
§.§ The Potential of 2-D Materials Instead of thinning the width of semiconductors that normally have a 3-D unit cell, 2-D materials with a naturally flat atomic structure might be more promising to create tomorrow's transistors with a gate length of 15 nm and below (<cit.>). Such compounds are characterized by an almost perfect electrostatic control, no surface roughness, and no dangling bonds. Graphene, a carbon monolayer with a honeycomb lattice, is an excellent example of a 2-D material. Its existence was confirmed experimentally in 2004 through mechanical exfoliation, as was the dependence of its conductance on an external electric field, see <cit.>. This finding motivated the fabrication of graphene field-effect transistors (GFETs), as in <cit.>, but the absence of a band gap in graphene does not allow these devices to be fully switched off, as indicated in Fig. <ref>. A gapless Dirac cone, where the conduction and valence bands touch each other, can be noticed in the bandstructure plot, at the K-point of the Brillouin Zone. Because of it, the current flow from source to drain can never be completely blocked by the gate-modulated potential barrier, as in Si or III-V MOSFETs. A band gap can be opened up in graphene if it is patterned into a quasi 1-D nanoribbon, either lithographically as in <cit.> or chemically as in <cit.>. While the former approach is more straightforward to realize, it tends to produce graphene nanoribbons (GNRs) wider than 20 nm whose transport properties suffer from detrimental line edge roughness. The width of chemically-derived GNRs can be reduced below 1 nm with an excellent control of their edges and reproducible electrical characteristics. Transistors made of such 1-D nanostructures can exhibit ON/OFF current ratios greater than 10^6 as well as ON-state currents in the order of 2000 μA/μm (<cit.>). The simulation of a N7 GNR field-effect transistor (GNRFET) with a width w=0.74 nm is shown in Fig. <ref> as an example. The investigated logic switch has a gate length L_G=15 nm. It provides a steep sub-threshold slope SS=67.5 mV/dec and a high ON-current I_ON=6.87 mA/μm at an OFF-current I_OFF=74 nA/μm and supply voltage V_DD=0.7 V. To deliver such a performance, the source and drain extensions of the GNRFET were doped with a donor concentration of N_D≈10^13 cm^-2, which might not be attainable experimentally. Furthermore, it is rather difficult to obtain low contact resistances in GNRFETs and the mass production of ultra-narrow structures is very tedious. Needed are materials that present themselves in the form of relatively easily manufacturable large-scale flakes, as graphene, but that display a band gap compatible with logic applications. Monolayers of transition metal dichalcogenides (TMD) of MX_2 composition, where M is a transition metal and X a chalcogen, fulfill these conditions with their thickness below 1 nm, band gap between 1 and 2 eV, high carrier mobility, and availability as large flakes (<cit.>). They are therefore often seen as serious contenders to continue Moore's scaling law in the “more-than-Moore” category (<cit.>). In TMDs, each layer is composed of M atoms surrounded by two X atoms. The inter-atomic bonds within each layer are covalent, whereas van der Waals forces maintain adjacent layers together in few-layer structures. TMDs can adopt different crystal lattices (and symmetry groups), from the usually semiconducting 2H (hexagonal) phase to the typically metallic 1T (trigonal) or 1T' (modified trigonal) phase, going through other phases such as 3R (rhombohedral) or 2M (monoclinic).
One lattice is generally more stable than the others, but a phase transition can be triggered by external fields, strain, or doping, see <cit.>. Among all existing TMD materials and configurations, single-layer MoS_2 was exfoliated for the first time in 1986 with a scotch tape (<cit.>), but it was only after the first experimental demonstration in 2011 of a properly working 2H single-gate monolayer MoS_2 transistor with SS=74 mV/dec and an ON/OFF current ratio larger than 10^8 in <cit.> that TMDs started to receive wide attention from the scientific community. Since then, transistors with a WSe_2 (<cit.>), WS_2 (<cit.>), MoTe_2 (<cit.>), MoSe_2 (<cit.>), ReS_2 (<cit.>), HfSe_2, or ZrSe_2 (<cit.>) channel have been reported as well, to cite a few examples. TMD monolayers were used in most cases, except for the MoTe_2 (down to 6L), HfSe_2, and ZrSe_2 (down to 3L) ones, which relied on few-layer structures. Among these 2-D semiconductors, some are better suited to obtain n-type transistors, e.g. MoS_2, ReS_2, HfSe_2, or ZrSe_2, others lend themselves more naturally to p-type devices (WSe_2 and MoTe_2), whereas MoSe_2 is rather ambipolar once crystal defects have been repaired. The type of each TMD is primarily determined by two effects: the metallic contacts attached to it, which affect the Schottky barrier height at the metal-semiconductor interfaces, and the dielectric environment around it, which can transfer electrons or holes to the channel. The complementary logic at the core of electronic circuits requires both n- and p-type transistors. Hence, the polarity of 2-D TMDs should be modifiable, which can be done through doping. For instance, a potassium-doped WSe_2 channel becomes n-type. It can then be combined with a p-type WSe_2 transistor to give rise to a fully 2-D inverter as in <cit.>. Doping TMDs and more generally 2-D materials remains however a difficult task. Several options exist for that. A back-gate can be inserted to modulate the band edges of the source and drain extensions with respect to the contact Fermi levels (electrostatically-induced doping). Alternatively, ion implantation (<cit.>), ion intercalation (<cit.>), or charge transfer from, e.g. ionic liquid (<cit.>) can be utilized. Nevertheless, none of these approaches currently allows reaching doping concentrations as high as in Si. Apart from doping, TMDs face several other challenges that prevent device engineers from accessing their intrinsic performance and comparing it to that of Si FinFETs. One of them is the obtention of high-quality, large-scale monolayers. Already in the 1960's, mechanical exfoliation via scotch tape was applied to isolate layered materials (<cit.>). Only small flakes can be gained with this rather straightforward technique, thus hindering mass production of TMD transistors. Lately, impressive progress has been made with the help of chemical vapor deposition (CVD) (<cit.>) and metal-organic CVD (<cit.>), which has been shown to produce large monolayer areas with high carrier mobilities. High-quality TMD flakes in the μm^2 range can also be generated with atomic layer deposition (ALD) (<cit.>). Other challenges related to the fabrication of high-performance, TMD-based transistors will be discussed in Section <ref>.
As addressing most of them will necessitate significant technology improvements, device simulation can be used in the meantime to provide invaluable insights into the physics of these logic switches, predict their performance limit under ideal conditions, and support the on-going experimental activity. This Chapter will first introduce a suitable modeling approach for that in Section <ref> before presenting results of recently undertaken theoretical investigations in Section <ref>. The latter are not restricted to TMDs, but will also explore novel 2-D semiconductors that could become the channel material of future, ultra-scaled n- and p-type transistors. § MODELING APPROACH §.§ Requirements and State-of-the-Art A critical ingredient of device simulation, regardless of the chosen approach, is the bandstructure of the different materials that constitute the domain of interest. The bandstructure enters either directly or indirectly into the physical model, via the Hamiltonian matrix H or the effective mass m^* of the different components, respectively. These quantities are connected through Schrödinger's equation and the curvature of the resulting band dispersion: H(k)·ψ_k(r)=E(k)ψ_k(r), 1/m^*=(1/ħ^2)·d^2E(k)/dk^2|_dE/dk=0, where k refers to the electron wave vector, ψ_k(r) to the wave function of the system at position r and wave vector k, E(k) to the corresponding k-dependent band dispersion, and ħ to Planck's reduced constant. The effective masses are extracted at band extrema that are characterized by the condition dE/dk=0 (a small numerical illustration of this extraction is sketched below). In Figs. <ref>, <ref>, and <ref>, the Hamiltonian matrices of the investigated FinFET, GFET, and GNRFET were constructed in the effective mass approximation (EMA) for Si and in the single-p_z orbital scheme of <cit.> for graphene. Both models are computationally very attractive, pretty accurate for the systems mentioned above, but unfortunately not ideal for most 2-D materials, as will be explained in the following paragraphs. The bandstructure of selected TMDs is shown in Fig. <ref>, together with their band gap and effective masses. All these quantities were calculated with density-functional theory (DFT), as proposed by <cit.>, an ab initio (from first-principles) method that does not require any input parameters, except for the initial atomic unit cell (AUC) of the material under consideration. This provided AUC is first relaxed so that all atoms occupy stable positions. The corresponding electron density is then self-consistently computed with Poisson's equation. Finally, all electronic bands are extracted from the obtained DFT Hamiltonian H_DFT. Although very accurate, DFT still relies on several approximations, among them the exchange-correlation functional, here the PBE one of <cit.>. What can be clearly seen in Fig. <ref> is that the bandstructures of TMDs exhibit complex features such as multiple valleys separated by a small energy interval, strongly non-parabolic bands, and in some cases band anisotropy (the effective masses depend on the crystal orientation). To capture all these effects, a quantum mechanical simulation approach is absolutely necessary. In other words, a Hamiltonian matrix as in Eq. (<ref>) must be assembled to describe the electronic properties of the desired device. On their side, neither classical nor semi-classical methods can shed light on the physics of ultra-thin 2-D materials where electrons and holes are confined over dimensions below 0.5 nm.
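As an illustration of Eq. (<ref>), i.e. the extraction of m^* from the curvature of E(k) at a band extremum, the following Python sketch fits a parabola around the minimum of a numerically given dispersion. The cosine band used as input is a toy tight-binding example, not a DFT result, and the size of the fitting window is an arbitrary choice.

```python
import numpy as np

HBAR = 1.054571817e-34   # J*s
M0   = 9.1093837015e-31  # kg

def effective_mass(k, E):
    """m* = hbar^2 / (d^2E/dk^2) at the band minimum of a sampled dispersion E(k)."""
    i = np.argmin(E)                       # band extremum, where dE/dk = 0
    sel = slice(max(i - 3, 0), i + 4)      # a few points around the extremum
    c = np.polyfit(k[sel], E[sel], 2)      # E ~ c0*k^2 + c1*k + c2 near the minimum
    return HBAR**2 / (2.0 * c[0])

# Toy dispersion: E(k) = 2t*(1 - cos(k*a)), which behaves like t*a^2*k^2 near k=0
a, t = 2.5e-10, 1.6e-19                    # lattice constant [m], hopping energy [J]
k = np.linspace(-np.pi / a, np.pi / a, 401)
E = 2 * t * (1 - np.cos(k * a))
print(f"m* = {effective_mass(k, E) / M0:.3f} m0")
```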
Despite these shortcomings, drift-diffusion calculations can fairly well reproduce experimental data for large-scale TMD flakes, if the available material parameters are properly calibrated (<cit.>). The first quantum mechanical study of a monolayer MoS_2 transistor was reported in <cit.>: it was shown that this material could outperform Si as logic switch with a sub-15 nm gate length. To come to this conclusion a quantum transport simulator based on the EMA and the Non-equilibrium Green's Function (NEGF) formalism proposed by <cit.> was employed. The functionality of this simulator is similar to the one that produced the data in Fig. <ref>. We would like to emphasize that NEGF is one of the most popular and powerful techniques to examine charge transport in nanoscale devices. The basic NEGF equations will be reviewed in Section <ref>. At about the same time, a simplified solver based on the straightforward top-of-the-barrier model confirmed these results and revealed that other TMDs might be competitive as well (<cit.>). However, because EMA assumes parabolic bands, the satellite conduction band valleys of TMDs could not be properly accounted for in these works. Furthermore, the atomic granularity of single-layer crystals was totally ignored, EMA being continuous.Device simulators implementing a semi-empirical full-band model represents the next level of accuracy and a significant improvement over the EMA as they include more than a single parabolic band. Hence, the k·p as in <cit.> or tight-binding (TB) method as in <cit.> or <cit.> can be used to construct the Hamiltonian matrix of TMDs. As each atom or discretization point is described by a set of N_orb orbitals, the simulation time is in the order of N_orb^3 times longer than with EMA. Such an increase is still acceptable from a computational point of view because the number of neighbors per point, which determines the band width of the Hamiltonian matrix and the size of the numerical blocks to manipulate, usually remains small. On the negative side, both k·p and tight-binding models must be first parameterized, a sometimes tedious operation that is not always successful. Moreover, getting physically meaningful parameters to connect two different materials, e.g. a TMD and the metallic contact attached to it, can be rather complicated. Finally, k·p in its usual forms (4×4 to 8×8) tends to be restricted to one symmetry point in the Brillouin Zone and does not capture the atomic nature of TMDs.Going up the ladder, a DFT solver relying on a localized basis set can be coupled to a quantum transport simulator based on the NEGF formalism (<cit.>). DFT+NEGF ticks all accuracy boxes (atomistic,full-band, ab initio, capable of treating interfaces and defects, ⋯), but the computational burden of such an approach explodes for large structures composed of several hundreds of atoms. On the one hand, more orbitals per atom are needed. On the other hand, the inter-atomic interactions extends over much longer distances than in tight-binding or k·p. In summary, neither H_EMA, H_k· p, H_TB, nor H_DFT Hamiltonian matrices are optimal to simulate the electrical behavior of transistors with a 2-D channel material. §.§ Maximally Localized Wannier Functions (MLWFs) A model that has the same accuracy as DFT and a computational complexity comparable to tight-binding or k·p would be ideal to probe 2-D materials as next-generation logic switches. 
Maximally localized Wannier functions (MLWFs), as introduced by <cit.>, satisfy both conditions, as was first demonstrated in <cit.> for various TMDs. Since this early work, MLWFs have become very popular among the 2-D research community, as in <cit.>, to mention a few relevant examples. The MLWF method is as accurate as the DFT plane-wave (PW) calculations from which it is derived, it requires a small number of basis elements per atom, and the distance over which the MLWFs decay is very short. The numerical treatment of the resulting Hamiltonian matrices, H_MLWF, is therefore facilitated. Thanks to these unique features, MLWFs allow large transistor structures made of thousands of atoms to be simulated within reasonable computational times. They can be seen as a first step towards fully ab initio device investigations. The principle of MLWF-based quantum transport simulations is summarized in Fig. <ref>. The whole process starts with a plane-wave DFT calculation of a primitive unit cell that best represents the geometry of the targeted atomic system. Different tools such as VASP (<cit.>) or Quantum ESPRESSO (<cit.>) can be used for that purpose. The produced eigenenergies and eigenvectors are then transformed into a set of MLWFs with the wannier90 package (<cit.>). The required unitary transformation is exact so that the bandstructures obtained in the PW and MLWF basis sets are theoretically identical, the only difference being that MLWFs only return a sub-set of all PW bands, in the present case those required to evaluate transport properties. Practically, small discrepancies might emerge due to the truncation of long-ranging interactions. They have a limited impact on the results. As illustrations, the bandstructures of MoS_2, MoTe_2, and WSe_2 are plotted in Fig. <ref>, as computed with DFT and after a transformation into MLWFs. Excellent agreement between both data sets can be observed. This PW-to-MLWF conversion gives rise to small Hamiltonian blocks that describe the coupling of the chosen unit cell with itself and with its neighboring cells. Those blocks must be upscaled to form a block tri-diagonal Hamiltonian matrix corresponding to the device to be simulated. Such an upscaling scheme is described in <cit.>, and a minimal version of this block construction is sketched at the end of this subsection. The obtained H_MLWF is perfectly suitable for quantum transport simulations, its increased band width being partly compensated by the fact that fewer orbitals per atom are required. As a consequence, the same numerical algorithms as with tight-binding can still be employed to solve the NEGF equations. A quantum transport solver such as OMEN (see <cit.>) can do that as it has been specifically designed to handle large-scale nanostructures from first-principles (<cit.>). Note that the procedure outlined in Fig. <ref> works for any exchange-correlation functional, e.g. the generalized gradient approximation with PBE parameterization (GGA-PBE) of <cit.>, hybrid functionals (HSE06) of <cit.>, or the GW plus Bethe-Salpeter equation (GW-BSE) of <cit.>. The initial DFT run will be longer with HSE06 or GW, but the time for the transport calculation is not affected by this choice.
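To illustrate the block tri-diagonal construction mentioned above, the sketch below assembles a device Hamiltonian from an on-site block H00 and a nearest-neighbor coupling block H01, repeated over a chosen number of unit cells. The 2×2 blocks are placeholders standing in for actual MLWF matrix elements produced by wannier90; couplings beyond the first neighboring cell, which do occur in practice, would simply enlarge the effective block size.

```python
import numpy as np

def upscale_hamiltonian(h00, h01, n_cells):
    """Assemble a block tri-diagonal device Hamiltonian from unit-cell blocks.

    h00 : intra-cell (on-site) block
    h01 : coupling block between a cell and its right neighbor
    """
    nb = h00.shape[0]
    h_dev = np.zeros((n_cells * nb, n_cells * nb), dtype=complex)
    for i in range(n_cells):
        sl = slice(i * nb, (i + 1) * nb)
        h_dev[sl, sl] = h00
        if i < n_cells - 1:
            sr = slice((i + 1) * nb, (i + 2) * nb)
            h_dev[sl, sr] = h01
            h_dev[sr, sl] = h01.conj().T   # Hermiticity
    return h_dev

# Placeholder 2x2 blocks standing in for MLWF matrix elements
h00 = np.array([[0.0, -1.0], [-1.0, 0.5]])
h01 = np.array([[-0.2, 0.0], [0.0, -0.2]])
H_device = upscale_hamiltonian(h00, h01, n_cells=5)
print(H_device.shape)   # (10, 10)
```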
§.§ Towards Ab Initio Quantum Transport Simulations In the previous Section, the construction of a MLWF-based Hamiltonian matrix was presented. As a last step, H_MLWF should be passed to a quantum transport (QT) solver to perform ab initio device simulations. Various QT methods have proven effective, among them the solution of the Wigner Transport Equation (<cit.>), Pauli's Master Equation (<cit.>), the Quantum Transmitting Boundary Method (QTBM) (<cit.>), and, of course, the Non-equilibrium Green's Function (NEGF) formalism (<cit.>) that was already mentioned above. As all results in Section <ref> have been obtained with NEGF, the equations governing this transport approach will now be briefly introduced, with emphasis on 2-D material applications. To describe electron transport within the NEGF framework, the following non-linear system of equations must be solved: (E-H_MLWF(k_z)-Σ^RB(E,k_z)-Σ^RS(E,k_z))· G^R(E,k_z)=I and G^≷(E,k_z)=G^R(E,k_z)·(Σ^≷ B(E,k_z)+Σ^≷ S(E,k_z))· G^A(E,k_z). In Eq. (<ref>), the G's represent the electron Green's Functions. They depend on the electron energy E and momentum k_z as well as on the MLWF Hamiltonian matrix H_MLWF. The k_z momentum models the flake direction that is orthogonal to the transport axis and that is assumed periodic. The G's can be of four different types, retarded (R), advanced (A), lesser (<), or greater (>). The Σ's refer to the corresponding self-energies, where the superscript B stands for boundary and S for scattering. With Σ^B, the coupling of the simulation domain with contact electrodes is captured. This self-energy can be computed with so-called decimation techniques (<cit.>) or more advanced schemes, for example through eigenvalue problems (<cit.>) or contour integrals, as in <cit.>. Its scattering counterpart Σ^S can include different interaction mechanisms such as electron-phonon, impurity, or interface roughness scattering, see <cit.>. Equation (<ref>) can be efficiently solved with a recursive algorithm that constructs the Green's Functions from one side of the device to the other in two steps, see <cit.>. In case of ballistic transport, i.e. in the absence of interactions with other carriers, impurities, rough surfaces or crystal vibrations, only the retarded Green's function G^R needs to be calculated. With its knowledge, both the density-of-states (DOS) and transmission function (TE) of the considered system can be evaluated, from which the carrier density and the electronic current can be derived as in <cit.>. The DOS and TE of a MoS_2 monolayer are provided in Fig. <ref> under flat band conditions, evaluated at a single momentum point, k_z=0, for both the conduction and valence bands. The role of H_MLWF and Σ^RBL/Σ^RBR, the boundary self-energies, is highlighted as well. As expected under such circumstances, the transmission function displays a step-like behavior and effectively counts the number of bands available in the left and right contacts. Each time a transmission channel turns on, the DOS peaks, followed by an exponential decay that is only interrupted by the next peak. A minimal one-dimensional illustration of this ballistic procedure is sketched below. If scattering should be accounted for, the lesser and greater Green's Functions G^≷ in Eq. (<ref>) must also be computed. These quantities must be solved self-consistently with the scattering self-energies Σ^≷ S as they depend on each other. This has been done for example in <cit.> for single-, double-, and triple-layer MoS_2 where electron-phonon scattering was treated at the ab initio level.
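To give a concrete, self-contained flavor of the ballistic branch of Eq. (<ref>), the Python sketch below computes G^R and the transmission function of a uniform one-dimensional tight-binding chain attached to two semi-infinite leads. The single-orbital chain is of course a drastic simplification of a MLWF device Hamiltonian, and the boundary self-energies are obtained here from the closed-form surface Green's function of a 1-D chain rather than from the decimation or eigenvalue techniques cited above.

```python
import numpy as np

def surface_gf(energy, eps, t, eta=1e-9):
    """Retarded surface Green's function of a semi-infinite 1-D chain (closed form)."""
    z = energy + 1j * eta - eps
    root = np.sqrt(z * z - 4.0 * t * t)
    g = (z - root) / (2.0 * t * t)
    if g.imag > 0:                       # pick the retarded branch, Im(g) <= 0
        g = (z + root) / (2.0 * t * t)
    return g

def transmission(energy, n_sites=20, eps=0.0, t=-1.0, eta=1e-9):
    # Device Hamiltonian: uniform chain with on-site energy eps and hopping t
    H = eps * np.eye(n_sites) + t * (np.eye(n_sites, k=1) + np.eye(n_sites, k=-1))
    # Boundary self-energies only act on the first and last site
    g_s = surface_gf(energy, eps, t)
    sigma_L = np.zeros((n_sites, n_sites), dtype=complex)
    sigma_R = np.zeros((n_sites, n_sites), dtype=complex)
    sigma_L[0, 0] = t * g_s * t
    sigma_R[-1, -1] = t * g_s * t
    # Retarded Green's function: (E - H - Sigma_L - Sigma_R) * G^R = I
    G_R = np.linalg.inv((energy + 1j * eta) * np.eye(n_sites) - H - sigma_L - sigma_R)
    # Broadening functions and Caroli transmission T(E) = Tr[Gamma_L G^R Gamma_R G^A]
    gamma_L = 1j * (sigma_L - sigma_L.conj().T)
    gamma_R = 1j * (sigma_R - sigma_R.conj().T)
    return float(np.real(np.trace(gamma_L @ G_R @ gamma_R @ G_R.conj().T)))

for E in (-1.5, 0.0, 2.5):   # two energies inside the band [-2|t|, 2|t|], one outside
    print(f"E = {E:+.1f}: T(E) = {transmission(E):.3f}")
```

As expected from the step-like behavior discussed above, the transmission of this single-band chain equals one within the band and vanishes outside of it.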
The same approach was extended in <cit.> to model self-heating effects and the formation of local hot spots in various field-effect transistors with a TMD monolayer as channel material.All results presented this Chapter have been obtained in the presence of electron-phonon scattering, but the chosen model relies on a simplified and phenomenological approach with one single phonon energy ħω=40 meV and a scattering strength D_e-ph comprised between 25 and 125 meV (<cit.>):Σ^≶ S(k_z,E)=D^2_e-ph(n_ωG^≶(k_z,E+ħω)+(n_ω+1)G^≶(k_z,E-ħω)).In Eq. (<ref>) n_ω is the phonon's Bose-Einstein distribution function. A dissipative scattering mechanism is needed to avoid a negative differential resistance (NDR) behavior in the I_D-V_DS output characteristics of the 2-D FETs, which has never been experimentally observed at room temperature. NDR originates from the bandstructure of 2-D materials, which often exhibits several narrow bands that cannot propagate if the electrostatic potential undergoes large variations from source to drain (<cit.>). It is an artifact of the ballistic approximation. The inclusion of electron-phonon scattering helps get rid of this non-physical effect by connecting bands that would otherwise be independent from each other (<cit.>). Accounting for the “real” electron-phonon interactions would be more accurate, but gathering the required phononbandstructures and coupling elements is computationally very demanding, as the self-consistent calculation of the scattering self-energies. For all these reasons, the model of Eq. (<ref>) was adopted.Finally, it should be noted that all simulations were performed at room temperature with the metal gate work function adjusted so that the OFF-state current is fixed to 0.1 μA/μm. Perfectly ohmic contacts are assumed (no resistance), except if mentioned otherwise.§ 2-D DEVICE PERFORMANCE ANALYSIS§.§ MoS_2 and other TMDs The first 2-D material under the ab initio microscope is MoS_2, the TMD whose monolayer form was initially shown to provideexcellent transistor characteristics in <cit.>. Before assessing the performance of MoS_2 transistors with respect to other 2-D materials, we would like to underline the importance of the channel thickness. In Fig. <ref>, a simplified device structure is schematized. Its channel is either made of a MoS_2 monolayer, bilayer, or trilayer, whose thickness is approximately 0.6, 1.2, and 1.8 nm, respectively. The corresponding transfer characteristics are shown on the right side of the plot. A MLWF Hamiltonian matrix was constructed for each channel configuration. The ab initio simulations reveal that the monolayer structure has the highest ON-state current (1.15 mA/μm), followed by the bilayer (1.06 mA/μm), and finally the trilayer one (0.73 mA/μm). Normally, the opposite order would be expected as the transport effective mass m_trans decreases as the number of stacked layers increases. However, the gate contact loses part of its control efficiency at larger channel thicknesses, as already demonstrated in Fig. <ref>.This deterioration is best reflected in the SS value of each transistor, which goes from 68.6 mV/dec in the monolayer case to 75.6 mV/dec in the bilayer and 82.8 mV/dec in the trilayer. The gate contact can very well modulate the height of the potential barrier in the layer that is the closest to it, but its influence decreases as carriers are situated away from it. Hence, the benefit of smaller effective masses in few-layer structures is washed out by the poorer electrostatics of thicker channels. 
If we compare these results to those of Fig. <ref>(d), we notice a substantially larger SS than in the FinFET with a 1-nm wide fin, even for the monolayer FET. The presence of a triple-gate in the FinFET case explains the better scalability of this device. Adding a second gate to the 2-D MoS_2 FETs produces the same effect and leads to an almost perfect electrostatic control in mono-, bi- and tri-layer structures down to a gate length L_G=10 nm (<cit.>). Note that in this publication, different DFT models were used than here. MoS_2 has become the most popular TMD, as confirmed by the number of publications dedicated to it, but it is not necessarily the most promising one. Experimentally, no ON-state current larger than 700 μA/μm has ever been reported for monolayer MoS_2 <cit.>. This value was obtained at V_DS= 5 V, V_GS=30 V, and in a device with a gate length L_G=380 nm and an ON/OFF current ratio larger than 10^6. Other TMDs have therefore also been investigated. The proposed ab initio simulation approach can also be applied to them, as shown in Fig. <ref>. The MLWF results perfectly reproduce the plane-wave DFT calculations for all TMDs. Taking advantage of that, the transfer characteristics I_d-V_gs at V_ds=0.7 V of n- and p-type MoSe_2, MoTe_2, WS_2, and WSe_2 field-effect transistors were computed as well. They are displayed in Fig. <ref> and compared to those of MoS_2. All devices have a structure similar to the one in Fig. <ref>(a) with a gate length L_G=15 nm and a TMD monolayer as channel material. First, it can be seen that almost all 2-D FETs have a sub-threshold slope in the order of 70 mV/dec, despite the relatively short L_G and the presence of a single gate contact. There is one exception, WS_2, whose SS is smaller (∼65 mV/dec). This is not a consequence of a better electrostatic control, but of the influence of narrow energy bands. As explained above, they can lead to the presence of NDR as well as to too low SS values, even below 60 mV/dec. A physical parameter called “pass factor” allows one to quantify the importance of these bands, as explained in <cit.>. The electron-phonon scattering model of Eq. (<ref>) is expected to eliminate these artefacts, but for some 2-D materials, e.g. WS_2, it does not fully succeed. Increasing the electron-phonon coupling can improve the situation. However, at the same time, this affects the ON-state current, preventing a fair comparison with other 2-D materials. The ON-state current values are more broadly distributed than the sub-threshold slopes. Generally, it can be observed that the W-based TMDs perform better than the Mo-based ones, due to lower effective masses and therefore faster carriers, as illustrated in Fig. <ref>. This trend is not altered if electron-phonon scattering is included because W atoms have a larger mass than Mo's so that their oscillation amplitude is smaller, as is their probability to interact with free carriers (<cit.>). Another ab initio theoretical study came to a different conclusion, predicting that the mobility of WS_2 would be significantly impacted by electron-phonon scattering (<cit.>). The small energy difference between the conduction band minimum of this material and its satellite valleys is the reason behind this discrepancy. As this energy separation strongly depends on the choice of the DFT functional, different calculations might have different outcomes. The I_ON and SS of all simulated 2-D TMD FETs are summarized in Table <ref>; the way these two figures of merit are extracted from a transfer characteristic is illustrated below.
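For completeness, the short sketch below shows how the two figures of merit used throughout this comparison, SS and I_ON, can be extracted from a simulated I_D-V_GS sweep: SS as the inverse slope of log10(I_D) in the sub-threshold region, and I_ON as the current at a gate voltage located one supply voltage V_DD above the OFF-state point. The synthetic transfer characteristic and the chosen current thresholds are illustrative only, not data from the simulations discussed here.

```python
import numpy as np

def subthreshold_slope(v_gs, i_d, i_low=1e-9, i_high=1e-7):
    """Average SS [mV/dec] between two sub-threshold current levels [A/um]."""
    log_i = np.log10(i_d)
    v_low = np.interp(np.log10(i_low), log_i, v_gs)
    v_high = np.interp(np.log10(i_high), log_i, v_gs)
    return 1e3 * (v_high - v_low) / (np.log10(i_high) - np.log10(i_low))

def on_current(v_gs, i_d, i_off=1e-7, v_dd=0.7):
    """I_ON [A/um] at V_GS = V_OFF + V_DD, with V_OFF fixed by I_D = I_OFF."""
    v_off = np.interp(np.log10(i_off), np.log10(i_d), v_gs)
    return np.interp(v_off + v_dd, v_gs, i_d)

# Synthetic transfer characteristic: 70 mV/dec turn-on that saturates around 3 mA/um
v_gs = np.linspace(-0.2, 1.1, 200)
i_d = 3e-3 / (1.0 + (3e-3 / 1e-12) * 10 ** (-v_gs / 0.070))
print(f"SS   = {subthreshold_slope(v_gs, i_d):.1f} mV/dec")
print(f"I_ON = {on_current(v_gs, i_d) * 1e3:.2f} mA/um")
```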
Overall, under ideal conditions, only one 2-D TMD offers an ON-state current significantly larger than 1 mA/μm, both in its n- and p-type configuration, WS_2: the n-type I_ON is 2.71 mA/μm, the p-type one 3.03 mA/μm. Due to the aforementioned issues with the sub-threshold region of these FETs and the possibility that the WS_2 mobility might be lower than expected, it must be concluded that 2-D TMDs are probably not the best candidates to replace Si as future, more-than-Moore transistors. Further challenges, but also opportunities to improve their properties will be discussed in Section <ref>. §.§ Novel 2-D Materials Besides TMDs, other 2-D materials suitable for logic applications have emerged over the years, starting with black phosphorus, a monolayer of phosphorus (BP) atoms with a buckled honeycomb lattice (<cit.>). Its high hole mobility makes it particularly attractive as p-type field-effect transistor, as demonstrated experimentally in <cit.>. While BP has established itself as the most prominent alternative to TMDs, other novel 2-D materials have earned themselves a place under the spotlight, for example silicene in <cit.>, germanene in <cit.>, antimonene in <cit.>, InSe in <cit.>, or Bi_2O_2Se in <cit.>. Many others are also in the pipeline, see <cit.>.On the theoretical side, a high throughput (HT) investigation by <cit.> revealed that more than 1,800 2-D materials might exist, among them about 1,000 easily exfoliable monolayers. To come up with these numbers the authors considered a large set of 3-D parent crystals from the Inorganic Crystal Structure Database (ICSD, <cit.>) and the Crystallographic Open Database (COD, <cit.>). They then applied geometrical criteria to identify layered compounds and extract 2-D children from them, tested the stability of the latterin vacuum by computing their phonon bandstructure, and finally classified them according to their inter-layer binding energy E_b and the influence of van der Waals forces. Those of interest have a low E_b and their layers are kept together by non-covalent, van der Waals bonds. All notable 2-D materials (graphene, TMDs, BP, ⋯) were recovered by the HT study of <cit.>. They are accompanied by components with a huge variety of band gaps (from metals to oxides), effective masses (from isotropic to strongly anisotropic bandstructures), and monolayer thicknesses (from one to several repeatable atomic layers). Few examples are proposed in Fig. <ref>.Independently from this study, the potential of 2-D materials beyond TMDs and BP as field-effect transistors has been evaluated through device simulations, with empirical and ab initio models. The related literature is abundant and not all important contributions can be listed here. Nevertheless, the following examples are deemed representative: Bi_2O_2Se in <cit.>, monochalcogenides in <cit.>, group IV in <cit.>, group V in <cit.>, as well as on more exotic 2-D materials such as Tl_2O in <cit.>. To be able to compare the performance of transistors made of these very different 2-D components, a uniform modeling approach would be preferable, for instance the one described in Section<ref>. It is validated in Fig. <ref> for non-conventional 2-D monolayers, Ag_2N_6, GeSe, and O_6Sb_4. As for TMDs, the DFT and MLWF bandstructure results agree very well so that everything is in place to conduct a large-scale and systematic performance comparison of 2-D FETs. These results were originally published in <cit.>. 
All approximations, simulation results, and extracted material parameters are presented in its Supplementary Information. In this context, we designed a realistic 2-D transistor structure, as in Fig. <ref>, and defined a set of targeted figures of merit (FOM). The dimensions and specifications of this single-gate FET derive inspiration from the International Roadmap for Devices and Systems (IRDS) in <cit.> for the year 2025, i.e. a gate length L_G=15 nm, a supply voltage V_DD=0.7 V, and an equivalent oxide thickness (EOT) of 0.6 nm. This EOT is achieved through a 3 nm HfO_2 dielectric layer with a relative permittivity ϵ_R=20. The 2-D materials are deposited onto a SiO_2 substrate with a thickness t_box=20 nm and ϵ_R=3.9. To ensure a satisfactory electrostatic control, the source and drain extensions of the FETs are doped with a donor/acceptor concentration N_D/A=5×10^13 cm^-2. Such high values cannot be achieved experimentally for the moment, see <cit.> or <cit.>, but could be in the future, for example by combining different doping techniques. The SiO_2 and HfO_2 domains do not enter the NEGF equations; they are treated as perfectly insulating layers that only impact the electric field profile. In terms of FOM, we are looking for 2-D materials offering an ON-state current larger than 3 mA/μm, both in their n- and p-type configurations, at an OFF-state current I_OFF=0.1 μA/μm. At the same time, the sub-threshold slope SS should not exceed 80 mV/dec at a gate length L_G=15 nm. We selected 100 different 2-D materials from the database of <cit.> that could potentially reach these objectives. As a first criterion, we singled out thin semiconductor monolayers with a small number of atoms in their primitive unit cell (PUC). Compounds thicker than 1.5 nm or with more than 30 atoms in their PUC were not considered. Secondly, using the bandstructure calculation results of <cit.>, we further restricted ourselves to 2-D materials with a band gap larger than 1 eV and, if possible, anisotropic conduction band minima and/or valence band maxima so that a low transport and high density-of-states effective mass are simultaneously obtained. Finally, only monolayers that are stable in vacuum were retained, i.e. those whose phonon bandstructure does not have negative branches around the Γ-point. Fig. <ref> reports the n- and p-type transfer characteristics (I_D-V_GS) of 6 promising 2-D materials that satisfy the conditions listed above: Ag_2N_6, As_2, GeSe, O_6Sb_4, black phosphorus (BP), and SiH (silicane). Under ideal conditions, they all deliver I_ON≥3 mA/μm, which is about 3× larger than that of MoS_2, BP even reaching ON-state currents in the order of 5 mA/μm. The SS of all transistors is around 70 mV/dec at L_G=15 nm, which is 10 mV/dec lower than the target that was set. In total, out of the 100 examined 2-D materials, 13 arrive at the desired level of performance. Their FOM and effective masses are summarized in Table <ref>. None of the conventional TMDs from Fig. <ref> (MoS_2, MoSe_2, MoTe_2, WS_2, and WSe_2) belongs to that group, although WS_2 gets very close to it (n-type I_ON=2.71 mA/μm and p-type I_ON=3.03 mA/μm), as can be seen in Table <ref>. Two less common TMDs, HfS_2 and ZrS_2, seem to have a higher potential, their n- and p-type ON-state currents being above 3 mA/μm in the ballistic limit of transport. To give an overview of all 2-D materials that have been simulated, we put together their “n-type I_ON vs. p-type I_ON” characteristics in Fig. <ref>(a).
This plot allows to rapidly identify the 13 best-performing candidates as they are situated in the upper right corner delimited by black dashed lines. Overall, several 2-D materials (39) exhibit anON-state current larger than 3 mA/μm in their n-type form, e.g. P_8Si_4 (5.43 mA/μm), As_8Si_4 (5.23 mA/μm), or Tl_2O (5.09 mA/μm), much less (17) as p-FET, e.g. C_2N_4Pb_2 (4.08 mA/μm) or I_2Nb_2O_4 (3.41 mA/μm). As for the Si-based CMOS technology, the fact that more 2-D compounds have a high n-type rather than p-type ON-state current indicates that fabricating high-performance pFETs might be a challenging task in the flat land as well. Among all 2-D materials that were simulated, black phosphorus stands out as it displays the largest “n-type I_ON vs. p-type I_ON” combination, when the transport direction of theFET is aligned with the Γ-X crystal axis of BP. The conduction and valence band anisotropy of this monolayer (see Fig. <ref>) is at the origin of the high current densities. Its transport effective mass, m_trans, is equal to 0.16 m_0 for electrons (0.14 for holes), whereas its density-of-states counterpart, m_DOS, amounts to 0.42 m_0 (0.82 for holes). Other 2-D materials benefit from anisotropic bandstructures, which is the reason why they deliver I_ON's larger than 3 mA/μm, as can be generally seen in Fig. <ref>(b-c). This is the case of the electrons and holes in Ag_2N_6, As_8Ge_4, or As_8Si_4, and of the holes in I_4O_4Sc_4, for example. Their band extrema have an ellipsoid shape. In fact, almost all 13 best components are characterized by a m_trans lower than their m_DOS, i.e. they are situated above the dashed black lines that correspond to materials with an isotropic bandstructure in Fig. <ref>(b-c). It should be noted that m_trans and m_DOS were not directly extracted from the bandstructure of the 2-D materials, but by calculating their charge and current densities with analytical equations, as in <cit.>.The question that arises with materials having an anisotropic conduction band minimum and/or valence maximum is “what happens if the direction along which the electrical current flows is not perfectly aligned with the most suitable crystal axis?”. For example, it is well-known in BP that transport along the Γ-Y axis is (much) less efficient than along Γ-X (<cit.>). Using the proposed MLWF+NEGF approach, we found in <cit.> that orientation misalignments up to 50^∘ from the ideal case do not significantly alter the ON-state current of BP transistors, with almost negligible performance loss up to a misalignment angle of 20^∘. This means that I_ON does not linearly decrease as a function of the misalignment angle, but rather first stays constant up to 20^∘, slightly decreases up to 50^∘, and finally rapidly drops. This behavior, which occurs both in the ballistic limit of transport and in the presence of electron/hole-phonon and charged impurity scattering, can be explained by considering the angle-dependent value of the m_trans and m_DOS effective masses.It is important to realize that the impact of misalignment angles that was discussed above for black phosphorus transistors can be generalized to any 2-D materials with an anisotropic bandstructure. If the m_Γ-X/m_Γ-Y ratio of the effective masses extracted from the bandstructure along the two main axes of the Brillouin Zone is smaller than 0.1, i.e. if m_Γ-Y≥10m_Γ-X in Fig. <ref>, the ON-state current only marginally decreases up to δ≤20^∘, by 25% if the misalignment is pushed to 40^∘. 
It is clear that the magnitude of the ON-state current still depends on the DOS of each 2-D material. Nevertheless, a region where the current is almost insensitive to the misalignment angle can be expected in all cases, as suggested in <cit.>. If 2-D materials are to replace Si as the channel of future field-effect transistors, they should be usable over several consecutive technology nodes. Consequently, it should be possible to scale their gate length and still get a performance superior to that of Si FinFETs or their potential successors, gate-all-around nano-sheets, see <cit.>. Thus, in Fig. <ref>, we show the sub-threshold slope of all simulated 2-D FETs as histograms for the gate lengths L_G=15, 12.5, 10, 7.5, and 5 nm. The corresponding average values and standard deviations are provided on the right-hand side of the plot. While SS slowly increases when L_G shrinks from 15 down to 10 nm, it explodes below 10 nm, as do the variations among the different 2-D materials. From these results, it does not seem feasible to scale single-gate 2-D FETs below 10 nm. By adding a second, symmetric gate at the bottom of the structure in Fig. <ref>, an improvement of the transistor scalability can be expected. The double-gate architecture is probably the only viable path for 2-D materials to compete with Si FETs. Still, other technology issues remain to be solved, as detailed in the next section. At the same time, new logic switch opportunities could emerge for 2-D materials.§ CHALLENGES AND OPPORTUNITIES§.§ Electrical Contacts between Metals and 2-D Monolayers One of the key challenges 2-D materials face is making electrical contact with metallic electrodes. Different approaches can be used to inject electrons into monolayers, the most common ones being top (<cit.>), side (<cit.>), and phase-engineered contacts (<cit.>). The first two are plotted in Fig. <ref>. Besides these geometries, the choice of the metal (<cit.>), the introduction of an interfacial layer between the metal and semiconductor (<cit.>), or the doping of the channel (<cit.>) represent additional design options. Typically, all these contact configurations are characterized by resistances in the kΩ·μm range (<cit.>) instead of 150 to 200 Ω·μm as in Si FinFETs, see <cit.>. It should nevertheless be mentioned that phase-engineered or nickel-etched graphene electrodes, as in <cit.>, can give lower resistance values, on the order of 200 Ω·μm, but for multilayer, not monolayer MoS_2. A question that remains open concerns the transfer of electrons from the metallic contact to the semiconductor channel, especially in the case of top-contact architectures. The transfer length L_T measures the average distance that an electron needs to completely leave the metal electrode and enter the semiconducting channel. Two different scenarios are possible, one labeled "edge process" (electrons flow through the metal up to the edge of the metal-semiconductor interface, L_T is close to 0), the other one "area-dependent process" (electrons are gradually transferred from the metal to the semiconductor, L_T≫0). For example, in <cit.>, a transfer length L_T≃ 600 nm was found for a monolayer MoS_2 with titanium contacts, while in <cit.> Au electrodes on top of bilayer MoS_2 led to low contact resistances R_C=740 Ω·μm and a short L_T of roughly 30 nm, i.e. a nearly edge process. On the theoretical side, different explanations for such behavior have been proposed.
Relying on DFT, <cit.> came to the conclusion that the MoS_2 layer below the metallic contact metallizes, which creates an area-dependent injection of electrons. Through device simulations performed in the effective mass approximation, <cit.> determined that the transfer length depends on the number of layers composing the 2-D material, going from an edge process in monolayers to an area-dependent one in multi-layer compounds. Using our ab initio quantum transport approach, we managed to reconcile these theories. If the metal-semiconductor interface is perfectly clean, the transmission of electrons from the electrode to the channel tends to be edge-dependent in monolayers, while the transfer length increases if an interfacial layer is present between both materials (<cit.>). While the exact nature of the transfer process is not yet fully understood, it is clear that L_T should be as small as possible to make the 2-D technology fully scalable. Recently, <cit.> demonstrated few-layer MoS_2 FETs with 13-nm-long top contacts and still very good performance in terms of ON-state current, sub-threshold slope, and contact resistance. These devices therefore have an ultra-short L_T, which goes exactly in the right direction. Note that side contacts are intrinsically more scalable, but they do not currently offer the same performance as top ones, as illustrated in <cit.>.§.§ 2-D Mobility Limiting Factors Ab initio calculations have been widely used to predict the phonon-limited mobility of 2-D materials, with a strong focus on TMDs. One of the first such calculations was done in 2012 by <cit.> for monolayer MoS_2. The electron-phonon scattering rates obtained from DFT served as inputs to fit deformation potentials that were then used in a standard linearized Boltzmann Transport Equation (LBTE) solver. A mobility of 410 cm^2/Vs at room temperature was returned by this approach. This is much larger than what has so far been measured experimentally, e.g. 63 cm^2/Vs at 240 K in <cit.> or 35.7±2.6 cm^2/Vs at 300 K in <cit.>, depending on the dielectric environment. More recent MoS_2 mobility calculations based on fully ab initio electron-phonon scattering rates revealed values much closer to experiments, 150 cm^2/Vs in <cit.> or 144 cm^2/Vs in <cit.>. However, other effects such as charged impurity scattering (CIS) (<cit.>) or surface optical phonons (SOP) (<cit.>) could also play a critical role and bring the mobility closer to measurements. In Fig. <ref>(a), we show mobility results for single-layer MoS_2 that were computed with our in-house LBTE solver described in <cit.>, including ab initio electron-phonon interactions as well as CIS and SOP. The only parameter that was neither calculated nor taken from the literature is the concentration of charged impurities, n_imp. This quantity was used as a fitting parameter to best reproduce the experimental data of <cit.>, which was achieved with n_imp=2.5×10^12 cm^-2. An excellent agreement between simulations and experiments is obtained for two different electron concentrations, 7.6×10^12 and 1.15×10^13 cm^-2. Getting the mobility gives a lot of information about the transport properties of a material, but being able to determine the influence of this figure of merit on the "current vs. voltage" characteristics of a device is equally important. To do that, we used our LBTE solver to calibrate the magnitude of the scattering mechanisms in our MLWF+NEGF QT simulator, OMEN (<cit.>).
This step is necessary as several approximations to the scattering self-energies must be applied in the NEGF formalism. To compensate for them, each scattering rate (electron-phonon, CIS, and SOP) can be scaled by a different factor so that both LBTE and QT produce the same mobility. This is what we did for a MoS_2 monolayer before simulating the transfer characteristics of a device similar to the one in Fig. <ref>(a). Results are presented in Fig. <ref>(b), where the currents with electron-phonon scattering only and with CIS and SOP added are compared to each other. It can be seen that CIS and SOP, together, are responsible for a reduction of the ON-state current by a factor of 2 as compared to the case without them, so that I_ON does not exceed 500 μA/μm at a gate length L_G=15 nm. This finding indicates that there is room for improvement. The impurity concentration might indeed be decreased by improving the 2-D crystal quality and its interface with the dielectric environment, while SOP might be minimized through substrate engineering. Furthermore, strain might have a beneficial impact on the electronic properties of MoS_2, as theoretically demonstrated in <cit.> for other 2-D materials. Overall, there are certainly multiple paths to push the ON-state current of MoS_2 above 1 mA/μm. §.§ 2-D Oxides What has made silicon such an attractive channel material for transistors is the existence of a native oxide, SiO_2, that can be used to separate it from gate contacts or from another dielectric layer. Although not perfect, the Si-SiO_2 interface contains a low defect density, contrary to what is found if SiO_2, Al_2O_3, or HfO_2 is deposited on a TMD monolayer, see <cit.>. There are a few exceptions, such as ZrO_2 on ZrSe_2 or HfO_2 on HfSe_2: although large ON/OFF current ratios were demonstrated, the extracted carrier mobilities remained small and the I-V characteristics still exhibited a hysteresis (<cit.>) when sweeping the gate voltage back and forth. Such behavior is usually induced by the presence of interface traps that are charged and discharged as the gate potential varies. Another approach consists of placing a 2-D oxide on a 2-D channel material. The most common 2-D insulator is hexagonal boron nitride (hBN). For example, encapsulating MoS_2 between hBN layers has been shown to produce higher mobility values, both in mono- and few-layer configurations (<cit.>). Using hBN in transistor applications might however not be optimal as the relative dielectric permittivity, ϵ_R, of this material is on the order of 5. To reach an EOT of 1 nm or less, a hBN thickness of <1.3 nm is needed, which corresponds to 3 to 4 layers. We used our ab initio QT simulator to determine what the implications of such thin oxides might be for gate leakage currents (<cit.>). As a testbed, a Au-hBN-Si metal-oxide-semiconductor (MOS) capacitor was constructed at the atomic level and the current that flows through it was computed as a function of the applied voltage. It was found that 3 and 4 layers of hBN are not sufficient to satisfy the IRDS requirements. The situation dramatically worsens if a defect is present in the hBN dielectric. Bridges can form between adjacent hBN layers, which locally increases the leakage current, as can be seen in Fig. <ref>. All current trajectories converge towards the defect location. Other 2-D oxides have been deposited on TMD channels, e.g. CaF_2 in <cit.>.
The advantages of such crystals are that they have a perfectly ordered structure, contrary to SiO_2 or HfO_2, which are amorphous, and that they form a quasi van der Waals interface with 2-D channels. Furthermore, the dielectric constant of CaF_2 is higher than that of hBN, such that 2 nm of this oxide yields an EOT of roughly 1 nm. Given the fact that the 2-D material space includes more than 1,800 compounds (<cit.>), it can be expected that several other 2-D oxides might be exfoliated in the future and that some of them will be competitive with HfO_2 in terms of ϵ_R and band offsets, while preserving a clean oxide-semiconductor interface. §.§ Advanced Logic Concepts 2-D materials do not only face challenges; they also offer opportunities in advanced logic applications. Their excellent electrostatic control properties are particularly appealing for the realization of band-to-band tunneling field-effect transistors (TFETs). An entire chapter of this book is dedicated to these logic concepts that could theoretically exhibit a sub-threshold slope below 60 mV/dec at room temperature (<cit.>). Such a feature enables a drastic reduction of the supply voltage and therefore of the power consumption of integrated circuits. However, due to the tunneling nature of the injection mechanism, TFETs tend to suffer from very low ON-state currents. Typically, a large band gap is needed to ensure a steep SS and a low I_OFF, while a small band gap is necessary to boost I_ON. This dichotomy can be partly addressed through the usage of heterojunctions, as demonstrated in <cit.>. Through device simulation, it has been shown in <cit.> and <cit.> that conventional TMD monolayers are probably not the best TFET candidates as their large band gaps do not allow the ON-state current to reach large values. This behavior is confirmed in Fig. <ref>(a): if the OFF-state current is fixed to ∼0.1 nA/μm and the supply voltage V_DD to 0.5 V, I_ON does not exceed 1 μA/μm, except for WTe_2, a TMD that is rather difficult to stabilize in the 2H phase. Even worse, the SS exceeds 60 mV/dec for several of the considered TMDs. This can be explained by the fact that even in the OFF-state, the tunneling channel is already open, i.e. the conduction band below the gate contact has already been pushed below the valence band of the source, see <cit.>. Once this condition is satisfied, the increase of I_ON with respect to V_GS slows down and becomes almost linear instead of exponential. There are different solutions to obtain a better TFET performance, for example by combining different TMDs and forming van der Waals heterostructures. The benefit of such approaches has been highlighted both theoretically in <cit.>, <cit.>, or <cit.> and experimentally in <cit.>, <cit.>, or <cit.>, combining different TMDs together or one TMD with another material, e.g. germanium. Alternatively, the huge variety of properties encountered in 2-D materials (see <cit.>) can be taken advantage of to identify compounds with low band gaps, compatible with a high ON-state current. With the help of our MLWF+NEGF solver, we have simulated the electrical behavior of relevant examples. Their transfer characteristics are displayed in Fig. <ref>(b). All have I_ON's on the order of 100 μA/μm at V_DD=0.5 V and I_OFF=0.1 nA/μm, which is 100× larger than most TMDs. At the same time, a steep SS is obtained.
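As a side note, the figures of merit quoted throughout this chapter (I_ON at a fixed I_OFF and supply voltage, together with the sub-threshold slope) can be extracted from any simulated transfer characteristic with a short constant-current post-processing step. The sketch below illustrates one possible implementation; the interpolation details and the synthetic test curve are assumptions of this sketch, not the actual post-processing applied to the data discussed here.

```python
import numpy as np

def extract_fom(v_gs, i_d, i_off=0.1e-6, v_dd=0.7):
    """Constant-current FOM extraction from an n-type transfer characteristic.
    v_gs in V (monotonically increasing), i_d in A/um.  The OFF-state gate
    voltage V_off is defined by I_D(V_off) = I_OFF; I_ON is read at
    V_off + V_DD, and SS is the inverse sub-threshold slope around V_off."""
    log_id = np.log10(i_d)
    v_off = np.interp(np.log10(i_off), log_id, v_gs)   # requires increasing I_D
    i_on = 10.0 ** np.interp(v_off + v_dd, v_gs, log_id)
    slope = np.gradient(v_gs, log_id)                  # dV_GS / dlog10(I_D), in V/dec
    ss = 1e3 * np.interp(v_off, v_gs, slope)           # mV/dec
    return i_on, ss

# Hypothetical characteristic (exponential turn-on followed by saturation),
# evaluated at the TFET operating point used above (V_DD = 0.5 V, I_OFF = 0.1 nA/um).
v = np.linspace(-0.2, 0.8, 201)
raw = 1e-10 * 10 ** (v / 0.07)
i = raw / (1.0 + raw / 3e-4)
print(extract_fom(v, i, i_off=0.1e-9, v_dd=0.5))
```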
Further investigations would be needed to screen the available design space, which may contain many more 2-D materials with band gaps between 0.5 and 1 eV.§ CONCLUSION AND OUTLOOK In this Chapter, the potential of 2-D materials as field-effect transistors has been discussed from a modeling perspective, starting from the key features of monolayers. The importance of being able to simulate their electrical characteristics has been introduced. Among all possible approaches, the combination of plane-wave density functional theory, maximally localized Wannier functions, and quantum transport has been selected for its versatility and accuracy. By applying it, it has been revealed that transition metal dichalcogenides cannot currently provide a switching performance that is comparable to that of Si FinFETs. They suffer from technical difficulties that will probably disappear with time, such as their electrical contacting, as well as from inherent deficiencies, e.g. relatively large effective masses that negatively impact their carrier mobilities. We have shown theoretically that novel 2-D materials represent an attractive alternative to TMDs with excellent figures of merit as logic switches. It remains to confirm experimentally the predicted performance, which first requires isolating the desired mono- or few-layer compounds.§ ACKNOWLEDGMENT The work presented in this Chapter was supported by ETH Zurich (grant ETH-32 15-1) and by the Swiss National Science Foundation (SNSF) under grant no. 200021_175479 (ABIME) and under the NCCR MARVEL. We acknowledge PRACE for awarding us access to Piz Daint at CSCS under Project pr28, PRACE for the allocated computational resources on Marconi at CINECA under Project 2016163963, and CSCS for Project s876.
http://arxiv.org/abs/2310.17724v1
{ "authors": [ "Mathieu Luisier", "Cedric Klinkert", "Sara Fiore", "Jonathan Backman", "Youseung Lee", "Christian Stieger", "Áron Szabó" ], "categories": [ "cond-mat.mes-hall" ], "primary_category": "cond-mat.mes-hall", "published": "20231026183629", "title": "Field-Effect Transistors based on 2-D Materials: a Modeling Perspective" }
CREST, Osaka University, Japan ([email protected]); Osaka University, Japan ([email protected]); Osaka University, Japan ([email protected]); University, Egypt and Osaka University ([email protected]); Osaka University, Japan ([email protected]) The growing demand for ride-hailing services has led to an increasing need for accurate taxi demand prediction. Existing systems are limited to specific regions, lacking generalizability to unseen areas. This paper presents a novel taxi demand forecasting system that leverages a graph neural network to capture spatial dependencies and patterns in urban environments. Additionally, the proposed system employs a region-neutral approach, enabling it to train a model that can be applied to any region, including unseen regions. To achieve this, the framework incorporates the power of a Variational Autoencoder to disentangle the input features into region-specific and region-neutral components. The region-neutral features facilitate cross-region taxi demand predictions, allowing the model to generalize well across different urban areas. Experimental results demonstrate the effectiveness of the proposed system in accurately forecasting taxi demand, even in previously unobserved regions, thus showcasing its potential for optimizing taxi services and improving transportation efficiency on a broader scale.
One Model Fits All: Cross-Region Taxi-Demand Forecasting
Hirozumi Yamaguchi
January 14, 2024
===========================================================
§ INTRODUCTION The increasing popularity of ride-hailing services has revolutionized urban transportation, providing convenient and efficient mobility options for individuals in bustling cities worldwide. However, the success of these services heavily relies on the ability to accurately predict taxi demand, allowing companies to effectively allocate resources, reduce customer waiting time, and enhance overall transportation efficiency. Traditional approaches to taxi demand prediction have predominantly focused on utilizing historical demand data and incorporating various machine-learning techniques <cit.>. However, these methods often suffer from limitations. Firstly, there is the risk of privacy leakage of the customers' data, which has been addressed by recent pioneering work in <cit.>. However, the most challenging limitation learning-based techniques face is their inability to capture the complex spatial dependencies and patterns inherent in urban environments. Specifically, a significant limitation of existing taxi demand prediction systems is their lack of generalizability to unseen regions. Most models are trained and tailored to specific geographical areas, rendering them ineffective when applied to new or unobserved regions <cit.>. This restricts the scalability and applicability of these systems, hindering their potential for broader deployment. To address these challenges, we propose a novel taxi demand forecasting system that leverages the power of multi-view graph neural networks to capture spatial dependencies and patterns in urban environments. Graph neural networks (GNNs) have gained considerable attention in recent years due to their ability to effectively model complex relationships in graph-structured data.
By representing the city's road network as a graph, with nodes representing intersections or regions and edges representing connectivity, we can capture the spatial relationships and proximities between different locations. Our proposed system adopts a region-neutral approach, aiming to develop a model that can be trained on one region but effectively applied to any region, even previously unobserved ones. This generalizability is crucial for the scalability and practicality of the system, as it allows for the seamless deployment of the model in new cities or regions without the need for region-specific training. To achieve this, we incorporate a disentanglement technique using a Variational Autoencoder (VAE) <cit.> to separate the input features into region-specific and region-neutral components. This disentanglement enables the model to extract the essential factors that are specific to each region while extracting region-neutral characteristics that transcend geographical boundaries. By leveraging these region-neutral features, our system can make accurate predictions of taxi demand across diverse urban areas, irrespective of their unique characteristics. The proposed system underwent a thorough evaluation using a real-world open dataset to assess the system's effectiveness in demand prediction performance. The obtained results validate the system's capability to achieve a remarkable taxi-demand prediction performance of 80.2% in previously unseen regions. This represents a substantial improvement over the existing state-of-the-art techniques, surpassing them by up to 28.6%. These results demonstrate the feasibility of accurate prediction in real-world applications. Our contribution is fourfold: 1) To our knowledge, this is the first work to address cross-region taxi demand prediction. 2) The proposed design allows us to learn a region representation and guarantees its region independence. 3) We evaluated the impact of multi-view graph processing and of combinations of its several views. 4) We conducted extensive experiments on real-world large-scale datasets. Our proposed system achieved 80.2% accuracy and clearly outperforms all other spatio-temporal methods. This paper is organized as follows: Section <ref> clarifies the problem statement and gives the definitions. Section <ref> presents our methodology, including graph representation, the architecture of our multi-view graph network, and the region-independent feature extraction. Section <ref> discusses the experimental setup and analyzes the results. Section <ref> reviews related work in taxi demand prediction and graph-based models. Finally, Section <ref> concludes the paper. § PRELIMINARY AND PROBLEM DEFINITION §.§ Definitions * Region: A spatial region, denoted as r ∈R, pertains to a well-defined area within a geographical context. It comprehensively encompasses diverse administrative divisions. To illustrate, we can denote the city of Tokyo as r_Tokyo, representing the entirety of Tokyo's area, which encompasses its administrative divisions, neighborhoods, landmarks, and the surrounding regions. * Cell: A cell, within the context of the taxi demand prediction task, is defined as the smallest unit into which the region is partitioned. * Time slot: The continuous time is partitioned into sequential and equal time intervals.
Each slot is denoted as T_k, 1 ≤ k ≤ K, where K represents the number of time intervals during a continuous period of time. * Taxi Demand: In the region r, passengers locally request or pick up a taxi and travel to their destination. The number of taxi demands in each cell is defined as the sum of the number of pick-ups in each time slot. §.§ Problem Formulation Taxi demand prediction is a problem that aims to predict the demand at time slot t+1, given the data until time slot t. In addition to historical demand data, we can also use other statistical and meta-features of the region such as mobility statistics, Points of Interest (POI), and meteorological data. We define those external features for a cell i and time step t as a vector e^i_t∈ℝ^l, where l is the number of features. Therefore, our target application is formulated as Eq. <ref>: y_t+1^i = ℱ(X_i)= ℱ(y^i_t-h,⋯,t, e^i_t-h,⋯,t), where y^i_t-h,⋯,t is the historical demand, e^i_t-h,⋯,t is the external feature vector for a cell i from time slot t-h to t, and h is a fixed number of preceding (historical) time slots. In this paper, our goal is to train a deep-learning model that extracts a region-independent representation to predict taxi demand in unseen regions. Specifically, we are given labeled data from N source regions {(X^r, Y^r)}^N_r=1, where X^r is the set of features in region r and Y^r is the demand for region r. To make sure that the representation is independent of region-specific knowledge, we evaluate the prediction model on the test data X^r̃ from an unseen region r̃∉R_source, where R_source is the set of regions used for training the model. The main challenge in predicting taxi demand in unseen regions lies in the fact that the demand patterns differ significantly between regions. This difference arises due to various factors such as spatial distribution, temporal trends, and geographic characteristics. To highlight the inherent difficulty of this problem, Fig. <ref> provides an insightful illustration. The t-SNE embedding of the original features in Fig. <ref> distinctly showcases the separation of data points based on their respective regions. This visualization underscores that the taxi demand prediction model ℱ^r_Chicago, trained using data from region r_Chicago (green dots), struggles to accurately predict the demand Y^r_NYC for region r_NYC when provided with inputs X^r_NYC (blue dots). This performance limitation arises from the challenge posed by out-of-distribution data <cit.>. Conversely, when the model is trained using region-agnostic features, as depicted in Figure <ref>, it demonstrates the ability to perform well in previously unseen regions. This observation highlights the importance of extracting region-independent feature representations to achieve robust and general taxi demand prediction. Our primary objective is to achieve region-agnostic taxi demand prediction by leveraging a deep learning framework that can effectively transform region-specific features into region-independent representations, as exemplified in Figure <ref>. By training this framework using data from the source regions, we aim to create a model that can accurately predict demand in any target region. This unified model eliminates the need for region-specific models and enables the utilization of a single model for predicting demand across various regions. § THE PROPOSED FRAMEWORK §.§ Framework Overview The proposed framework, as illustrated in Figure <ref>, operates in two stages: an offline training stage and an online inference stage.
During the offline training stage, the framework leverages a large dataset consisting of labeled samples collected from diverse regions. These samples are characterized by cell information, time slots, and semantics of location, transforming the labeled samples into an interpretable format suitable for machine learning models. These spatio-temporal data are then represented as a graph structure by the Graph Processing module. This module captures the inherent relationships and dependencies among the features, enabling a comprehensive understanding of the region's dynamics. Following this, the graph is fed into the Region-Independent Feature Extraction module, which facilitates the extraction of region-agnostic and region-specific features separately. This step ensures a clear distinction between factors that are influenced by the region itself and those that are independent of the region. Lastly, the framework trains the Taxi Demand Prediction Model using the region-agnostic features, enabling the model to generalize and make accurate predictions across different regions. In the online stage, the framework enables taxi service providers to query the system for forecasting demand patterns in any region at specific time intervals, even those previously unobserved. This involves processing associated views through the graph processing module to construct a graph representing the region. The resulting graph is then passed to the pre-trained encoder model to extract region-independent features. These features are subsequently utilized by the unified taxi demand prediction model to forecast the demand within the target region. §.§ Hexagonal Virtual Gridding This module is a pivotal component of the data generation system aimed at accurately computing the taxi demand for inputs of machine learning models. This module plays a vital role in transforming raw trajectory data into a manageable and interpretable format suitable for utilization by machine learning models. The primary objective of this module is to partition the map into evenly spaced hexagonal cells, wherein each cell represents a distinct area on the map. Subsequently, the module calculates the number of demand events that transpired within each hexagonal cell during a specific time slot in a day. Notably, the module does not differentiate between pick-up and drop-off events, solely focusing on the aggregate count of demand events within each cell. The gridding process entails superimposing a virtual hexagonal grid onto the map. This approach enables the system to provide a comprehensive overview of taxi demand in different areas of the city, facilitating predictions regarding the number of demand events in distinct areas. Moreover, it allows for the facile visualization of demand patterns and the identification of regions characterized by high or low demand. We adopt a hexagonal grid, as opposed to a square grid, due to its efficiency and effectiveness in representing geographic regions. Specifically, hexagons provide balanced neighboring, as each hexagon shares a common edge with six neighboring hexagons. This property ensures a more equitable distribution of neighboring cells, reducing edge effects and accurately representing spatial relationships.
Additionally, hexagons allow for more compact packing, covering a given area with fewer cells, resulting in a more accurate depiction of the geographic space and reduced redundancy within the grid. Moreover, hexagonal cells have equidistant centers, ensuring consistent and regular spacing throughout the grid. This characteristic facilitates precise distance calculations and enables robust spatial analysis. Furthermore, hexagons offer directional flexibility, allowing movement in six possible directions. This flexibility enhances the system's ability to capture and analyze spatial patterns, making it particularly advantageous in modeling transportation systems and understanding travel patterns. §.§ Graph Processing This module is responsible for representing the input spatio-temporal features in graph structures. This is due to the ability of graph representation to capture and represent complex relationships and dependencies among various components of the data, which enhances the effectiveness of predicting taxi demand, even in previously unobserved regions. To achieve this, we employ the historical demand graph. This captures past demand trends and spatial distributions, providing valuable insights for predicting future demand. Each node in it encodes spatio-temporal information, such as the relative latitude/longitude of the containing cell and the day of the week, which are represented as one-hot vectors, and the time slot. The edges connect nodes that are spatially related, such as adjacent cells, and the weight of each edge corresponds to the connectivity of the nodes in terms of vehicle mobility. The weight is determined by the volume of traffic during a specific time period, indicating the strength of connectivity between nodes on the map. §.§.§ Spatio-temporal view Since taxi demand strongly depends on spatio-temporal features, past demand trends and spatial distributions are useful for predicting future taxi demand <cit.>. Every time slot, we consider one cell as a node and the region as a graph that represents the taxi demand map. Our proposed model incorporates STGCN <cit.> to extract features from time-series graph data and to consider the spatial and temporal relationships simultaneously. To characterize the spatial correlation, cells are connected with the nearest six cells by edges. The weight of the edge is a two-dimensional value. The first weight is the geographical distance between the centroids of the cells, normalized by the cell size. The second weight of the edge is the distance along the road network. We calculate the shortest path along the road network by using the OpenStreetMap API. From the temporal view, we feed the recent 6-time-slot historical demand graphs to the STGCN. Although this module requires the recent historical taxi demand of the target region, this is not a heavy burden because our framework only needs the demand records of the most recent six time slots over the entire target region. For example, crowd-sourcing can solve this problem easily. §.§.§ Mobility view While short-term demand trends are essential to predict the next-time-step demand, mobility trends within the region, which are long-term tendencies, are also useful for demand prediction. For example, taxi demand from the restaurant district to a residential area at night may be high compared to other origin-destination (OD) pairs.
Different from the graph of the spatio-temporal view, we first calculate the OD pair matrix, which represents the number of demands from each pick-up cell toward each drop-off cell, and from each cell, we connect edges to at most the top five destination cells. This connectivity equates to the mobility tendency of the region. This statistical mobility tendency is easy to access because some administrative agencies provide surveys of it, or users can substitute values obtained from map monitoring APIs. §.§.§ Semantic view Intuitively, cells with similar functionality may have similar demand patterns. For example, residential areas may have a high number of demands in the morning when people commute, and commercial areas may be expected to have high demand on weekends. Although a similar area may not necessarily be close geographically, this similarity is useful for predicting taxi demand. To represent this relationship, we construct a functional similarity graph over the cells, named the semantic graph. We use the functionality of each cell (called "landuse") as a feature by using the APIs provided by Foursquare[https://foursquare.com/]. §.§.§ Network architecture Fig. <ref> shows an overview of the graph processing module of our proposed framework. As we mentioned above, we employ three types of view graphs to predict taxi demand accurately. To leverage each view graph, we design a graph convolution network for each view. In order to treat the region as a graph G, we define the graph for each region as G = (V,E), where the nodes V correspond to the set of locations L (so that |V| = |L|) and E ⊆ V × V is the edge set. The spatio-temporal graph processing network takes the historical taxi demand sequence of the most recent h time steps, {D_t-h+1, D_t-h+2, …, D_t}, as input to learn the historical spatial-temporal patterns. These h slots of demand are combined and organized into a 3-D matrix of shape h × V × d, where d is the node feature dimension. In this paper, since we only use demand as a node feature for the spatio-temporal view, the dimension of d is 1. We convert the spatio-temporal graph (historical demand sequence) to a feature representation of size ℝ^V × h_ST by using STGCN <cit.>, where h_ST is the number of hidden channels. Features are extracted from the semantic and mobility view graphs by GCN <cit.>. The semantic graph processing network takes the static graph as a 2-D matrix of shape V × S_n to learn the relationship between functional similarity and demand pattern, where S_n is the dimension of the semantic node feature. The output channel of the semantic graph network is h_S. The mobility view graph is processed in the same way as the semantic view. The output channel of the mobility graph network is h_M. In order to treat the three different views of the graph as features simultaneously, we concatenate these graphs and form a 2-D matrix of shape V × H, where H = h_ST + h_S + h_M. §.§ Region-Independent Feature Extraction The region neutralization module plays a crucial role in the effective separation of region-dependent components from the region-independent latent features associated with taxi demand prediction. The high coupling between the input data and its corresponding region poses a challenge for generalization. Consequently, this network is trained to perform the separation task, facilitating the projection of input data into two distinct spaces: region-specific and region-independent. §.§.§ Network Architecture The network architecture of the proposed region-independent approach is depicted in Fig. <ref>.
It comprises five sub-networks: two encoders, a decoder, a demand prediction module, and a region classifier. To extract latent features that capture the region-independent and region-specific factors, we utilize two encoders: q(z|x^r) and q_r(z_r|x^r), parameterized by Φ and Φ_r, respectively. Here, x^r represents the input feature. Both encoders share the same structure, consisting of three Graph Convolutional Network (GCN) convolutional layers and ReLU activation functions, as shown in Fig. <ref>. The decoder, denoted as p(x^r|z_r,z;ϕ), aims to reconstruct the input as x̂^r using the extracted latent features z and z_r. It is constructed using the InnerProductDecoder <cit.>. The demand prediction module SR_y is responsible for predicting taxi demand, while the region classifier SC_r determines the region to which a given latent feature belongs. Both modules are implemented as fully connected (FC) neural networks. As the region classifier performs graph classification, a global mean pooling layer is applied before the FC model. §.§.§ Loss function To effectively extract both region-independent and region-specific factors, we employ three loss functions: the Variational Graph Autoencoder (VGAE) loss (L_elbo), the task-specific loss (L_TS), and the Independent Excitation loss (L_IE). The VGAE architecture in <cit.> is utilized to reduce the dimensionality of feature graphs without losing important information. The learning process involves training probabilistic encoders and a decoder, which is achieved through the following loss function (eq. <ref>): L_elbo = E_r,q_r(z_r|x^r; Φ_r), q(z|x^r; Φ)[log p(x^r|z_r,z;ϕ)] -KL(q_r(z_r|x^r;Φ_r)||p(z_r)) -KL(q(z|x^r;Φ)||p(z)), where the first term is the reconstruction error, which measures the deviation between the original features x^r and the reconstructed features p(x^r|z_r,z;ϕ). The last two terms calculate the Kullback-Leibler (KL) divergence between the sampled latent features and the corresponding priors, which are interpreted as regularizers on the latent feature spaces. Since we adopt the VGAE architecture, this learning procedure is carried out in a self-supervised manner, where KL[q(·)||p(·)] is the KL divergence between q(·) and p(·). The prior distribution p(z) is assumed to be a Gaussian distribution N(·) with zero mean and unit variance for each dimension, as: p(z) = ∏_i p(z_i) = ∏_i N(z_i | 0, I). To guide the feature learning process, we introduce two tasks handled by two task networks: the demand prediction regressor SR_y and the region classifier SC_r, with parameters w_y and w_r, respectively. These modules take the latent features obtained from their respective encoders as input and predict their corresponding labels. Specifically, SC_r classifies the region label r from the region-specific features z_r, while SR_y predicts demand from the region-agnostic features z. To achieve task-specific representations (i.e., two representations, one for region-specific information and another for region-agnostic demand prediction), we define the task-specific loss function (L_TS) as: L_TS = 1/N_s∑_r=1^N∑^N_r_i=1[ MAE(y_i^r, SR_y(z; w_y))+ ℓ(r, SC_r(z_r; w_r))]. This function, expressed in eq. <ref>, calculates the average loss over all regions and data points using the categorical cross-entropy ℓ(·) and the mean absolute error MAE(·), where N_r is the number of data points for region r. The objective of this loss function is to optimize the classifier SC_r and the regressor SR_y to accurately predict their respective labels.
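As a rough illustration of how these two objectives can be combined in practice, the sketch below transcribes the VGAE term and L_TS in PyTorch-style code (the independent-excitation term introduced next can be added in exactly the same way). The tensor shapes, the equal loss weights, and the assumption of one region per mini-batch are simplifications of this sketch rather than details taken from the text.

```python
import torch
import torch.nn.functional as F

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ), averaged over the V nodes of the graph
    return -0.5 * torch.mean(torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1))

def vgae_and_task_losses(z, z_r, mu, logvar, mu_r, logvar_r,
                         adj_logits, adj_true,
                         demand_head, region_head,
                         y_demand, region_label):
    """z, z_r: sampled node-level latents (V x d); adj_logits: inner-product
    decoder output for the reconstructed graph; demand_head = SR_y,
    region_head = SC_r (applied to the graph-pooled mean, since region
    classification is a graph-level task)."""
    # negative ELBO (the L_elbo term, written as a quantity to be minimized)
    recon = F.binary_cross_entropy_with_logits(adj_logits, adj_true)
    l_elbo = recon + kl_to_standard_normal(mu, logvar) + kl_to_standard_normal(mu_r, logvar_r)

    # task-specific loss L_TS: demand predicted from z, region label from z_r
    demand_loss = F.l1_loss(demand_head(z).squeeze(-1), y_demand)   # MAE
    region_logits = region_head(z_r.mean(dim=0, keepdim=True))      # global mean pool
    region_loss = F.cross_entropy(region_logits, region_label)
    l_ts = demand_loss + region_loss

    return l_elbo + l_ts   # equal weights assumed in this sketch
```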
Minimizing this task-specific loss plays a critical role in facilitating the disentanglement of different task-specific information (encompassing both region-specific details and region-agnostic demand information) within the latent features. To further ensure the separation of the input into distinct encoded representations of non-overlapping information, we incorporate an Independent Excitation mechanism. This mechanism is designed to minimize the accuracy of the classifier SC_r when the feature z is fed into it, as follows: L_IE = -1/N_s∑_r=1^N∑^N_r_i=1[ MAE(y_i^r, SR_y(z_r; w_y)) + ℓ(r, SC_r(z; w_r))], where N_s=∑_r=1^N N_r. It is noteworthy that the minimization of equations (<ref>) and (<ref>) leads to latent representations z and z_r becoming more representative of each respective task while being less representative for the other task. Specifically, z is designed to be highly informative for predicting taxi demand, while its capability for distinguishing regions is intentionally diminished. As such, z serves as a region-independent representation specifically tailored for taxi demand prediction. Moreover, when eq. <ref> is minimized, both z and z_r can be effectively utilized for reconstructing the original information. Consequently, z and z_r not only function as independent features but also encompass the complete information necessary for reconstruction, incorporating the reconstruction loss outlined in equation (<ref>). §.§ Online Demand Prediction During the online stage, an end user, such as a taxi service provider, can utilize the framework to forecast the unknown demand pattern in any region, including previously unobserved regions. This predictive capability is achieved through a series of systematic steps. Firstly, the framework prepares and processes the historical views using a graph processing module. Subsequently, the processed graph is fed into a pre-trained encoder model to extract region-independent features. Finally, the trained taxi demand prediction model utilizes these region-independent features to forecast the taxi demand within that target region. This predictive capability provides valuable insights to the end user, aiding in decision-making processes and resource allocation. § EVALUATION §.§ Dataset Description The proposed system is evaluated using publicly available datasets for the purpose of benchmarking, as described below. We make use of open data collected from various cities in the United States, with each city considered as a distinct region. While these datasets may contain additional features for other applications, we have extracted the specific attributes required for our taxi demand application, namely latitude, longitude, and the timestamp indicating the pick-up or drop-off time of each taxi passenger. We use the following datasets: NYC Yellow Taxi Trip Data[https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page], Taxi Trips Reported to the City of Chicago[https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew], and Mobility Traces of Taxi Cabs in San Francisco, USA <cit.>. §.§ Evaluation Metrics To assess our proposed method's performance, we calculate the prediction "accuracy" accounting for one-class error. A prediction is considered correct if the absolute error between the predicted and actual taxi demand is less than two.
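Expressed in code, this tolerance-based notion of accuracy is essentially a one-liner; the sketch below is purely illustrative (array names and shapes are assumptions).

```python
import numpy as np

def tolerant_accuracy(y_true, y_pred, tol=2):
    """Fraction of (cell, time-slot) predictions whose absolute error with
    respect to the actual demand is below `tol` (here tol = 2)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.mean(np.abs(y_pred - y_true) < tol))
```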
Such a tolerance aligns with real-world taxi dispatching systems, where estimating demand within a certain margin of error is acceptable <cit.>. We evaluate the region-agnostic approach by measuring the "accuracy" of the taxi demand prediction model in an unseen region. We employ a leave-one-city-out strategy for evaluating the models in both our dataset and the open dataset. The system parameters are as follows: the learning rate is 10^-4, the batch size is 64, the maximum number of epochs is 50, the hidden channel size of z and z_r is 32, the hexagon cell edge is 1.4 km, and the time interval is 30 minutes. §.§ Analysis of System Parameters and Modules In this section, we study the impact of the main system parameters and modules. The default parameters are reported in Table <ref>. §.§ Robustness Evaluation §.§.§ Performance Stability In this section, we assess the ability of the proposed approach to preserve the performance of region-specific taxi demand prediction in region-agnostic settings. It is well known that making a model more general by eliminating specialized (i.e., region-dependent) components from the input often leads to a significant drop in accuracy. Therefore, we compare the performance of the model trained and tested on the same region using an 80-20 split strategy, referred to as the "region-specific" approach[In this model, the region classifier is deactivated as it is trained with only one region.], with the model trained using a leave-one-city-out strategy, known as the "region-agnostic" approach. Figure <ref> illustrates the accuracy comparison between the two test cases. The results indicate that the region-agnostic model achieves nearly identical prediction accuracy compared to the region-specific model, with a difference of less than 1%. This outcome can be attributed to the incorporation of a task-specific loss and an excitation loss during the training of the proposed model. These loss functions drive the model to disentangle region-dependent components while simultaneously maintaining the accuracy of demand prediction. §.§.§ Performance Generalization This section evaluates the system's ability to consistently predict taxi demand across diverse regions with varying geographical structures and mobility patterns. Figures <ref> and <ref> depict the accuracy of our proposed method in different cities and datasets. From Figure <ref>, our approach demonstrates consistent accuracy in predicting taxi demand across four regions (r_1, r_2, r_3, and r_5). Notably, region r_4 exhibits slightly higher accuracy due to its distinct characteristics, such as a homogeneous distribution of taxi demand patterns and minimal external factors influencing demand. These factors contribute to improved predictive accuracy compared to other regions. Similarly, Figure <ref> shows comparable accuracy in NYC, Chicago, and San Francisco. These results indicate the generalizability of our proposed approach, which effectively learns transferable feature representations to predict taxi demand across diverse regions. §.§ Comparative Evaluation In this section, we compare the proposed method with the most relevant state-of-the-art techniques: Graph Convolutional Network (GCN), Node2Vec <cit.>, and Spatial-Temporal Graph Auto-Encoder (STGAE) <cit.>. GCN <cit.> is a traditional method, and we fed a semantic graph into the GCN to represent the input data. Node2Vec <cit.> is an unsupervised node embedding method that learns continuous feature representations for nodes.
We employ Node2Vec to obtain node representations from each region graph, and then train an MLP model to predict taxi demand based on the obtained node representations. STGAE <cit.> employs a Graph Auto-Encoder to extract low-dimensional latent representations from spatio-temporal graphs by minimizing the graph reconstruction loss. Figure <ref> illustrates the performance of the proposed approach in comparison to the existing approaches. The results demonstrate the effectiveness of the proposed approach and its ability to achieve an accuracy of 80.2%. With this accuracy, it outperforms the existing methods by up to 28.6%. Notably, our approach exhibits superiority even when compared to the current state-of-the-art techniques, i.e., MSNE and Node2Vec, by a minimum margin of 3.5%. The success of our approach can be attributed to its unique capability to capture region-independent representations. This feature allows our taxi demand prediction approach to generalize effectively to unseen regions. In contrast, other methods rely solely on neural networks and multi-view representational abilities, or they emphasize similarities between different regions. However, these approaches cannot ensure a general representation and dependable cross-region performance. § RELATED WORK §.§ Taxi Demand Prediction The prediction of taxi demand has recently garnered considerable attention, owing to the abundance of large-scale spatio-temporal data that facilitates the training of deep neural networks, such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) networks. Recent studies have leveraged neural networks to predict taxi demand with greater accuracy <cit.>. For example, the method proposed in <cit.> employs a CNN to capture spatial features and an LSTM to capture temporal features, resulting in improved accuracy compared to methods that only consider semantic, spatial, or temporal information. The work in <cit.> improves accuracy by leveraging spatio-temporal correlations between pick-up and drop-off locations through multitask learning. The work in <cit.> integrated semantic data that represent external factors influencing road conditions, like weather, urban events, and their text descriptions, to improve the accuracy of taxi demand prediction. These machine learning-based methods have shown promising results. However, it is difficult to transfer these models to an unseen region because these methods focus on predicting taxi demand accurately within the target region. Different from these existing methods, our proposed method aims to predict taxi demand accurately even in unseen regions. §.§ Spatial-Temporal data processing using Graph Neural Network In traffic prediction problems, there are many kinds of applications that predict traffic-related data, such as traffic volume (collected from GPS or loop sensors), traffic flow, crowds, and taxi demand (our problem). The problem formulation process for these different types of traffic data is the same from a spatio-temporal point of view. Essentially, the goal is to predict a traffic-related value for a location at a timestamp. To capture spatial and temporal correlations simultaneously and improve the accuracy of traffic-related prediction, some research employed CNNs (convolutional neural networks) and GCNs (graph convolutional networks) <cit.>.
Since CNNs were designed for Euclidean spaces, such as images and grids, they have limitations in transportation networks with non-Euclidean topology and thus cannot fully characterize the spatial correlation of traffic flow on the road network. On the other hand, graph convolutional neural networks (GCNs) are dedicated to processing network structures <cit.> and can better model the spatial dependence of road segments on traffic networks <cit.>. Since traffic-related values also depend on human activity and other external factors, such as Points of Interest (POI) and weather, prediction based on historical spatial information alone has limits because these methods cannot take such factors into account. In fact, some previous works improve the accuracy of traffic-related prediction by leveraging multi-view information that includes POI and weather <cit.>. This multi-view information is expected to be especially important when the model is used in an unseen region, because spatio-temporal traffic patterns depend on the region. Based on this analysis of existing work, we employ multi-view graph processing modules in the proposed framework. Through the experiments in Section <ref>, we confirm that this multi-view graph processing contributes to the performance of the model in the unseen region. §.§ Spatial-Temporal data processing across regions The techniques to realize cross-region machine learning models can be roughly grouped into two categories: transfer learning and representation learning. Machine learning models for spatio-temporal applications do not work well in regions where the data is scarce. Much research addresses this problem with transfer learning, which is a method to transfer knowledge from regions with lots of data to regions where the data is scarce <cit.>. In spatio-temporal applications, transfer learning has produced successful results. However, transfer learning needs labeled data in the target regions. Therefore, transfer learning is not necessarily practical or applicable to regions where there is no labeled data. Another technique to realize a cross-region machine learning model is graph representation learning. Graph representation learning aims to find a good mapping function that represents the locations within a region as low-dimensional latent embeddings. Since a good data representation is useful for multiple downstream tasks, a model trained on such embeddings is expected to be less affected by region-specific knowledge. One of the network embedding methods, Node2Vec <cit.>, aims to learn the latent representation of a node to capture local structures by random walks. However, this method uses a shallow network and cannot incorporate node feature information. To consider node feature information, a GCN-based representation learning method was proposed <cit.>. The spatio-temporal graph autoencoder (STGAE) extracts spatio-temporal graph representations that are useful for multiple tasks <cit.>. Multi-view Spatial Network Embedding (MSNE) <cit.> is an unsupervised spatial representation learning method based on a dual-adversarial auto-encoder framework using multi-view graphs. These existing graph representation learning methods successfully extract effective representations for spatial applications within the region where the feature extractor was trained.
However, these existing representation learning methods do not necessarily lead to good cross-region model performance because they only consider the graph reconstruction loss and cannot guarantee the region independence of the latent features or their utility for the target task. In contrast, our proposed system focuses on the idea of one model for all regions: it processes multi-view graphs with a novel region-agnostic mechanism consisting of two encoders responsible for extracting region-specific features and region-independent features. This method allows for predicting taxi demand in unseen regions with high accuracy. § CONCLUSION In this paper, we have introduced a taxi demand prediction framework that effectively forecasts taxi demand even in previously unseen regions by leveraging a region-agnostic mechanism. The proposed framework incorporates a GCN to represent the spatio-temporal features of the region. Additionally, we have proposed a region-independent feature extraction mechanism utilizing two independent encoders to separate region-specific and region-independent factors. We conducted experiments using well-known real-world open datasets. The results demonstrate that our approach achieves the highest accuracy in taxi demand prediction for unseen regions compared to state-of-the-art approaches. These findings demonstrate the feasibility and practicality of achieving accurate predictions in real-world applications, thereby paving the way for enhanced decision-making, resource allocation, and operational efficiency in the taxi service industry. § ACKNOWLEDGMENT This work was partially funded by JST CREST Grant JPMJCR21M5, JSPS KAKENHI Grant 22K12011, and an NVIDIA award.
http://arxiv.org/abs/2310.18215v1
{ "authors": [ "Ren Ozeki", "Haruki Yonekura", "Aidana Baimbetova", "Hamada Rizk", "Hirozumi Yamaguchi" ], "categories": [ "cs.LG" ], "primary_category": "cs.LG", "published": "20231027154204", "title": "One Model Fits All: Cross-Region Taxi-Demand Forecasting" }
Moscow Institute of Physics and Technology, 141701 Dolgoprudny, Russia; L. D. Landau Institute for Theoretical Physics, 142432 Chernogolovka, Russia; Ioffe Institute, 194021 St. Petersburg, Russia We have developed a theory of the anomalous Hall effect in a two-dimensional electron gas in the case where the time of electron-electron collisions is much smaller than the transport relaxation time. The transition between the diffusive transport regime, when the momentum relaxation length of electrons is much smaller than the channel width, and the hydrodynamic regime, when the momentum relaxation length exceeds the channel width, has been traced. The contributions of the anomalous velocity, wave packet shifts, and asymmetric scattering to the anomalous Hall field and voltage have been calculated. It has been shown that the anomalous Hall voltage caused by the asymmetric scattering can have a nontrivial coordinate dependence and change its sign depending on the specific scattering mechanism. Key words: anomalous Hall effect, magnetotransport, spin-orbit interaction, electron-electron collisions, electron fluid hydrodynamics, anomalous velocity, wave packet shift, asymmetric scattering.
Diffusive-hydrodynamic transition in the anomalous Hall effect
M.M. Glazov
January 14, 2024
==============================================================
§ INTRODUCTION In recent years, the studies of the spin and anomalous Hall effects have become topical <cit.>. In these phenomena, the spin degrees of freedom of charge carriers and the spin-orbit interaction are clearly manifested. The spin Hall effect (SHE) consists of generating a spin flux perpendicular to the applied electric field <cit.>. In multi-valley semiconductors, such as atomically thin transition metal dichalcogenides, a valley Hall effect (VHE) is possible, where carriers in different valleys propagate in opposite directions <cit.>. In the anomalous Hall effect, the spin flux is converted into electric current in the presence of an external magnetic field. This leads to anomalous contributions to the Hall voltage and Hall constant, unrelated to the action of the Lorentz force on the charge carriers, but proportional to the spin polarization of the system <cit.>. The microscopic mechanisms of the spin and anomalous Hall effects are closely related to each other and have been actively discussed in the literature over the last half-century <cit.>. By now, it is firmly established that there are three main mechanisms for these effects in non-magnetic semiconductors: (i) asymmetric scattering on impurities or phonons (skew scattering), (ii) shifts of electron wave packets during scattering (side-jump), and (iii) the anomalous velocity induced by an external electric field <cit.>. Since the anomalous velocity and the side-jump contribution have the same nature (both are caused by the action on the electron of either an external field or the field created by a static defect or phonon), under steady-state conditions a part of the side-jump contribution compensates the contribution of the anomalous velocity <cit.>, see also <cit.>. Nowadays, the spin and anomalous Hall effects have been studied in detail for the diffusive regime of electron transport. This is a typical situation for sufficiently large, macroscopic samples, whose geometric dimensions exceed both the electron mean free path l, caused by the scattering of electrons on impurities and phonons, and the spin relaxation length l_s.
However, the development of nanotechnology and progress in materials science make it possible to create ultraclean structures with a two-dimensional electron gas, where the mean free path exceeds the width of the conducting channel, l ≫ w. In this case, the electron transport turns out to be qualitatively different from the diffusive case. A striking example is the hydrodynamic flow of electrons, recently discovered in ultraclean electronic systems <cit.>. In this regime, the mean free path with respect to interparticle collisions l_ee is small compared to the channel width. Accordingly, the loss of momentum of charge carriers occurs mainly in scattering on the channel edges, while the interparticle collisions provide the viscosity of the electron liquid. This leads to a number of nontrivial effects in the transport and magnetotransport of electrons in ultraclean systems <cit.>, see also the reviews <cit.>. In ultraclean electronic channels, a significant modification of the anomalous Hall effect is expected <cit.>. In recent works <cit.>, within the framework of the kinetic equation for the spin density matrix, a theory of the spin and anomalous Hall effects in ultraclean channels with a two-dimensional electron gas has been developed, and the contributions of the microscopic mechanisms have been studied in the case where momentum relaxation occurs on the edges of the channel and the field acts on electrons over the entire area of the structure. At the same time, the cases of ballistic transport, when l_ee ≫ w (and interparticle collisions practically do not play a role), and hydrodynamic transport, when l_ee ≪ w ≪ l, were considered separately. The purpose of this article is to develop the theory of the transition between the hydrodynamic and diffusive regimes of electron transport in the anomalous Hall effect. Here we consider an electron gas with a parabolic spectrum and allow an arbitrary relation between l and l_ee, while assuming that l_ee ≪ w, i.e. that electron-electron collisions are very efficient. In Sec. <ref> we present a model of the anomalous Hall effect within the framework of the spin density matrix method and give the general form of the kinetic equation. Section <ref> discusses the normal Hall effect at the transition from the diffusive regime to the hydrodynamic one. Section <ref> is devoted to the anomalous Hall effect; it contains the main results of the work. The results are summarized in Sec. <ref>.

§ MODEL
The geometry of the system is presented in Fig. <ref>. An external electric field is applied along the axis of the channel with a quasi-two-dimensional electron gas, 𝐄∥ y (the channel is assumed to be infinite in this direction, so electric current can flow along y); the channel width along x is equal to w, and the external magnetic field is applied perpendicular to the plane of the two-dimensional electron gas, 𝐁∥ z. Scattering on the channel edges is assumed to be diffusive. We describe electron transport in the channel in terms of the kinetic equation for the spin density matrix
ρ̂_𝐩 = f_𝐩 Î + 𝐬_𝐩·σ̂,
where 𝐩 is the electron momentum, Î is the 2×2 identity matrix, σ̂=(σ̂_x,σ̂_y,σ̂_z) is the vector of Pauli matrices, f_𝐩 = Tr{ρ̂_𝐩/2} is the spin-averaged electron distribution function, and 𝐬_𝐩 = Tr{ρ̂_𝐩σ̂/2} is the spin distribution function.
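As a quick consistency check (added here; it uses only the standard identities Tr Î = 2, Tr σ̂_i = 0, and Tr σ̂_i σ̂_j = 2δ_ij), taking the trace of the decomposition above, and of the same decomposition multiplied by σ̂, gives
Tr ρ̂_𝐩 = 2 f_𝐩,  Tr (ρ̂_𝐩 σ̂) = 2 𝐬_𝐩,
so f_𝐩 and 𝐬_𝐩 are recovered from ρ̂_𝐩 exactly as in the definitions just given.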
The density matrix satisfies the kinetic equation <cit.>
𝐯̂_𝐩·∂ρ̂_𝐩/∂𝐫 + e(𝐄 + 𝐄_H)·∂ρ̂_𝐩/∂𝐩 + (e/c)[𝐯×𝐁]·∂ρ̂_𝐩/∂𝐩 = -(ρ̂_𝐩 - ⟨ρ̂_𝐩⟩)/τ + Q̂_ee{ρ̂_𝐩} + Ĝ_𝐩,
where 𝐯̂_𝐩 is the electron velocity operator (taking into account the anomalous contributions given in Sec. <ref>), 𝐄_H is the Hall field arising due to the redistribution of electrons induced by the Hall effect, e < 0 is the electron charge, c is the speed of light, ⟨ρ̂_𝐩⟩ = (2π)^-1∫_0^2π ρ̂_𝐩 dφ is the density matrix averaged over the angle φ of the vector 𝐩, τ is the electron scattering time due to electron-impurity and electron-phonon collisions (the momentum relaxation time), Q̂_ee{ρ̂_𝐩} is the electron-electron collision integral, and Ĝ_𝐩 is the generation rate of the anomalous Hall current. Here and below, ∂/∂𝐫 and ∂/∂𝐩 denote gradients with respect to the spatial coordinate and the momentum. We neglect Fermi-liquid renormalizations. The kinetic equation (<ref>) should be supplemented with boundary conditions. For diffusive scattering on the channel edges, these conditions have the form
ρ̂_𝐩(± w/2) = const: for p_x>0 at x=-w/2, and for p_x<0 at x=w/2,
where p_x = p cosφ is the x-component of the momentum. The physical sense of the boundary condition (<ref>) is that electrons have an isotropic distribution function after scattering on the edges. We also require that the electron flux through the edges vanishes, ∑_𝐩 𝐯̂_𝐩,x ρ̂_𝐩 = 0. We are interested in the case of a degenerate electron gas, T ≲ ε_F, where ε_F is the Fermi energy and the Boltzmann constant is set equal to unity. Let us introduce the energy-integrated particle and spin distribution functions, which depend only on the angle between 𝐩 and the x axis, namely,
F_φ = 𝒟∫_0^∞ f_𝐩 dε,  𝐒_φ = 𝒟∫_0^∞ 𝐬_𝐩 dε,
where 𝒟 = m/2πħ^2 is the density of states per spin, ε = p^2/2m is the electron spectrum, which we consider to be parabolic, and m is the electron effective mass. Since the Hall field is parallel to the x axis (see Fig. <ref>), we introduce the electrostatic potential Φ(x) according to
𝐄_H = -∂Φ(x)/∂ x 𝐱̂,
where 𝐱̂ is the unit vector in the x direction. This potential can be included in the renormalized distribution function by defining F̃_φ(x) as
F̃_φ(x) = F_φ(x) + e𝒟Φ(x).
Under typical conditions, the electric current caused by the imbalance of charges significantly exceeds the diffusive current caused by the gradient of the chemical potential, therefore <cit.>
𝐄_H ≈ -(1/e𝒟) ∂F̃_φ/∂ x 𝐱̂.
Our goal is to find the Hall field in the linear-𝐄 and linear-𝐁 approximation. Taking the trace of equation (<ref>) and integrating it over energy, taking into account the replacement (<ref>), we obtain the following kinetic equation:
∂/∂ x(v_x F̃_φ + v_a S_z,φ^0) + ω_c ∂F̃_φ/∂φ + (F̃_φ - ⟨F̃_φ⟩)/τ = Q_ee{F̃_φ} + e𝒟 E v_y + G_φ.
Here v_x = v cosφ and v_y = v sinφ are the velocity components, v = √(2ε_F/m) is the Fermi velocity, ω_c = -eB/mc is the cyclotron frequency, v_a ∝ E is the anomalous part of the velocity (see Sec. <ref>), and
S_z,φ^0 ≡ S_z^0 = -(1/2)𝒟 g μ_B B
is the equilibrium spin polarization in a magnetic field, with g the electron g-factor and μ_B the Bohr magneton.

§ NORMAL HALL EFFECT
In this section, to illustrate the general approach, we briefly discuss the kinetic equations for the ordinary and spin distribution functions and their solutions without taking into account anomalous contributions. Here we describe the normal Hall effect caused by the action of the Lorentz force on electrons. We also analyze how the Hall field changes during the transition from the diffusive to the hydrodynamic regime of transport.
§.§ Distribution function and Hall fieldThe kinetic equation (<ref>), neglecting anomalous contributions, takes the form∂/∂ x(v_x F̃_φ) + ω_c∂F̃_φ/∂φ + F̃_φ - F̃_φ/τ =Q_ee{F̃_φ} + e𝒟 Ev_y .To describe electron-electron collisions, we use the relaxation time approximation. Then the integral of interparticle collisions is written as <cit.>Q_ee{F̃_φ} = F̃_φ - F̃_φ- F̃_φ^ccosφ- F̃_φ^ssinφ/τ_ee,whereF̃_φ^c = 1/π∫_0^2πF̃_φcosφdφ, F̃_φ^s = 1/π∫_0^2πF̃_φsinφdφ.Here τ_ee is the effective time of electron-electron collisions. We emphasize that this simplified form of the electron-electron collision integral takes into account the conservation of the number of particles (the zero harmonic of the distribution function harmonic) and momentum (the first harmonic of the distribution function).In our case where l_ee/w ≪ 1, l/l_ee is arbitrary, we can solve the kinetic equation by expanding the integral distribution function and taking into account only angular harmonics up to the second order, at any ratio l/l_ee. Assuming the magnetic field to be weak ω_c τ_ee≪ 1, we will look for a solution to the equation (<ref>) by iterations over the magnetic field. Setting 𝐁=0, we write the distribution function in the form δF̃_φ(x) = δ F_1(x) sinφ + δ F_2(x) sin2φ∝ E,Substituting the expansion (<ref>) and the collision integral (<ref>) into the kinetic equation (<ref>), we obtain the equations on angular harmonics l_1/2∂δ F_2/∂ x+δ F_1=eEl_1 𝒟, l_2/2∂δ F_1/∂ x+δ F_2=0, where l_1,2 = vτ_1,2, and τ_1≡τ, τ_2 = (τ^-1 +τ_ee ^-1)^-1, respectively, are the relaxation times of the first and second angular harmonics. Solving this system taking into account the condition of vanishing current at the boundary,[For l_ee≪ w the condition (<ref>) is reduced to the requirement that the electron velocity at the boundary vanishes.] δ F_1 (x = ± w/2) = 0, we getδ F_1,2 = eE 𝒟 l_1 λ_1,2(x),with functions λ_1(x)= l_1(1 - cosh(2x/√(l_1l_2))/cosh(w/√(l_1l_2)) ), λ_2(x) = √(l_1l_2)sinh(2x/√(l_1l_2))/cosh(w/√(l_1l_2)). In the limit l→∞ or, more precisely, at ll_ee≫ w^2, the hydrodynamic regime of electron transport and the Poiseuille flow are realized, characterized by a parabolic dependence of the electron velocity on the coordinate (Fig. <ref>). In the limit l ≪ w the diffusive regime is realized, and the flow profile becomes flat and does not depend on the coordinate (strictly speaking, the velocity vanishes near the walls on a negligibly small scale ∼ l_2). The transition between these regimes is illustrated in Fig. <ref>, which shows the velocity profile calculated according to (<ref>) and (<ref>) for different ratios l/l_ee. It can be seen that as the mean free path l decreases, the electron velocity decreases, and the spatial distribution of velocities changes qualitatively: from parabolic at l→∞, which corresponds to the Poiseuille flow, to flat at l→ 0, which corresponds to diffusive regime. Now, let us take into account the magnetic field and find the Hall field. We substitute the solution in presence of only the electric field (<ref>) into the Lorentz term with ω_c in (<ref>) and determine the linear-𝐁 contribution to the integral distribution function. The expansion of this magnetic field induced contribution into angular harmonics will be as followsΔF̃_φ(x) = ΔF̃_0(x) + Δ F_1(x) cosφ + Δ F_2(x) cos2φ,and the coefficients for angular harmonics satisfy the equations v/2∂Δ F_1/∂ x = 0,v∂ΔF̃_0/∂ x + v/2∂Δ F_2/∂ x + ω_cδ F_1 + Δ F_1/τ = 0,v/2∂Δ F_1/∂ x + 2ω_cδ F_2 + Δ F_2/τ_2 = 0. 
Solving this set of equations and using the relation (<ref>), we obtain an expression for the normal Hall field:
E_H = E·ω_c τ_2 (λ_1/l_2 - ∂λ_2/∂ x),
where the functions λ_1, λ_2 are introduced in (<ref>). Figure <ref>(a) shows the Hall field E_H(x), calculated using Eq. (<ref>), for various ratios l/l_ee. Panel (b) of Fig. <ref> shows the Hall voltage, obtained by integrating E_H as in (<ref>), for the same values of l/l_ee. In the limiting case of the diffusive regime E_H = ω_c τ_1 E, while in the hydrodynamic regime, when l l_ee ≫ w^2, the Hall field has a nontrivial coordinate dependence, in accordance with Ref. <cit.>.

§.§ Spin distribution function
Now let us determine the spin distribution function s_z,𝐩, which is necessary for the subsequent calculation of the anomalous Hall effect (cf. Ref. <cit.>). As above, to calculate the spin distribution function we neglect the anomalous contributions to the kinetic equation and take into account only the relaxation of the spin to the equilibrium value, Eq. (<ref>). Let us write down the kinetic equation for the spin distribution function integrated over energy, expanding it, similarly to the ordinary distribution function, up to the second angular harmonic:
cosφ ∂S_z,φ/∂ x + S_z^1 sinφ/τ_1^s + S_z^2 sin2φ/τ_2^s = e𝒟 E (S_z^0/N) sinφ,
where S_z^1, S_z^2 are the expansion coefficients of the first and second angular harmonics, respectively, τ_1^s, τ_2^s are their relaxation times, and N is the average electron density of the system. It is important to note that the relaxation times of the spin distribution function depend on the magnetic field. In the case of a low magnetic field, where
|gμ_B B| ≪ T ≪ ε_F,
the relaxation of both the first and second harmonics occurs due to both electron-impurity and electron-electron collisions, with τ_1^s = τ_2^s = (τ^-1+τ_ee^-1)^-1 <cit.>. As we will see later, in this situation the diffusive regime is always realized, since l_ee ≪ w. For moderate magnetic fields, when
T ≪ |gμ_B B| ≪ ε_F,
the relaxation of the first harmonic in electron-electron collisions is suppressed <cit.>, as collisions between electrons with opposite spins become extremely ineffective. In this case, τ_1^s = τ and τ_2^s = (τ^-1+τ_ee^-1)^-1. The solution of Eq. (<ref>) is similar to the determination of δF_1,2 from Eqs. (<ref>). By analogy with Eq. (<ref>) we obtain
S_z^1,2(x) = (S_z^0/N) e E 𝒟 λ_1,2^s(x),
where the functions λ_1,2^s differ from the functions λ_1,2 in (<ref>) by the replacement of the relaxation lengths l_1,2 with the “spin” ones l_1,2^s = v τ_1,2^s. In the case of a low magnetic field (<ref>) the second harmonic (<ref>) is zero everywhere except for narrow regions of width l_2 near the channel edges. These stripes do not make a noticeable contribution to the anomalous Hall effect, so we will neglect them in what follows. The first angular harmonic (<ref>) in this limit does not depend on the coordinate. This result corresponds to the diffusive regime of spin transport, despite the fact that l ≫ l_ee. In moderate magnetic fields (<ref>) both harmonics remain and, generally speaking, depend significantly on the coordinate. In this case, the evolution of the S_z^1 profile with a change in the ratio τ/τ_ee is similar to that shown in Fig. <ref> for the spatial distribution of velocities.

§ ANOMALOUS HALL EFFECT
§.§ Model
It is known that the anomalous Hall effect is caused by the spin-orbit interaction. There are three main mechanisms of the effect <cit.>: the anomalous velocity, shifts of electron wave packets caused by scattering on impurities, and asymmetric electron scattering.
These three mechanisms are accounted for in what follows. Following the works <cit.>, where these mechanisms are discussed in detail, we present expressions for the corresponding contributions to the kinetic equation (<ref>). Let us start with the mechanisms whose contributions do not depend on the transport regime or on the coordinate. The first one is the anomalous velocity, which arises in an electric field for spin-polarized electrons in the presence of a non-zero Berry curvature of the energy bands. It has different signs for electrons with opposite spins and is included in the velocity operator in the kinetic equation (<ref>) as:
𝐯̂_a,B = σ̂_z 𝐯_a,B,  𝐯_a,B = -2ξ e/ħ [ẑ× E],
where ẑ is the unit vector perpendicular to the structure, and the subscript B in 𝐯_a,B indicates that this is the contribution to the anomalous velocity due to the Berry curvature. The parameter ξ characterizes the strength of the spin-orbit coupling <cit.>. The corresponding contribution to the anomalous velocity in equation (<ref>) can be represented as
v_a,B = 2ξ e E/ħ.
The second contribution that is independent of the coordinate and of the transport regime is the contribution of the anomalous distribution of electrons, which arises in scattering on impurities or phonons when the scattering asymmetry induced by the shifts of wave packets is taken into account. The anomalous distribution effect contributes to the generation term in the kinetic equation (<ref>) as
G_φ,adist = G_adist cosφ,
where
G_adist = -(1+ν) (2ξ e/lħ) E S_z^0,
and ν is a parameter that depends on the scattering mechanism <cit.>. Shifts of wave packets during scattering also lead to a contribution to the anomalous velocity, called the side-jump accumulation. The corresponding anomalous contribution to the velocity in (<ref>) depends on the coordinate,
v_a,sj(x) = -(1+ν) (λ_1^s(x)/l) (ξ e/ħ) E,
where the function λ_1^s(x) describes the spatial profile of the first angular harmonic of the spin distribution function, see Eqs. (<ref>) and (<ref>). The last mechanism of the anomalous Hall effect is associated with asymmetric scattering (skew scattering) on impurities. It makes the following contribution to the generation term in Eq. (<ref>):
G_φ,sk(x) = G_sk cosφ,
where
G_sk = (S_imp τ^2/ħ) (λ_1^s(x)/l) (ξ e/lħ) E N S_z^0/2,
and the coefficient S_imp determines the degree of the scattering asymmetry. The expression for S_imp according to <cit.> (see also <cit.>) includes two contributions [The expression (<ref>) corrects a typo in formula (26) of <cit.>.]
S_imp = 2π U_v/τ + 8νħ/Nτ^2,
where, in the case of scattering by short-range defects, U_v is the Fourier transform (power) of the potential of a single defect. The first term in (<ref>) corresponds to the third-order contribution in the defect potential, and the second term corresponds to the contribution of two-impurity coherent scattering <cit.>. We emphasize that electron-electron collisions conserve the total momentum of the colliding pair, so they cannot contribute to the anomalous Hall effect, but they can lead to the generation of spin currents. Since in the linear-𝐁 approximation the normal and anomalous contributions to the Hall effect are additive, the anomalous part of the distribution function (responsible for the anomalous Hall effect) satisfies the following equation:
∂/∂ x[v_x F̃_φ + (v_a,B + v_a,sj) S_z,φ^0] + (F̃_φ - ⟨F̃_φ⟩)/τ = Q_ee{F̃_φ} + (G_adist + G_sk) cosφ.
This equation is obtained from (<ref>) by eliminating the contributions containing the external electric field e𝒟 E v_y and the Lorentz force ω_c ∂F̃_φ/∂φ. As in Sec.
<ref>, the distribution function can be expanded into three angular harmonics:
F̃_φ = ⟨F̃_φ⟩ + F_1 cosφ + F_2 cos2φ.
The coefficients of the angular harmonics satisfy the system of equations [cf. Eqs. (<ref>)]
∂/∂ x(v F_1/2 + v_a S_z^0) = 0,
∂/∂ x[v(⟨F̃_φ⟩ + F_2/2)] = - F_1/τ_1 + G,
∂/∂ x(v F_1/2) + F_2/τ_2 = 0.
Generalizing the solution obtained in <cit.> in the hydrodynamic regime, and taking into account the relation (<ref>), we obtain the expression for the anomalous Hall field:
E_H,a = -(2S_z^0/e v𝒟)(v_a/l_1) + (l_2/2e v𝒟) ∂^2/∂ x^2(v_a S_z^0) - G/(e v𝒟),
where v_a = v_a,B + v_a,sj and G = G_adist + G_sk. Equation (<ref>) is the key result of our work; it generalizes the formulas obtained in <cit.> to the case of an arbitrary relation between l and l_ee.

§.§ Analysis of results
Now we turn to the analysis of the obtained results. We consider the two cases of low and moderate magnetic fields, where the condition (<ref>) or (<ref>) is satisfied, respectively.

§.§.§ Low magnetic field
In the case of low magnetic fields, where Eq. (<ref>) is satisfied, the first harmonic of the spin distribution function relaxes effectively due to interparticle collisions. For the spin distribution function, therefore, the diffusive regime is realized regardless of the relationship between l and l_ee, and the contributions of all mechanisms of the anomalous Hall effect turn out to be coordinate independent. The quantities v_a,sj and G_φ,sk take the form (44a) and (50a) of <cit.>. For l → ∞, Eq. (<ref>) reproduces for the anomalous contributions the formulas (68a), (68b), and (68d) of <cit.>. For an arbitrary l/l_ee, the sum of all contributions is as follows:
E_H,a = -(S_z^0 E/e v𝒟)·(2ξ e/ħ(l+l_ee))(1 - ν + π U_v N l_ee/2vħ).
The anomalous Hall voltage
V_H,a = - ∫_0^x E_H,a(x') dx'
[cf. Eq. (<ref>)] turns out to be a linear function of the coordinate in this case. The coordinate dependences of the individual contributions to the anomalous Hall voltage are shown in panels (a) and (b) of Fig. <ref>, respectively, for two-impurity coherent scattering and for scattering on impurities in the third order in the impurity potential. The sum of the contributions from the other mechanisms to the Hall voltage is shown in Fig. <ref>(c). Depending on the main mechanism of the anomalous Hall effect and on the signs of the parameters ξ, ν, and U_v, the anomalous Hall field, Eq. (<ref>), can be either collinear with or opposite to the normal one, Eq. (<ref>), and it does not change direction when passing from the limiting case l→0 to the limiting case l→∞ (for a fixed l_ee).

§.§.§ Moderate magnetic fields
In the case of moderate magnetic fields, where the condition (<ref>) is satisfied, collisions of electrons with opposite spins are suppressed. In this case, the contributions from v_a,B and G_adist are the same as in the previous case, since they do not depend on the transport regime. The anomalous velocity associated with the side-jump accumulation, v_a,sj, is coordinate dependent; however, the sum of the contributions from the first and second terms in Eq. (<ref>) turns out to be coordinate independent. Finally, for the sum of the contributions from the anomalous velocities and the anomalous distribution in the case of moderate magnetic fields, we have
Ẽ_H,a,B + Ẽ_H,a,sj + Ẽ_H,adist = (4E S_z^0/e v 𝒟)(ξ e/ħ l)ν.
The Hall voltage dependence corresponding to the field in (<ref>) is shown in panel (f) of Fig. <ref>.
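The coordinate dependence that survives in the remaining (skew-scattering) terms, discussed next, enters through the profile λ_1^s(x), which has the same functional form as λ_1(x) of the normal-Hall-effect section with l_1,2 replaced by the spin lengths l_1,2^s. The following small numerical sketch (ours, not part of the original text; the values of w, l, and l_ee are arbitrary illustrative choices) shows how this profile crosses over from an essentially flat shape in the diffusive limit l l_ee ≪ w^2 to a Poiseuille-like parabolic shape in the hydrodynamic limit l l_ee ≫ w^2, with the shape ratio λ_1^s(w/4)/λ_1^s(0) moving from ≈ 1 to ≈ 3/4.

import numpy as np

def lambda1_profile(x, l1, l2, w=1.0):
    # l1 * (1 - cosh(2x/sqrt(l1*l2)) / cosh(w/sqrt(l1*l2))), cf. the normal-Hall-effect section
    lam = np.sqrt(l1 * l2)
    return l1 * (1.0 - np.cosh(2.0 * x / lam) / np.cosh(w / lam))

w, l_ee = 1.0, 0.2                      # channel width and e-e mean free path (arbitrary units)
x = np.linspace(-w / 2, w / 2, 401)
for l in (0.1, 5.0, 50.0):              # impurity mean free path: diffusive -> hydrodynamic
    l2s = 1.0 / (1.0 / l + 1.0 / l_ee)  # l_2^s in moderate fields; l_1^s = l
    prof = lambda1_profile(x, l, l2s, w) / l
    center, quarter = prof[len(x) // 2], prof[3 * len(x) // 4]
    print(f"l*l_ee/w^2 = {l * l_ee / w**2:5.2f}:  lambda(w/4)/lambda(0) = {quarter / center:.2f}")

In the deeply diffusive limit the profile is flat except in narrow boundary layers, which is why the corresponding Hall voltage reduces to a linear function of the coordinate there.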
Note that the contributions of asymmetric scattering associated with both two-impurity coherent scattering and third-order asymmetric scattering on single impurities have a nontrivial coordinate dependence:
Ẽ_H,sk,coh + Ẽ_H,sk,III = -E S_z^0/ev𝒟 · ξ e/ħ l · λ_1^s(x)/l · ν(4 + π U_v N l/vħν),
described by the function λ_1^s(x). The corresponding coordinate dependences of the Hall voltage V_H, presented in Fig. <ref>(d,e), deviate from a linear dependence for l ≳ w^2/l_ee. In the opposite limit they become linear, as expected in the diffusive regime. In the limit l→0 (but within the applicability of the kinetic equation, i.e. for mvl/ħ ≫ 1) the first contribution to (<ref>), from coherent scattering on impurity pairs, reduces the contributions from the anomalous velocities and the anomalous distribution (<ref>). This is a feature of scattering on a short-range potential, see <cit.>. As a result, only the contribution from asymmetric scattering in the third order in the impurity potential remains, which in the deeply diffusive limit l ≪ l_ee ≪ w ceases to depend on l:
Ẽ_H,a^d = E S_z^0/ev𝒟 · ξ eE/ħ l · π U_v N/vħ.
It is interesting to note that, when passing from l→0 to l→∞ for a fixed l_ee and under the condition
π U_v N w^2/8vħ l_ee < 1,
the direction of the Hall field can change as a function of the x coordinate. In this case, the dependence of the voltage V_H,a caused by the anomalous Hall effect on the coordinate becomes non-monotonic, as shown in Fig. <ref>, in a certain range of ratios l/l_ee. This is due to the competition between the contributions of third-order asymmetric scattering (the second term in Eq. (<ref>)) and the anomalous contributions, Eq. (<ref>). We emphasize that in narrow edge strips (marked in gray in Fig. <ref>) the expansion of the distribution function over three angular harmonics turns out to be insufficient. A full analysis of the role of such edge regions is beyond the scope of this article; preliminary estimates show that these narrow stripes do not make a noticeable contribution to the anomalous Hall effect. Thus, the observation of a nonlinear and, moreover, nonmonotonic dependence of the anomalous contribution to the Hall voltage on the coordinate can serve as evidence of a hydrodynamic regime of electron transport with suppressed collisions of electrons with opposite spins (moderate magnetic fields).

§ CONCLUSION
To conclude, we have developed a theory of the anomalous Hall effect for a two-dimensional electron gas in the regime of the transition between the diffusive and hydrodynamic transport regimes. All the main mechanisms of the anomalous Hall effect are taken into account: the anomalous velocity, the effect of the accumulation of wave packet shifts, the contribution of the anomalous distribution, and the contribution from skew scattering both on single impurities and on pairs of impurities. All these mechanisms make, generally speaking, comparable contributions to the anomalous Hall field and voltage. In the case of low magnetic fields, where the Zeeman splitting of the electron spectrum is much smaller than the thermal energy of electrons T, all anomalous contributions lead to a coordinate-independent Hall field and a linear coordinate dependence of the Hall voltage for any ratio between the mean free path for scattering on impurities, l, and that for interelectronic collisions, l_ee. On the contrary, the contribution of asymmetric scattering to the Hall field and voltage has a nontrivial coordinate dependence for l_ee ≲ l and moderate magnetic fields, where the Zeeman splitting exceeds the temperature.
In the case of both low and moderate magnetic fields, the sign of the potential difference at the edges of the sample, due to anomalous contributions, can be different depending on the relationships between the parameters of the system. The effects considered are also preserved in the general case of nonparabolic electron dispersion, but specific expressions for the contributions of various mechanisms to the anomalous Hall effect differ.Experimentally, anomalous contributions to the Hall voltage and field can be identified on top of the normal Hall effect in samples containing magnetic impurities, where the equilibrium spin polarization turns out to be a nonlinear function of the external field, or using the electron paramagnetic resonance method: By applying a weak alternating magnetic field in the channel plane with a frequency |gμ_B B|/ħ, it is possible to depolarize the charge carriers and, thereby, eliminate the anomalous contribution to the Hall effect.§ ACKNOWLEDGEMENTS This work has been supported by the RSF project 22-12-00211. We are grateful to K.K. Grigoryan for valuable discussions.55 fxundefined [1]ifx#1fnum [1]#1firstoftwosecondoftwo fx [1]#1firstoftwosecondoftwonoop [0]secondoftworef[1]@startlink#1@href href[1]#1@endlink anitize@url [0]` 12`$12`&12`#12`1̂2`_12`%12 startlink[1] endlink[0]rl [1]href #1 @bib@innerbibempty[Dyakonov(2017)]dyakonov_book editor M. I. Dyakonov, ed.,@nooptitle Spin physics in semiconductors,edition 2nd ed., Springer Series in Solid-State Sciences 157(publisher Springer International Publishing, year 2017)NoStop [Nagaosa et al.(2010)Nagaosa, Sinova, Onoda, MacDonald, and Ong]RevModPhys.82.1539 author author Naoto Nagaosa, author Jairo Sinova, author Shigeki Onoda, author A. H. MacDonald,and author N. P. Ong, title title Anomalous Hall effect, 10.1103/RevModPhys.82.1539 journal journal Rev. Mod. Phys. volume 82, pages 1539–1592 (year 2010)NoStop[Dyakonov and Perel'(1971a)]dyakonov71 author author M.I. Dyakonov and author V.I Perel', title title Possibility of Orienting Electron Spins with Current, @noopjournal journal JETP Lett. volume 13, pages 657 (year 1971a)NoStop [Dyakonov and Perel'(1971b)]dyakonov71a author author M. I. Dyakonov and author V. I. Perel', title title Current induced spin orientation of electrons in semiconductors, https://www.sciencedirect.com/science/article/pii/0375960171901964 journal journal Phys. Lett. A volume 35A, pages 459 (year 1971b)NoStop [Kato et al.(2004)Kato, Myers, Gossard, and Awschalom]kato04 author author Y. K. Kato, author R. C. Myers, author A. C. Gossard,andauthor D. D. Awschalom,title title Observation of the spin all effect in semiconductors, https://science.sciencemag.org/content/306/5703/1910 journal journal Science volume 306, pages 1910 (year 2004)NoStop [Wunderlich et al.(2005)Wunderlich, Kaestner, Sinova, andJungwirth]wunderlich05 author author J. Wunderlich, author B. Kaestner, author J. Sinova, and author T. Jungwirth,title title Experimental observation of the spin-Hall effect in a two-dimensional spin-orbit coupled semiconductor system, https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.94.047204 journal journal Phys. Rev. Lett. volume 94, pages 47204 (year 2005)NoStop [Xiao et al.(2012)Xiao, Liu, Feng, Xu, andYao]Xiao:2012cr author author Di Xiao, author Gui-Bin Liu, author Wanxiang Feng, author Xiaodong Xu,and author Wang Yao, title title Coupled spin and valley physics in monolayers of MoS_2 and other group-VI dichalcogenides, 10.1103/PhysRevLett.108.196802 journal journal Phys. Rev. 
Lett. volume 108, pages 196802 (year 2012)NoStop [Glazov and Golub(2020a)]2020arXiv200405091G author author M. M. Glazov and author L. E. Golub, title title Valley Hall effect caused by the phonon and photon drag, 10.1103/PhysRevB.102.155302 journal journal Phys. Rev. B volume 102, pages 155302 (year 2020a)NoStop [Hall(1881)]Hall:1881aa author author E. H. Hall, title title XXXVIII. On the new action of magnetism on a permanent electric current, 10.1080/14786448008626936 journal journal The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science volume 5, pages 157 (year 1881)NoStop [Karplus and Luttinger(1954)]PhysRev.95.1154 author author Robert Karplus and author J. M. Luttinger, title title Hall effect in ferromagnetics, 10.1103/PhysRev.95.1154 journal journal Phys. Rev. volume 95, pages 1154–1160 (year 1954)NoStop [Smit(1955)]SMIT1955877 author author J. Smit, title title The spontaneous Hall effect in ferromagnetics I, https://doi.org/10.1016/S0031-8914(55)92596-9 journal journal Physica volume 21, pages 877 – 887 (year 1955)NoStop [Adams and Blount(1959)]Adams:1959aa author author E. N. Adams and author E. I. Blount, title title Energy bands in the presence of an external force field—II: Anomalous velocities,https://doi.org/10.1016/0022-3697(59)90004-6 journal journal Journal of Physics and Chemistry of Solidsvolume 10, pages 286–303 (year 1959)NoStop [Gurevich and Yassievich(1963)]gy61 author author L. E. Gurevich and author I. N. Yassievich, title title Theory of ferromagnetic Hall effect, @noopjournal journal Sov. Phys. Solid. State volume 4, pages 2091 (year 1963)NoStop[Abakumov and Yassievich(1972)]abakumov72 author author V.N. Abakumov and author I.N. Yassievich, title title Anomalous Hall effect for polarized electrons in semiconductors, @noopjournal journal JETP volume 34, pages 1375 (year 1972)NoStop [Nozières, P. and Lewiner, C.(1973)]nozieresAHE author author P. Nozières,and author C. Lewiner, title title A simple theory of the anomalous Hall effect in semiconductors, 10.1051/jphys:019730034010090100 journal journal J. Phys. France volume 34, pages 901–915 (year 1973)NoStop [Sinitsyn et al.(2007)Sinitsyn, MacDonald, Jungwirth, Dugaev, and Sinova]PhysRevB.75.045315 author author N. A. Sinitsyn, author A. H. MacDonald, author T. Jungwirth, author V. K. Dugaev,and author Jairo Sinova, title title Anomalous Hall effect in a two-dimensional Dirac band: The link between the Kubo-Streda formula and the semiclassical Boltzmann equation approach, 10.1103/PhysRevB.75.045315 journal journal Phys. Rev. B volume 75, pages 045315 (year 2007)NoStop [Sinitsyn(2007)]Sinitsyn_2007 author author N. A. Sinitsyn, title title Semiclassical theories of the anomalous Hall effect, 10.1088/0953-8984/20/02/023201 journal journal Journal of Physics: Condensed Matter volume 20,pages 023201 (year 2007)NoStop [Ado et al.(2015)Ado, Dmitriev, Ostrovsky, and Titov]Ado_2015 author author I. A. Ado, author I. A. Dmitriev, author P. M. Ostrovsky,andauthor M. Titov, title title Anomalous Hall effect with massive Dirac fermions, 10.1209/0295-5075/111/37004 journal journal EPL volume 111,pages 37004 (year 2015)NoStop [Keser et al.(2019)Keser, Raimondi, and Culcer]PhysRevLett.123.126603 author author Aydın Cem Keser, author Roberto Raimondi,and author Dimitrie Culcer, title title Sign change in the anomalous Hall effect and strong transport effects in a 2D massive Dirac metal due to spin-charge correlated disorder, 10.1103/PhysRevLett.123.126603 journal journal Phys. Rev. Lett. 
volume 123, pages 126603 (year 2019)NoStop [Belinicher et al.(1982)Belinicher, Ivchenko, and Sturman]belinicher82 author author V. I. Belinicher, author E. L. Ivchenko,and author B. I. Sturman, title title Kinetic theory of the displacement photovoltaic effect in piezoelectrics, http://www.jetp.ac.ru/cgi-bin/e/index/e/56/2/p359?a=list journal journal JETP volume 56,pages 359 (year 1982)NoStop [de Jong and Molenkamp(1995)]PhysRevB.51.13389 author author M. J. M.de Jong and author L. W.Molenkamp, title title Hydrodynamic electron flow in high-mobility wires, 10.1103/PhysRevB.51.13389 journal journal Phys. Rev. B volume 51, pages 13389–13402 (year 1995)NoStop [Bandurin et al.(2016)Bandurin, Torre, Kumar, Ben Shalom, Tomadin, Principi, Auton, Khestanova, Novoselov, Grigorieva, Ponomarenko, Geim, and Polini]Bandurin1055 author author D. A. Bandurin, author I. Torre, author R. Krishna Kumar, author M. Ben Shalom, author A. Tomadin, author A. Principi, author G. H. Auton, author E. Khestanova, author K. S. Novoselov, author I. V. Grigorieva, author L. A. Ponomarenko, author A. K. Geim,and author M. Polini, title title Negative local resistance caused by viscous electron backflow in graphene, 10.1126/science.aad0201 journal journal Science volume 351, pages 1055–1058 (year 2016)NoStop [Crossno et al.(2016)Crossno, Shi, Wang, Liu, Harzheim, Lucas, Sachdev, Kim, Taniguchi, Watanabe, Ohki, and Fong]Crossno:2016aa author author Jesse Crossno, author Jing K. Shi, author Ke Wang, author Xiaomeng Liu, author Achim Harzheim, author Andrew Lucas, author Subir Sachdev, author Philip Kim, author Takashi Taniguchi, author Kenji Watanabe, author Thomas A. Ohki,and author Kin Chung Fong, title title Observation of the Dirac fluid and the breakdown of the Wiedemann-Franz law in graphene, 10.1126/science.aad0343 journal journal Science volume 351, pages 1058–1061 (year 2016)NoStop [Moll et al.(2016)Moll, Kushwaha, Nandi, Schmidt,and Mackenzie]Moll1061 author author Philip J. W.Moll, author PallaviKushwaha, author NabhanilaNandi, author BurkhardSchmidt,and author Andrew P.Mackenzie, title title Evidence for hydrodynamic electron flow in PdCoO_2, 10.1126/science.aac8385 journal journal Science volume 351, pages 1061–1064 (year 2016)NoStop [Alekseev(2016)]PhysRevLett.117.166601 author author P. S. Alekseev, title title Negative magnetoresistance in viscous flow of two-dimensional electrons, 10.1103/PhysRevLett.117.166601 journal journal Phys. Rev. Lett. volume 117, pages 166601 (year 2016)NoStop [Krishna Kumar et al.(2017)Krishna Kumar, Bandurin, Pellegrino, Cao, Principi, Guo, Auton, Ben Shalom, Ponomarenko, Falkovich, Watanabe, Taniguchi, Grigorieva, Levitov, Polini, and Geim]Krishna-Kumar:2017wn author author R. Krishna Kumar, author D. A. Bandurin, author F. M. D. Pellegrino, author Y. Cao, author A. Principi, author H. Guo, author G. H. Auton, author M. Ben Shalom, author L. A. Ponomarenko, author G. Falkovich, author K. Watanabe, author T. Taniguchi, author I. V. Grigorieva, author L. S. Levitov, author M. Polini,and author A. K. Geim, title title Superballistic flow of viscous electron fluid through graphene constrictions, 10.1038/nphys4240 journal journal Nature Physics volume 13, pages 1182–1185 (year 2017)NoStop [Gusev et al.(2018)Gusev, Levin, Levinson, and Bakarov]Gusev:2018tg author author G. M. Gusev, author A. D. Levin, author E. V. Levinson,andauthor A. K. 
Bakarov,title title Viscous electron flow in mesoscopic two-dimensional electron gas, 10.1063/1.5020763 journal journal AIP Advancesvolume 8, pages 025318 (year 2018)NoStop [Pusep et al.(2022)Pusep, Teodoro, Laurindo, Cardozo de Oliveira, Gusev, and Bakarov]PhysRevLett.128.136801 author author Yu. A. Pusep, author M. D. Teodoro, author V. Laurindo, author E. R. Cardozo de Oliveira, author G. M. Gusev,andauthor A. K. Bakarov,title title Diffusion of photoexcited holes in a viscous electron fluid, 10.1103/PhysRevLett.128.136801 journal journal Phys. Rev. Lett. volume 128, pages 136801 (year 2022)NoStop [Gurzhi(1963)]gurzhi63 author author R. N. Gurzhi, title title Minimum of resistance in impurity-free conductors, @noopjournal journal JETP volume 17,pages 521 (year 1963)NoStop [Gurzhi(1968)]Gurzhi_1968 author author R. N. Gurzhi, title title Hydrodynamic effects in solids at low temperatures, 10.1070/pu1968v011n02abeh003815 journal journal Soviet Physics Uspekhi volume 11, pages 255–270 (year 1968)NoStop [Andreev et al.(2011)Andreev, Kivelson, and Spivak]PhysRevLett.106.256804 author author A. V. Andreev, author Steven A. Kivelson,and author B. Spivak, title title Hydrodynamic description of transport in strongly correlated electron systems, 10.1103/PhysRevLett.106.256804 journal journal Phys. Rev. Lett. volume 106, pages 256804 (year 2011)NoStop [Torre et al.(2015)Torre, Tomadin, Geim, and Polini]PhysRevB.92.165433 author author Iacopo Torre, author Andrea Tomadin, author Andre K. Geim,andauthor Marco Polini, title title Nonlocal transport and the hydrodynamic shear viscosity in graphene, 10.1103/PhysRevB.92.165433 journal journal Phys. Rev. B volume 92, pages 165433 (year 2015)NoStop [Levitov and Falkovich(2016)]Levitov:2016aa author author Leonid Levitov and author Gregory Falkovich, title title Electron viscosity, current vortices and negative nonlocal resistance in graphene,10.1038/nphys3667 journal journal Nature Physics volume 12, pages 672–676 (year 2016)NoStop [Scaffidi et al.(2017)Scaffidi, Nandi, Schmidt, Mackenzie, and Moore]PhysRevLett.118.226601 author author Thomas Scaffidi, author Nabhanila Nandi, author Burkhard Schmidt, author Andrew P. Mackenzie,and author Joel E. Moore, title title Hydrodynamic electron flow and Hall viscosity, 10.1103/PhysRevLett.118.226601 journal journal Phys. Rev. Lett. volume 118, pages 226601 (year 2017)NoStop [Apostolov et al.(2019)Apostolov, Pesin, and Levchenko]PhysRevB.100.115401 author author S. S. Apostolov, author D. A. Pesin,and author A. Levchenko, title title Magnetodrag in the hydrodynamic regime: Effects of magnetoplasmon resonance and Hall viscosity, 10.1103/PhysRevB.100.115401 journal journal Phys. Rev. B volume 100, pages 115401 (year 2019)NoStop [Narozhny et al.(2017)Narozhny, Gornyi, Mirlin, andSchmalian]Narozhny:2017vc author author Boris N.Narozhny, author Igor V.Gornyi, author Alexander D.Mirlin,and author JörgSchmalian, title title Hydrodynamic approach to electronic transport in graphene, https://doi.org/10.1002/andp.201700043 journal journal Annalen der Physik volume 529, pages 1700043 (year 2017)NoStop [Narozhny(2022)]Narozhny:2022ud author author Boris N.Narozhny, title title Hydrodynamic approach to two-dimensional electron systems, 10.1007/s40766-022-00036-z journal journal La Rivista del Nuovo Cimento volume 45,pages 661 (year 2022)NoStop [Pesin(2018)]PhysRevLett.121.226601 author author D. A. 
Pesin, title title Two-Particle Collisional Coordinate Shifts and Hydrodynamic Anomalous Hall Effect in Systems without Lorentz Invariance, 10.1103/PhysRevLett.121.226601 journal journal Phys. Rev. Lett. volume 121, pages 226601 (year 2018)NoStop [Funaki et al.(2021)Funaki, Toshio, and Tatara]PhysRevResearch.3.033075 author author Hiroshi Funaki, author Riki Toshio, and author Gen Tatara,title title Vorticity-induced anomalous Hall effect in an electron fluid, 10.1103/PhysRevResearch.3.033075 journal journal Phys. Rev. Res. volume 3, pages 033075 (year 2021)NoStop [Tatara(2021)]PhysRevB.104.184414 author author Gen Tatara, title title Hydrodynamic theory of vorticity-induced spin transport, 10.1103/PhysRevB.104.184414 journal journal Phys. Rev. B volume 104, pages 184414 (year 2021)NoStop [Hasdeo et al.(2021)Hasdeo, Ekström, Idrisov, and Schmidt]PhysRevB.103.125106 author author Eddwi H.Hasdeo, author JohanEkström, author Edvin G.Idrisov,and author Thomas L.Schmidt, title title Electron hydrodynamics of two-dimensional anomalous Hall materials,10.1103/PhysRevB.103.125106 journal journal Phys. Rev. B volume 103, pages 125106 (year 2021)NoStop [Glazov(2022)]Glazov_2021b author author M M Glazov, title title Valley and spin accumulation in ballistic and hydrodynamic channels, 10.1088/2053-1583/ac3e04 journal journal 2D Materials volume 9, pages 015027 (year 2022)NoStop [Afanasiev et al.(2022a)Afanasiev, Alekseev, Danilenko, Greshnov, andSemina]PhysRevB.106.L041407 author author A. N. Afanasiev, author P. S. Alekseev, author A. A. Danilenko, author A. A. Greshnov,and author M. A. Semina, title title Rotational viscosity in spin resonance of hydrodynamic electrons, 10.1103/PhysRevB.106.L041407 journal journal Phys. Rev. B volume 106, pages L041407 (year 2022a)NoStop [Grigoryan et al.(2023)Grigoryan, Zohrabyan, and Glazov]grigoryan2023anomalous author author K. K. Grigoryan, author D. S. Zohrabyan,and author M. M. Glazov, @nooptitle Anomalous Hall effect in ultraclean electronic channels,(year 2023), http://arxiv.org/abs/2309.05401 arXiv:2309.05401 NoStop [Alekseev(2018)]PhysRevB.98.165440 author author P. S. Alekseev, title title Magnetic resonance in a high-frequency flow of a two-dimensional viscous electron fluid, 10.1103/PhysRevB.98.165440 journal journal Phys. Rev. B volume 98,pages 165440 (year 2018)NoStop [Alekseev and Semina(2019)]PhysRevB.100.125419 author author P. S. Alekseev and author M. A. Semina, title title Hall effect in a ballistic flow of two-dimensional interacting particles, 10.1103/PhysRevB.100.125419 journal journal Phys. Rev. B volume 100, pages 125419 (year 2019)NoStop [Alekseev and Semina(2018)]PhysRevB.98.165412 author author P. S. Alekseev and author M. A. Semina, title title Ballistic flow of two-dimensional interacting electrons, 10.1103/PhysRevB.98.165412 journal journal Phys. Rev. B volume 98, pages 165412 (year 2018)NoStop [Alekseev and Dmitriev(2021)]PhysRevB.104.085434 author author Yu. O. Alekseev and author A. P. Dmitriev, title title Giant Hall effect in the ballistic transport of two-dimensional electrons, 10.1103/PhysRevB.104.085434 journal journal Phys. Rev. B volume 104, pages 085434 (year 2021)NoStop [Afanasiev et al.(2022b)Afanasiev, Alekseev, Danilenko, Dmitriev, Greshnov, and Semina]PhysRevB.106.245415 author author A. N. Afanasiev, author P. S. Alekseev, author A. A. Danilenko, author A. P. Dmitriev, author A. A. Greshnov,and author M. A. 
Semina, title title Hall effect in Poiseuille flow of two-dimensional electron fluid, 10.1103/PhysRevB.106.245415 journal journal Phys. Rev. B volume 106, pages 245415 (year 2022b)NoStop[Glazov and Ivchenko(2002)]glazov02 author author M. M. Glazov and author E. L. Ivchenko, title title Precession spin relaxation mechanism caused by frequent electron–electron collisions, @noopjournal journal JETP Letters volume 75, pages 403 (year 2002)NoStop [D'Amico and Vignale(2003)]amico:045307 author author Irene D'Amico and author Giovanni Vignale, title title Spin Coulomb drag in the two-dimensional electron liquid, http://link.aps.org/abstract/PRB/v68/e045307 journal journal Phys. Rev. B volume 68,eid 045307 (year 2003)NoStop [Weng and Wu(2003)]wu03prb author author M. Q. Weng and author M. W. Wu,title title Spin dephasing in n-typequantum wells, @noopjournal journal Phys. Rev. B volume 68,pages 75312 (year 2003)NoStop[Glazov and Ivchenko(2004)]glazov04a author author M. M. Glazov and author E. L. Ivchenko, title title Effect of electron-electron interaction on spin relaxation of charge carriers in semiconductors, @noopjournal journal JETP volume 99, pages 1279 (year 2004)NoStop [Alekseev(2022)]alekseev:2 author author P. S. Alekseev, title title Viscous flow of two-component electron fluid in magnetic field, journal journal Semiconductors volume 56, pages 650 (year 2022)NoStop [Glazov and Golub(2020b)]Glazov2020b author author M. M. Glazov and author L. E. Golub, title title Skew Scattering and Side Jump Drive Exciton Valley Hall Effect in Two-Dimensional Crystals, 10.1103/PhysRevLett.125.157403 journal journal Phys. Rev. Lett. volume 125, pages 157403 (year 2020b)NoStop
http://arxiv.org/abs/2310.17738v1
{ "authors": [ "D. S. Zohrabyan", "M. M. Glazov" ], "categories": [ "cond-mat.mes-hall", "cond-mat.other" ], "primary_category": "cond-mat.mes-hall", "published": "20231026190829", "title": "Diffusive-hydrodynamic transition in the anomalous Hall effect" }
FCLTs for dynamic point processes

Efe Onaran (Coordinated Science Laboratory, University of Illinois Urbana-Champaign), Omer Bobrowski (Viterbi Faculty of Electrical and Computer Engineering, Technion–Israel Institute of Technology, and School of Mathematical Sciences, Queen Mary University of London), and Robert J. Adler (Viterbi Faculty of Electrical and Computer Engineering, Technion–Israel Institute of Technology)

We establish functional limit theorems for local, additive, interaction functions of temporally evolving point processes. The dynamics are those of a spatial Poisson process on the flat torus with points subject to a birth-death mechanism, and which move according to Brownian motion while alive. The results reveal the existence of a phase diagram describing at least three distinct structures for the limiting processes, depending on the extent of the local interactions and the speed of the Brownian motions. The proofs, which identify three different limits, rely heavily on Malliavin-Stein bounds on a representation of the dynamic point process via a distributionally equivalent marked point process.

MSC subject classifications: Primary 60G55, 60F17; secondary 60D05, 60G15.
Keywords: birth-motion-death process, dynamic Boolean model, functional limit theorems, Ornstein-Uhlenbeck process, random geometric graphs, Malliavin-Stein approximation.

§ INTRODUCTION
Our interest lies in functional limit theorems for local functionals defined on dynamic point processes, where the dynamics involve both birth-death and Brownian components. More specifically, at time t=0 we are given a homogeneous Poisson point process of intensity n on the d-dimensional flat torus. Each point has an independent, exponentially distributed lifetime, after which it is removed. In addition, new points are added, uniformly on the torus, according to a Poisson process with rate n. They too have independent, exponential lifetimes and are removed at death. This is the birth-death structure. In addition, each point, while alive, moves on the torus according to an independent Brownian motion with variance σ^2. We denote the finite set of locations of all the points alive at time t ≥ 0 by η_n(t). We denote `interaction functionals' by ξ_r, where ξ_r is a real-valued function on finite subsets of the torus and the `locality parameter' r>0 plays the role of a `maximum interaction distance'. Somewhat more precisely, ξ_r(A) ≡ 0 if the diameter of A is greater than r. A simple example of such a functional would be the number of cliques of fixed size, and of diameter no larger than r, in a geometric graph. Our interest then would be in the time evolution of the number of cliques in such a graph when the nodes are the points in η_n(t). In particular, we are interested in limit behavior as n→∞. It turns out that these limits depend on a delicate balance between the mean number of points, n, and the locality and motion rates. Given an n, let r_n be the locality parameter, and σ_n the speed of the Brownian motions. We will want both r_n → 0 and σ_n → 0 as n→∞ in order to obtain non-trivial limits.
To save on notation, we will drop the subscript on both r_n and σ_n. The delicate balance just mentioned leads to a phase diagram for the pair (r,σ) for which we do not yet have a full description, but we can show that it contains at least three distinct regimes. If σ ≪ r, which we call the `slow regime', the speed of the Brownian motions with respect to the maximum interaction distance is negligible, and we obtain the same limiting process for an appropriately normalized version of f_n(t) := ∑_𝒴⊆η_n(t) ξ_r(𝒴) as in <cit.>, which studied the same model as here but without the Brownian motions. This limit is Gaussian, and is representable as a weighted sum of independent Ornstein-Uhlenbeck processes with different parameters. If σ/r → c ∈ (0,∞), which we call the `moderate regime', we prove that the corresponding limit is a special type of Gaussian process, again representable via a sum, although in this case there is no simple analytic form for the covariances of the summed processes. Finally, in the `fast regime', in which σ ≫ r, it turns out that f_n, normalized, converges to white noise, in the sense that its integral over time converges to Brownian motion. The main tool we use in the proofs is the marked point process representation for the dynamic models introduced in <cit.>. In addition, for finite dimensional convergence, we use results from the Stein normal approximation theory through Malliavin calculus that has been developing over the last decade <cit.>. The normal approximation techniques we use differ from the ones we used in <cit.>, since the functional is no longer local in the strict sense (defined there) due to the movements of the particles. As we show, however, Malliavin-Stein theory is still applicable to our problem through its applications to U-statistics <cit.>, exhibiting yet another way in which this theory is useful in the study of dynamic point processes. Furthermore, the generality of the Malliavin-Stein theory for U-statistics allows us to extend our results in <cit.> to sparser and denser regimes (determined by the choice of r) than the thermodynamic regime. We conclude this section with some comments relating our model to another one which has been studied in some detail, and mentioning a practical use of the local functionals that we consider. Specifically, our model is related to the `dynamic Boolean model' introduced in <cit.>, where the authors assume that a homogeneous Poisson point process in ℝ^d is given at t=0 and, subsequently, each point moves according to a general continuous, stationary process. Concepts such as dynamic percolation, coverage, and detection were developed and studied in <cit.> and in the many papers that were based on this model, including <cit.> and <cit.>. Theoretical results have also found various applications, such as in mobile sensor networks <cit.>. The major difference between the model studied in the current paper and the dynamic Boolean model is, of course, the inclusion of the birth-death dynamics. On a final note to the Introduction, we remark that a motivation for studying the distribution of local interaction functionals of networks (also called `motifs' in the graph mining literature) is that it serves as an important benchmark for detecting anomalies (see <cit.> and references). The remainder of the paper is structured as follows: Section <ref> sets up the required notation and gives the main results of the paper.
Section <ref> relates the process we are studying to a particular marked point process that allows us to use results, e.g., Mecke's formula to write out formidable, but very useful, explicit expressions for various moments. The hard work is in Section <ref>, which, among other things, exploits these expressions to prove the results of Section <ref>.§ NOTATION AND MAIN RESULTSLetbe a d-dimensional flat torus, taken as thequotient space /ℤ^d. Specifically, we will think ofas the cube Q[-1/2,1/2)^d⊂, under the relation 0∼ 1, and endowed with the metric ρ(x,y) min_ν∈ℤ^dx-y+ν,forx,y∈ Q and · the standard Euclidean metric.Let () denote the set of all finite subsets of . Then, as defined in the previous section, η_n(t), theset of points “alive”at time t, is an element of (). We are interested in the statistical properties ofthe additive functional f_n(t)defined on η_n(t) by f_n(t) ∑_⊆η_n(t)ξ_r (),where ξ_r:()→ℝ^+ [0,∞) satisfies ξ_r() = 0 for all ||≠ k for some k≥ 2.The value of k will be fixed throughout this paper, and is therefore suppressed in the notation. In addition, ξ_r is required to satisfy the following assumptionsfor all0<r≤ 1.[Translation and scale invariance] For any ∈() and x∈, define set translation and scalar multiplication inby⊕ x{π( π^-1(y)+π^-1(x)): y∈}, α⊙ {π(απ^-1(y)): y∈},where π:→ is the natural projection induced by the quotient operation and (with some abuse of notation) π^-1: → Q is the corresponding natural inverse. That is, π^-1 is the inverse of π restricted to Q. For any 0<α≤ 1 we assumeξ_r(⊕ x) = ξ_r( ),and ξ_r(0,) =ξ_α r(0,α⊙),where ξ_r(0,) is a shorthand notation for ξ_r({0}∪).Note that, throughout the paper, we will let 0 denote both the origin inand its projection on , since it will be clear from the context to which one we are referring. [Localization] There exists an r-independent constant δ∈ (0,1/2) such that ξ_r() = 0 if the diameter of the set satisfies ()> δ r. [Boundedness] The function ξ_r satisfiesξ_∞sup_∈()sup_0<r≤ 1 |ξ_r()|<∞.[Feasibility]Let 𝔐 beproduct Haar measure on ()^k-1. There exists a set ⊂ ()^k-1 satisfying 𝔐()>0 such that the function ξ_1 satisfies,ξ_1(0,x) >0 for all x∈.Note that here, and throughout the paper, we abuse notation somewhat, and allow ξ_r to be applied to either finite subsets, or k-tuples. The latter will always be shown in bold face.[Almost everywhere continuity] ξ_1(0, x) is 𝔐-almost everywhere continuous.A simple example should suffice to motivate both our assumptions and our results. [Subgraph counts of geometric graphs] Let G(η, r) be a geometric graph built over a finite set η⊂ with distance parameter r; i.e. an edge is placed between any two points in η with distance less than or equal to r. Let 𝒢 be a feasible geometric graph with k vertices on ; i.e.for iid uniform points x_1,…,x_k∈, 𝒢 satisfies [G({x_1,…,x_k}, 1) is graph isomorphic to 𝒢]>0. Define ξ_r() = {G(,rδ/k) is graph isomorphic to 𝒢}.Then ξ_r satisfies Assumptions <ref>–<ref>, and f_n(t) counts the number of subgraphs of G(η_n(t),rδ/k) graph isomorphic to 𝒢.The limit theorems we prove in this paper will involve two different normalizations of the process f_n(t), defined asf̅_n(t) f_n(t) - [f_n(t)] /√(var [f_n(t)])andf̃_n(t) f̅_n(t)/√(2 M_n )whereM_n∫_0^1[f̅_n(0) f̅_n(t)] t. 
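Before turning to the limit theorems, the following minimal sketch (ours, not the authors' code; all parameter values are illustrative) makes the statistic of the example above concrete by taking 𝒢 to be the complete graph on k vertices, so that f_n(t) is simply the number of k-cliques of the geometric graph G(η_n(t), rδ/k) on the flat torus.

import itertools
import numpy as np

def torus_dist(x, y):
    # flat-torus metric on [-1/2, 1/2)^d: minimum over integer shifts of the Euclidean norm
    d = np.abs(np.asarray(x) - np.asarray(y))
    return np.linalg.norm(np.minimum(d, 1.0 - d))

def clique_count(points, r, k=3, delta=0.4):
    # number of k-subsets forming a clique in the geometric graph with radius r*delta/k
    thresh = r * delta / k
    return sum(
        all(torus_dist(a, b) <= thresh for a, b in itertools.combinations(subset, 2))
        for subset in itertools.combinations(points, k)
    )

rng = np.random.default_rng(0)
n, d, r = 100, 2, 0.2                                        # intensity, dimension, locality parameter
points = rng.uniform(-0.5, 0.5, size=(rng.poisson(n), d))    # one snapshot of eta_n(t)
print(clique_count(list(points), r))

For k=3 this counts triangles; the normalized versions f̄_n and f̃_n are then obtained by centering and scaling exactly as in the definitions above.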
In order to state our results succinctly, we use {U_j(t):t≥ 0}, for some positive integer j, to denote the stationary, Gaussian, zero mean, Ornstein-Uhlenbeck (OU) process withcovariance functioncov[U_j(t_1), U_j(t_2)] = e^-j|t_1-t_2|.For a given sequence = (c_1,c_2,…,c_k) we also define the following weighted superpositionU_(t) := ∑_j=1^k c_jU_j(t)of independent OU processes. We will denote the ℓ^2 norm of the vectorby . Next, for a given positive integer j,let ζ_j:ℝ→ (0,1] be apositive semi-definite, even functiondecreasing in [0,∞) with ζ_j(0)=1 and lim_t→∞ζ_j(t)=0, and let β>0. We thendefine 𝒱_j^β to be the zero mean, stationary Gaussian process withcovariance function cov[V_j^β(t_1), V_j^β(t_2)] = e^-j|t_1-t_2|ζ_j(β|t_1-t_2|).For a given = (c_1,c_2,…, c_k), we define theprocessV_^β(t) := c_1U_1(t) + ∑_j=2^k c_jV_j^β(t), a weighted superposition of independent processes {V_j^β(t)}_j=2^k and U_1(t). Note that for both the V_j^β and V_^β, the functions ζ_j implicit in their definitions do not appear explicitly in the notation. Throughout the paper, we use f≲ g to denote that f(n) = O(g(n)), and f≪ g to denote that f(n) = o(g(n)). In addition, ≍ denotes same order; i.e. lim_n→∞ f(n)/g(n)=C for some constant C>0. In particular, if C=1, then we denote this as f(n)≈ g(n).As already mentioned in the Introduction, for the following theorems andthroughout the paper we adopt the conventionthat, despite the fact thatr (the locality parameter) and σ (the speed parameter) are actuallyassumed to be functions of n, we drop the subscript n. Furthermore, we shall always assume thatlim_n→∞ r=lim_n→∞σ = 0. We will also assume throughout thatn^kr^d(k-1)→∞. This assumption ensures a central limit theorem in the static case <cit.>. Finally, we shall use the notation f_nf to denote the convergence of finite dimensional distributions of the stochastic processes f_n to those of f.We can now state our main results.If σ/r→ 0, then{f̅_n(t):t≥ 0}{U_(t):t≥ 0}, for somewith =1.Furthermore, if nr^d → 0, then c_k=1. If nr^d→∞, then c_1=1. If nr^d→γ∈ (0,∞), then c_j>0 for all 1≤ j≤ k. If σ /r →√(β)∈(0,∞), then {f̅_n(t):t≥ 0}{𝒱_^β(t):t≥ 0}, forwith =1. The characterization of the entries ofwith respect to the asymptotics of nr^d given in Theorem <ref> also holds here. Recall that the definition of the process 𝒱_^βinvolves a collection offunctions ζ_jas described prior to (<ref>), and the existence of these functions, along with the properties listed there, is implicit in the above theorem. They are dependent on moment properties of the interaction functionals, and are defined in Section <ref>, at (<ref>). If1≪σ /r ≪ (n^kr^d(k-1))^1/4-ϵ,for some 0<ϵ<1/4, and if nr^d→ 0, nσ^d ≲ 1, and d(k-1)≥ 3, then {∫_0^tf̃_n(s) s:t≥ 0}{B(t):t≥ 0},where B is a standard Brownian motion. Informally, these three theoremscharacterize the relative effects of the birth-death and Browniandynamics on the limiting local functional. In the slow regime, Theorem <ref> states that the impact of the motion is negligible, consistent with the results of <cit.>. In the moderate regime,where σ and r are comparable, both the birth-deathand the motion of the points impact on the limit. The pure OU component (<ref>) in the superposition defining the limit process in this regime comes fromcorrelations between the total numbers of points in the system across different time instances, therefore playing the same roleas it did in the slow regime. 
The other components, reflecting the effect of the Brownian motions, have a covariance function with faster decay than that of the pure OU component. In the fast regime, the influence ofthe birth-death component weakens in all the components but the first. To obtain a meaningful convergence result in this regime, however, we need to assume thatthe locality is strong,in the sense that nr^d→ 0. With the correct normalization, this weakens the correlation over time, resulting in a `white noise' limit. Lastly, it is plausible that the additional asymptotic upper bounds on σ /r and nσ^d in the conditions of Theorem <ref> are not essential, but rather an artefact of our proof techniques. § PRELIMINARIESIn this section we introduce the main tools that we will use in our proofs.§.§ Marked process modelHere we describe a marked Poisson process model that is stochastically equivalent to η_n(t) on any predetermined time interval [0,T]. This will make the notation easier and the proofs more intuitive. In this marked model, which we will denote as η_n,T for some T>0, we are given a homogeneous Poisson point process on the unit cube Q with the rate n(1+T). Each point, x, in the configuration is also given three independent marks (B_x,L_x,Z_x). The birth-time B_x is defined asB_x = Y_xU_x,where Y_x,U_x are independent random variables, [Y_x=0] = 1-[Y_x = 1] = 1/1+ T,and U_x is uniformly distributed on [0,T]. The lifetime L_x follows an exponential distribution with unit mean. Finally, the path mark Z_x∈ C_[0,T] is an independent Brownian motion inwith variance σ^2 in each dimension, satisfying Z_x(0)=0. Here C_[0,T] denotes the set of continuous functions [0,T]→. Note that, equivalently, η_n,T can be described as a Poisson point process on the product space Q×ℝ_+ ×ℝ_+ × C_[0,T] due to the Marking theorem <cit.>. Next, foreach point x(x,(B_x,L_x,Z_x)) of the process η_n,T, defineτ_t(x) := {B_x ≤ t < B_x+L_x}, τ_t() := ∏_x∈τ_t(x),t≥0. Thus τ_t(x) is the indicator function registering whether or not x is alive at time t, and τ_t()registers whether or notall points in the collectionare alive at t.We also use (t){x+Z_x(t)-Z_x(B_x):x∈},to denote the locations of each point inat time t, and the shorthand notationξ_r( (t))ξ_r(π( (t))), where the projection π acts on (t) element-wise. With this notation, the random process that is of core interest to us is therefore η_n,T, and the main candidate for our limit theorems is f_n(t)∑_⊆η_n,Tξ_r( (t))τ_t(). The followingproperties of ξ_r,which play a major rolein our proofs, follow from thenatural assumptions on ξ_r. Translation and Scale Invariance. For any ∈(), x∈, and 0≤ r≤ 1,ξ_r( + x) = ξ_r(π( + x)) = ξ_r(π() ⊕π(x)) = ξ_r(π() ) = ξ_r( )Furthermore, if ∈(Q),ξ_1(0,)=ξ_1(0,π() ) = ξ_r(0,π(r π^-1( π())) ) = ξ_r(0,π(r ) )= ξ_r(0,r ).Feasibility.Assumption <ref> implies the existence of a nonempty set π^-1()⊂ Q^k-1 with positive Lebesgue measure, for which ξ_1(0,y)>0 for all y∈π^-1(), a fact that we will use often in our proofs. With ξ_1(0,y)>0, we abuse the notation between k-tuples and the sets, as explained in Assumption <ref>. Locality.Due to Assumption <ref>, for all 0<r≤1, there existsa set 𝒵⊂ Q^k-1 with positive Lebesgue measure such that, ξ_r(0,y) =0 for all y∈𝒵.In particular, 𝒵⊇ Q∖ B_δ r(0), where B_δ r(ν){x∈: x-ν≤δ r }, ν∈.Continuity. Due to Assumption <ref>, ξ_1(0, y) is Lebesgue almost everywhere continuous on Q^k-1. Two identities related to the marked process construction given below will prove essential in our proofs.Spatial homogeneity. 
Let X be uniformly distributed in the cube Q = [-1/2,1/2)^d⊂ and let Y∈ be a random vectorindependent of X. Then, π(X) has the same distribution as π(X+Y). Stationarity in time. Due to the spatial homogeneity, independence of the marks of each point, and the Markov property of Brownian motion and exponential lifetimes, joint distribution of f_n(t_1) and f_n(t_2) is the same as that of f_n(0) and f_n(t_2-t_1) for all 0≤ t_1≤ t_2≤ T. The next observation will enablecalculation of the moments and the distribution of f_n(t) via those of f_n(t). For any T>0, {f_n(t): 0≤ t≤ T} and {f_n(t): 0≤ t≤ T } have the same finite dimensional distributions. For m≥ 1, take 0≤ t_1<t_2<…< t_m≤ T and ω_1,…, ω_m∈ℝ_+, Define η_n,T to be the subset ofcomposed of the initial locations of the points of η_n(t) that are born in the interval 0< t≤ T. Note that, η_n, T is a Poisson point process onwith rate nT, andtherefore η_n, T∪η_n(0) is also Poisson process on , but with rate n(1+T). (c.f. The superposition theorem for Poisson processes, e.g. Theorem 3.3 in <cit.>). From (<ref>) we can write f_n(t) ∑_⊆η_n(0)∪η_n, Tξ_r ((t))∏_y ∈{y is alive at t}.where (t) is a shorthand notation for the locations of points ofat time t.Note that (t) and {y is alive at t} are independent, and due to the construction of η_n,T, they have the same distributions as π((t)) and τ_t( x) given in (<ref>) and (<ref>), respectively. Therefore, comparing (<ref>) and (<ref>), the following holds[e^-∑_i=1^m ω_i f_n(t_i)| η_n, T∪η_n(0) ] = [e^-∑_i=1^m ω_if_n(t_i)| η_n,T].Equivalence of the joint distributions f_n(t_1),…, f_n(t_m) and f_n(t_1),…,f_n(t_m) follows from the fact that η_n, T∪η_n(0) and η_n,T have the same distributions, and that the distributions of random vectors are determined through their Laplace transforms (see Proposition B.4 in <cit.>).§.§ Counting lemmaThe followinglemmageneralizes common counting techniques for Poisson processes, when the counted objects assume a specific structure of intersections. The following notation is needed for its statement.Let _ℓ denote a collection of natural numbers indexed by the nonempty subsets of [ℓ] = {1,…,ℓ}, i.e. _ℓ = (I_J)_J⊂ [ℓ], J∅. Given a sequence of finite subsets _1,…,_ℓ of a set , suppose that for all nonempty J⊂[ℓ] we have| (⋂_j∈ J_j )∩(⋂_j∉J_j )| = I_J,where _j = ∖_j. In this case we say that _1,…,_ℓ obey the intersection pattern _ℓ, and denote this by (_1,…,_ℓ) ∈_ℓ. In what follows, we will typicallywritefor _ℓ, unless ℓ is explicitly required. Set || := ∑_J I_J= |_1∪⋯∪_ℓ|.Fixing , and given a tuple of points x = (x_1,…,x_||) in ,letΨ_( x)(_1,…,_ℓ)be a splitting of the tuple x into(_1,…,_ℓ)∈, in an arbitrary but fixed manner.The following statement is a generalization of the well-known Mecke's formula for Poisson point processes. For any n>0, let _n be a Poisson point process on the point spacewith intensity measure nμ, where μ is a probability measure on . Let h( ) be a bounded measurable real function with (_1,…,_ℓ) and ℐ an intersection pattern. Then,[∑__1⊆_n⋯∑__ℓ⊆_n h(){∈}]=n^||[h(Ψ_( X)) ] /∏_JI_J!,where X is a tuple of || iid points in , with distribution μ. See the proof of Lemma 3.4 in<cit.> for the special case =, which can be generalized in a straightforward manner. 
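The marked construction above is straightforward to simulate, which can serve as a sanity check on the moment formulas derived in the next section. The following Python sketch is purely illustrative and is not part of the formal argument: it samples the marked process η_n,T, takes a snapshot of the alive points at a fixed time, and evaluates a toy local functional (the number of alive pairs within torus distance r, i.e. the case k=2 with an indicator ξ_r). All parameter values and the specific choice of ξ_r are our own, and only the single-time marginal law is reproduced here, since a fresh displacement is drawn at each snapshot rather than a full Brownian path.

import numpy as np

rng = np.random.default_rng(0)

def sample_marked_process(n, T, d):
    # Number of points is Poisson with mean n*(1+T); locations uniform in Q = [-1/2,1/2)^d.
    N = rng.poisson(n * (1 + T))
    x = rng.uniform(-0.5, 0.5, size=(N, d))
    # Birth time B = Y*U with P(Y=0) = 1/(1+T) (an "initial" point) and U uniform on [0,T].
    Y = (rng.uniform(size=N) > 1.0 / (1 + T)).astype(float)
    B = Y * rng.uniform(0.0, T, size=N)
    L = rng.exponential(1.0, size=N)        # exponential lifetimes with unit mean
    return x, B, L

def alive_positions(x, B, L, t, sigma):
    # Alive indicator tau_t and (marginally correct) torus positions at time t.
    alive = (B <= t) & (t < B + L)
    std = sigma * np.sqrt(np.maximum(t - B, 0.0))
    disp = rng.normal(size=x.shape) * std[:, None]   # Gaussian displacement since birth
    pos = (x + disp + 0.5) % 1.0 - 0.5               # project back onto the torus
    return pos[alive]

def pair_functional(pos, r):
    # Toy local functional (k = 2): number of alive pairs within torus distance r.
    diff = pos[:, None, :] - pos[None, :, :]
    diff = (diff + 0.5) % 1.0 - 0.5
    close = np.linalg.norm(diff, axis=-1) <= r
    return int(np.sum(np.triu(close, k=1)))

n, T, d, sigma, r = 300, 2.0, 2, 0.02, 0.05
x, B, L = sample_marked_process(n, T, d)
print([pair_functional(alive_positions(x, B, L, t, sigma), r) for t in (0.5, 1.0, 1.5)])

Averaging such counts over many independent samples gives a numerical estimate of E[f_n(t)], which can be compared with the closed-form first moment derived below.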
§ PROOFS §.§ Covariance characterizationAs a first step towards provingthe finite dimensional weak convergence of f̅_n(t) and ∫_0^tf̃_n(s) s, wederive expressions for their limiting covariance functions.The lemmas belowcharacterize the limit of the covariance function [f̅_n(t) f̅_n(t+Δ)] in different regimes of r and σ with respect to n, where Δ may also depend on n, but Δ≲ 1. Note that for the proofs of Theorems <ref> and <ref> we need only the special cases of these lemmas when Δ>0 is a constant. We state them in the more generality since we will need them where Δ is also allowed to change with n for the proof of Theorem <ref>.If σ√(Δ)/r → 0 then there exist non-negative constants λ_1,…,λ_k, with ∑_j=1^kλ_j=1, such that, for all t≥ 0,[f̅_n(t) f̅_n(t+Δ)] ≈∑_j=1^k λ_j e^-jΔ.If lim_n→∞ nr^d ∈(0,∞) then the constants λ_1,…, λ_k dependon the function ξ_1.If σ√(Δ)/r →√(β)∈(0,∞) then there exists a set of strictly positive, decreasing functions ζ_j:[0,∞)→ (0,1], 2≤ j≤ k, such that, for all t≥ 0,[f̅_n(t) f̅_n(t+Δ)] ≈λ_1 e^-Δ + ∑_j=2^k λ_j e^-jΔζ_j(β).The functions ζ_2,…, ζ_k depend on ξ_1, and the constants λ_1,…, λ_k are the same as those in Lemma <ref>.If σ√(Δ)/r →∞, then, for all t≥ 0,[f̅_n(t), f̅_n(t+Δ)] ≈∑_j=1^k e^-jΔκ̃_1/j!((k-j)!)^2 (2πΔ)^-d(j-1)/2j^-d/2 (nσ^d)^-j(σ/r)^d/∑_j=1^kκ_j (nr^d)^-jfor positive constants κ̃_1 and κ_j that depend on ξ_1. In the next lemma we give the limiting covariance for the processes ∫_0^t f̃_n(s) s.Under the assumptions of Theorem <ref>, for any t_1,t_2>0, lim_n→∞[∫_0^t_1∫_0^t_2f̃_n(s_1)f̃_n(s_2)s_1s_2] =t_1∧ t_2.where t_1∧ t_2 is the minimum of t_1 and t_2.Before we present the proofs for Lemmas <ref>–<ref>, we calculate the first and second moments of f_n(t), which is related to f̅_n(t) and f̃_n(t) through Observation <ref>. The proof of Lemma <ref> will be presented later as it requires more information regarding the normalization M_n appearing in the definition of f̃_n(t). §.§ Regime independent expressions for the moments Here we derive expressions for the first and the second moments of f_n(t) which will serve as a starting point for the proofs of Lemmas <ref>-<ref> and introduce important notation. We will not make any assumptions on the asymptotic relationships between n, r, and Δ yet, except that r →0. Therefore the expressions found for the moments will be valid for all regimes.Note that, due to stationarity in time (Remark <ref>), and Mecke's formula,[ f_n(t)] = [ f_n(0)] = n^k/k!α(r), where we use α(r) as a shorthand notation forα(r)[ξ_r( X)],and X (X_1,…,X_k) is a k-tuple of iid points uniformly distributed in the cube Q = [-1/2,1/2)^d⊂. Furthermore, we have[ f_n(t)^2]= [ f_n(0)^2]=[ ∑__1, _2⊆η_n,T|_1 ∩_2| = jξ_r(_1(0)) ξ_r(_2 (0) ) τ_0(_1)τ_0(_2)].Using Mecke's formula, and the independent distributions of the marks, we obtain[ f_n(t)^2]= ∑_j=0^k[n(1+T)]^2k-j/j!((k-j)!)^2α_j(r)1/(1+T)^2k-j, whereα_j(r)[ξ_r( X)ξ_r ( X' )],and where X (X_1,…,X_k)and X' (X_1,…, X_j,X_k+1,…, X_2k-j) are both k-tuples of uniform iid pointsin Q, sharing j points in common. Therefore, we obtain[ f_n(t)] =[ f_n(t)^2] - n^2k/(k!)^2[α(r)]^2= ∑_j=1^kn^2k-j/j!((k-j)!)^2α_j(r),for all t>0. 
Furthermore,[ f_n(0) f_n(Δ)]=[∑__1, _2⊆η_n,Tξ_r(_1(0)) ξ_r(_2 (Δ) ) τ_0(_1)τ_Δ(_2)].Counting through the intersection |_1 ∩_2| = j as in the calculation of the variance, and using Mecke's formula, along with the independence of the marks,[ f_n(0) f_n(Δ)]=[∑^k_j=0∑__1, _2⊆η_n,T|_1 ∩_2| = jξ_r(_1(0)) ξ_r(_2 (Δ) ) τ_0(_1)τ_Δ(_2) ]=∑_j=0^k[n(1+ T)]^2k-j/j!((k-j)!)^2p_0^k-j p_Δ^k-j p_0,Δ^j [ξ_r( X) ξ_r( X' + Z_σ^2Δ)],whereZ_σ^2Δ is made up of k independent,d-dimensional,Gaussian vectors, representing the displacements of the points in X' between times 0 and Δ. Thus each vector has zero mean and independent entries with variance σ^2Δ. In addition,p_0[B_x=0] = 1/1+ T, p_Δ [B_x≤Δ < B_x+L_x] = 1/1+ Te^-Δ +T/1+ T∫_0^Δ1/T e^-(Δ-b)db = 1/1+ T, p_0,Δ [B_x=0, Δ < L_x] = e^-Δ/1+ T.Inserting above expressions to (<ref>), we get[ f_n(0) f_n(Δ)]=∑_j=0^kn^2k-j e^-jΔ/j!((k-j)!)^2[ξ_r( X) ξ_r( X' +Z_σ^2Δ)].Note that, for j=0, [ξ_r( X) ξ_r( X' +Z_σ^2Δ)] = [ξ_r( X)] [ξ_r( X' +Z_σ^2Δ)].Furthermore, [ξ_r( X)] =[ξ_r( X' +Z_σ^2Δ)] = α(r),due to the spatial homogeneity (Remark <ref>). Thus, the j=0 term in the covariance cancels out to give[ f_n(0),f_n(Δ)]= [ f_n(0) f_n(Δ)] - ([ f_n(0)])^2= ∑_j=1^kn^2k-j e^-jΔ/j!((k-j)!)^2[ξ_r( X) ξ_r( X' +Z_σ^2Δ)].Note that due to Observation <ref>,[f̅_n(0), f̅_n(Δ)] = [ f_n(0),f_n(Δ)]/[ f_n(0)]= ∑_j=1^k e^-jΔ/j!((k-j)!)^2 n^-j[ξ_r( X) ξ_r( X' +Z_σ^2Δ)]/∑_j=1^k1/j!((k-j)!)^2 n^-jα_j(r). Clearly, then→∞ limit of this covariance depends on the asymptotic behavior of α_j(r) and [ξ_r( X) ξ_r( X' +Z_σ^2Δ)]. Firstly, consider α_j(r). Note thatα_j(r) =∫_Q^2k-jξ_r( x_1, …,x_k) ξ_r( x_1, …, x_j,x_k+1,…,x_2k-j) ∏_i=1^2k-j x_i.Making the change of variables x_i → x_1+y_i for 2≤ i ≤ 2k-j, and using the translation invariance of ξ_r, Remark <ref>, we getα_j(r) = ∫_Q x_1 ∫_(Q-x_1)^2k-j-1ξ_r(0,y) ξ_r(0, y') ∏_i=2^2k-j y_i,where y (y_2,… ,y_k) and y' (y_2,…,y_j,y_k+1,…,y_2k-j).Note that ξ_r(0,y) is a periodic function of y with period 1 due to its definition (<ref>). Thus, we obtainα_j(r) =∫_Q^2k-j-1ξ_r(0,y) ξ_r(0, y') ∏_i=2^2k-j y_i.Due to Remark <ref>, for r small enough, namely r<1/2δ,α_j(r) = ∫_ B_δ r^2k-j-1ξ_r(0,y) ξ_r(0,y') ∏_i=2^2k-j y_i,where we use B_δ r as a shorthand notation for B_δ r(0). Taking the change of variables y→ r y and y'→ ry', α_j(r) = r^d(2k-j-1)∫_ B_δ^2k-j-1ξ_r(0,ry) ξ_r(0,ry') ∏_i=2^2k-j y_i. Using Remark <ref>, that is the scale invariance of ξ_r, for y,y'⊂ B_δ⊂ Q, we conclude thatα_j(r) = r^d(2k-j-1)κ̃_j,whereκ̃_j∫_B_δ^2k-j-1ξ_1(0,y) ξ_1(0,y') ∏_i=2^2k-j y_i >0, which is due to Remark <ref>. Let us now focus on the asymptotics of the correlation termθ_j(r)[ξ_r( X) ξ_r( X' +Z_σ^2Δ)].Using the spatial homogeneity (Remark <ref>),θ_j(r) =∫_()^j x' ∫_Q^2k-j xx”ξ_r(x)ξ_r(x',x”)∏_i=1^jφ(x'_i- x_i),where x (x_1,… ,x_k), x' (x'_1,… ,x'_j), x” (x”_1,…,x”_k-j), and φ:→ℝ^+ is the symmetric Gaussian density,φ( z)1/(2πΔ)^d/2σ^d e^-z^2/2σ^2Δ.Note that θ_j(r) depends on σ^2Δ through φ. 
Now, we make the following change of variables in (<ref>),x_1 → u',x'_1 → u'+ u, x_i→ u'+z_i-1and x'_i → u'+u+z'_i-1for 2≤ i≤ j,x_ℓ → u'+ y_ℓ-jand x”_ℓ-j→ u'+ u + y'_ℓ-jfor j+1≤ℓ≤ k,and, using the translation invariance of ξ_r we get,θ_j(r) =∫_Q u'∫_ -u' u ∫_(-u-u')^j-1 z' ∫_(Q-u-u')^k-j y' ∫_(Q-u')^k-1 yz ×ξ_r(0, z,y) ξ_r(0,z',y') φ(u) ∏_i=1^j-1φ(z'_i-z_i+u ) ,where y(y_1,…, y_k-j),z(z_1,…, z_j-1),y'(y'_1,…, y'_k-j),z'(z'_1,…, z'_j-1).Due to the periodicity of ξ_r, and using the shorthand notationφ( z,u)φ(u) ∏_i=1^j-1φ(z_i+u),we rewrite (<ref>) as follows:θ_j(r) =∫_()^j uz'∫_Q^2k-j-1 yzy' ξ_r(0, z,y) ξ_r(0,z',y')φ( z'- z,u). We will derive explicit formulae for θ_j(r) in the different regimes in the proofs that follow.§.§ Covariance for the slow regimeWe prove Lemma <ref> here using the notation and identities established in the previous section.When σ√(Δ)/r → 0, we make the following change of variables in (<ref>),u →σ√(Δ) ufollowed byz'→σ√(Δ) v - σ√(Δ)u + z,to obtainθ_j(r) = 1/(2π)^dj/2∫_()^j uv∫_Q^2k-j-1 yzy'ξ_r(0, z, y) ξ_r(0,σ√(Δ) (v -u) + z,y') ×exp( -v^2/2) exp( - u^2/2),where v (v_1,…, v_j-1), andexp( -v^2/2) exp( - 1/2∑_i=1^j-1 v_i^2).Due to the locality of ξ_r, the integration over Q in the second integral in (<ref>) can be restricted to B_δ r, and, with the change of variablesy→ r y,y'→ r y',z→ r z, we can writeθ_j(r)=r^d(2k-j-1)/(2π)^dj/2∫_()^j uv∫_B_δ^2k-j-1 yzy' ξ_r(0,r z,r y)×ξ_r(0,σ√(Δ) ( v -u) +r z, r y' ) exp( -v^2/2) exp( - u^2/2).Note that the integrand in (<ref>) isbounded above byξ_∞^2 { y, y',z∈ B_δ}exp( -v^2/2) exp( - u^2/2),which is integrable over ()^j× B_δ^2k-j-1. Using the Dominated Convergence Theorem (DCT), this is enough to conclude that θ_j(r)≍ r^d(2k-j-1). However, in order to find the exact constant to which θ_j(r)/ r^d(2k-j-1) converges, we need to use the scale invariance of ξ_r (<ref>). As (<ref>) is valid only in Q, we separate the integral in (<ref>) into two parts, as follows: ∫_B_(r/σ)^1/2^j uv∫_B_δ^2k-j-1 yzy' ξ_r(0,r z,r y) ξ_r(0,σ√(Δ) ( v -u) +r z, r y' )e^- v^2 + u^2/2 + ∫_()^j ∖ B_(r/σ)^1/2^j uv∫_B_δ^2k-j-1 yzy' ξ_r(0,r z,r y) ξ_r(0,σ√(Δ) ( v -u) +r z, r y' ) e^- v^2 + u^2/2.For the first integral in (<ref>) we note that for all v ∈ B_(r/σ)^1/2^j-1, u ∈ B_(r/σ)^1/2, z ∈ B_δ^2k-j-1, all the points that compose z,y,y', and σ√(Δ)/r ( v -u) + z are inside Q. Therefore, using the scale invariance of ξ_r, we obtain that the first term in (<ref>) is equal to∫_B_(r/σ)^1/2^j uv∫_B_δ^2k-j-1 yzy' ξ_1(0, z, y) ξ_1(0, σ√(Δ)/r ( v -u) + z,y') e^- v^2 + u^2/2. Using DCT for the expression above and since σ√(Δ)/r → 0, we conclude that 3 lim_n→∞∫_B_(r/σ)^1/2^j uv∫_B_δ^2k-j-1 yzy' ξ_r(0,r z,r y) ξ_r(0,σ√(Δ) ( v -u) +r z, r y' ) e^- v^2 + u^2/2=lim_n→∞∫_()^j uv∫_B_δ^2k-j-1 yzy' ξ_1(0, z, y) lim_n→∞ξ_1(0, σ√(Δ)/r ( v -u) + z,y')×{ v∈ B_(r/σ)^1/2^j}{ u∈ B_(r/σ)^1/2} e^- v^2 + u^2/2=∫_()^j uv∫_B_δ^2k-j-1 yzy' ξ_1(0, z, y) ξ_1(0, z,y')e^- v^2 + u^2/2= κ̃_j(2π)^dj/2,with κ̃_j as defined in (<ref>). Note that in the last step above we used Remark <ref>. Now, we observe that the second term of (<ref>) can be bounded above by ∫_()^j ∖ B_(r/σ)^1/2^j uv∫_B_δ^2k-j-1 yzy' ξ_∞^2e^- v^2 + u^2/2≤ K √(σ/r) e^-r/σ d→ 0,for some constant K>0, using the classical Mills inequality on the tail of the Gaussian density <cit.>. 
With these, we conclude that θ_j(r) /r^d(2k-j-1)≈κ̃_j.Definingκ_jκ̃_j/j!((k-j)!)^2 ,we obtain,[f̅_n(0) f̅_n(Δ)] ≈∑_j=1^kκ_j e^-jΔn^-j r^d(2k-j-1)/∑_j=1^kκ_j n^-j r^d(2k-j-1) = ∑_j=1^kκ_j e^-jΔ(nr^d)^-j/∑_j=1^kκ_j (nr^d)^-j≈e^-kΔ nr^d → 0, ∑_j=1^kλ_j e^-jΔ nr^d →γ>0,e^-Δ nr^d →∞,whereλ_jκ_jγ^-j/∑_ℓ=1^kκ_ℓγ^-ℓ.This concludes the proof of Lemma <ref>. §.§ Covariance for the moderate and fast regimesWe first give a lemma for the asymptotics of θ_j(r) that is valid in both regimes, which willeventually make the proofs of Lemmas <ref> and <ref> somewhat more concise.Assume 1≲σ√(Δ)/r, and take some R→ 0 as n→∞. Then θ_j(r)=ϑ_j(r)+ϑ_j'(r), whereϑ_j(r) ≈ r^d(2k-1)/(√(2π)σ)^djΔ^dj/2∫_B_δ^2k-2 yzy' z' ξ_1(0,z,y) ξ_1(0,z',y')× ∫_B_2δ +R/rexp( -r^2/2Δσ^2(u^2 + ∑_i=1^j-1z'_i+u-z_i^2)) u,andϑ_j'(r) ≲ r^d(2k-2)σ^-djΔ^-dj/2exp( -jR^2 /2Δσ^2). From (<ref>), and the locality of ξ_r, we obtainθ_j(r) = ∫_(⋃_ϱ∈ℤ^d B_δ r (ϱ) )^j-1 z'∫_ B_δ r^2k-j-1 yzy'ξ_r(0, z, y) ξ_r(0,z',y')∫_φ( z'- z,u) u.Due to the periodicity of ξ_r,θ_j(r) = ∫_ B_δ r^2k-2 yzy' z' ξ_r(0, z, y) ξ_r(0,z',y')∫_∑_ϱ∈ℤ^d(j-1)φ( z'+ϱ- z,u) u .We define the following shorthand notationB̃_R+2δ r⋃_ν∈ℤ^d B_R+2δ r(ν),for some R such that lim_n→∞R= 0, andseparate the domain of integration of u into two parts,2θ_j(r) =∫_B_δ r^2k-2 yzy' z' ξ_r(0, z,y)ξ_r(0,z',y')∫_B̃_R+2δ r∑_ϱ∈ℤ^d(j-1)φ ( z'+ϱ- z,u) u+ ∫_B_δ r^2k-2 yzy' z' ξ_r(0,z,y)ξ_r(0,z',y') ∫_∖B̃_R+2δ r∑_ϱ∈ℤ^d(j-1)φ ( z'+ϱ- z,u) u = ∫_B_δ r^2k-2 yzy' z' ξ_r(0, z,y)ξ_r(0,z',y')×∫_B_R+2δ r∑_ϱ∈ℤ^d(j-1)∑_ν∈ℤ^dφ ( z'+ϱ- z,u+ν) u + ∫_B_δ r^2k-2 yzy' z' ξ_r(0,z,y)ξ_r(0,z',y')×∫_Q∖ B_R+2δ r∑_ϱ∈ℤ^d(j-1)∑_ν∈ℤ^dφ ( z'+ϱ- z,u+ν) u.Introducing the shorthand notation,φ̃(u)∑_ν∈ℤ^dφ(u+ν),we observe that2∑_ϱ∈ℤ^d(j-1) ∑_ν∈ℤ^dφ( z'+ϱ- z,u+ν) = ∑_ν∈ℤ^dφ( u+ν) ∑_ϱ∈ℤ^d(j-1)∏_i=1^j-1φ( z'_i+ϱ_i + u + ν-z_i) = ∑_ν∈ℤ^dφ( u+ν) ∏_i=1^j-1φ̃( z'_i+u-z_i) ,and, therefore,2θ_j(r)= ∫_B_δ r^2k-2 yzy' z' ξ_r(0, z,y)ξ_r(0,z',y')∫_B_R+2δ rφ̃( u) ∏_i=1^j-1φ̃( z'_i+u-z_i) u+ ∫_B_δ r^2k-2 yzy' z' ξ_r(0,z,y)ξ_r(0,z',y')∫_Q∖ B_R+2δ rφ̃( u) ∏_i=1^j-1φ̃( z'_i+u-z_i)u.Next, we consider the asymptotic behavior of φ̃ in each integral in (<ref>). Note that, for any u∈,φ̃( u)=∑_ν∈ℤ^d1/(√(2π)σ)^dΔ^d/2exp( -u+ν^2/2Δσ^2)= 1/(√(2π)σ)^dΔ^d/2[ exp( -u^2/2Δσ^2)+ ∑_ℓ=1^∞∑_ν∈{∑_i=1^d |ν_i|= ℓ}exp( -u+ν^2/2Δσ^2) ]. For all ν∈ such that ∑_i=1^d |ν_i|=ℓ, u≤1/4d, and ℓ≥ 1u+ν^2≥ν^2 + u^2 -2uν ≥ℓ^2/d + u^2 - 2uℓ≥ℓ^2/d + u^2 - ℓ/2d≥ℓ/2d + u^2.Therefore,∑_ℓ=1^∞∑_ν∈{∑_i=1^d |ν_i|= ℓ} exp( -u+ν^2/2Δσ^2) ≤ ∑_ℓ=1^∞ (2ℓ+1)^dexp( -ℓ/4 d Δσ^2 - u^2/2Δσ^2) ≤3^d exp( - u^2/2Δσ^2) ∑_ℓ=1^∞ℓ^d exp( -ℓ/4 d Δσ^2).Note that, ∑_ℓ=1^∞ℓ^d exp( -ℓ/4 d Δσ^2)=∑_ℓ=1^d^2ℓ^d exp( -ℓ/4 d Δσ^2) + ∑_ℓ=d^2+1^∞ℓ^dexp( -ℓ/4 d Δσ^2).The first sum on the right hand side above converges to 0 as σ goes to 0, and sinced logℓ≤√(ℓ)logℓ≤ℓfor all ℓ> d^2, the second sum is bounded above by∑_ℓ=1^∞ e^ℓ e^ -ℓ/4 d Δσ^2,which is a geometric sum that also converges to 0. Therefore,φ̃( u) ≈1/(√(2π)σ)^dΔ^d/2exp( -u^2/2Δσ^2, )for all u≤1/4d. Note that in the domain of integration for the first term in (<ref>) u ≤ R+2δ r =o(1), z'_i+u-z_i ≤ R+4δ r = o(1).Therefore, we can apply (<ref>) to the first term in (<ref>), which we writeasϑ_j(r) ≈∫_B_δ r^2k-2 yzy' z'ξ_r(0,z,y) ξ_r(0,z',y') 1/(√(2π)σ)^djΔ^dj/2×∫_B_R + 2δ rexp( -∑_i=1^j-1z'_i+u-z_i^2/2Δσ^2 - u^2/2Δσ^2) u .We make the change of variables y→ r y,y'→ r y',z→ r z,z'→ r z' and u→ ru, and use the scale invariance property of ξ_r to obtain (<ref>).Now consider the second term in (<ref>), which will be denoted by ϑ_j'(r). 
The following upper bound holds for any z',z∈ B_δ r and u∈ Q∖ B_R+2δ r:max(φ̃(u), φ̃(z'+u - z)) ≤φ̃( w)where w∈ is an arbitrary point with w=R. Therefore,ϑ_j'(r)≤ [φ̃(w)]^j∫_B_δ r^2k-2ξ_r(0,z,y)ξ_r(0,z',y')yzy' z'.Using (<ref>), we can writeϑ_j'(r)≲1/(√(2π)σ)^djΔ^dj/2exp( -jR^2 /2Δσ^2)(∫_B_δ r^k-1ξ_r(0, y)y )^2.With the change of variables, y→ r y, we obtain (<ref>). Next, we make specific assumptions on R to prove Lemmas <ref> and <ref> separately. In the moderate regime where σ√(Δ)/r →√(β)∈(0,∞), we take Rσ√(Δ r^-ϵlog r^-1),for a fixed 0<ϵ<1. Note that R→ 0 as required by Lemma <ref>, due to the definition of R and the regime we work in. Furthermore,u^2 + ∑_i=1^j-1z'_i+u-z_i^2=u^2 + ∑_i=1^j-1[u^2 - 2u^⊤ (z'_i-z_i) + z'_i-z_i^2 ]=j(u^2 - 2u^⊤∑_i=1^j-1z'_i-z_i/j)+ ∑_i=1^j-1z'_i-z_i^2 =j u- ∑_i=1^j-1z'_i-z_i/j^2 +∑_i=1^j-1z'_i-z_i^2 -j ∑_i=1^j-1z'_i-z_i/j^2.Therefore, using R/r →∞,lim_n→∞∫_B_2δ +R/rexp( -r^2/2Δσ^2(u^2 + ∑_i=1^j-1z'_i+u-z_i^2)) u =lim_n→∞∫_B_2δ +R/rexp( -jr^2/2Δσ^2u- ∑_i=1^j-1z'_i-z_i/j^2 ) u×lim_n→∞exp( -r^2/2Δσ^2(∑_i=1^j-1z'_i-z_i^2 -j ∑_i=1^j-1z'_i-z_i/j^2))=(2πβ/j)^d/2exp( -1/2β(∑_i=1^j-1z'_i-z_i^2 -j ∑_i=1^j-1z'_i-z_i/j^2)).Thus, from (<ref>),ϑ_j(r) · r^-d(2k-j-1)≈ζ̃_j(β),where ζ̃_j(β)(2πβ)^-d(j-1)/2 j^-d/2∫_B_δ^2k-2ξ_1(0,z,y) ξ_1(0,z',y') ×exp( -1/2β(∑_i=1^j-1z'_i-z_i^2 -j ∑_i=1^j-1z'_i-z_i/j^2))yzy' z'.Note that for j=1, ζ̃_j(β) = κ̃_j (see (<ref>)). For j≥ 2, we can write ζ̃_j(β) more concisely asζ̃_j(β)=∫_B_δ^2k-2ξ_1(0,z,y) ξ_1(0,z',y') × ∏_i=1^d 1/(2π)^(j-1)/2√(β^j-1j)exp(-1/2∑_ℓ, l=1^j-1 (z_ℓ,i - z'_ℓ,i) M^j,β_ℓ,l (z_l,i - z'_l,i) ) yzy' z',where z_ℓ,i∈ℝ denotes the i-th entry of z_ℓ∈, and M^j,β_ℓ,lj-1/jβ ifℓ=l-1/jβ ifℓ≠ l.The (invertible) matrix M^j,β can also be written asM^j,β = 1/jβ (jI_j-1 - J_j-1 ),where I_j-1 denotes the (j-1)×(j-1) identity matrix, and J_j-1 denotes the matrix with the same dimensions but with all entries1. Note that|M^j,β| = 1/(β j)^j-1 j^j-2 = 1/β^j-1j,an observation that will be useful later on. Due to the feasibility of ξ_1, ζ̃_j(β) is strictly positive for all β>0, which gives usϑ_j(r) ≍r^d(2k-j-1).With thedefinition of R given in (<ref>), it follows thatexp( -jR^2 /2Δσ^2) = exp( -j/2 r^-ϵlog r^-1)≪ r^d.Therefore ϑ_j'(r) ≪ r^d(2k-1)σ^-djΔ^-dj/2,so that ϑ_j'(r)≪ϑ_j(r), which, due to (<ref>), gives[f̅_n(0) f̅_n(Δ)]≈∑_j=1^k e^-jΔζ̃_j( β)/j!((k-j)!)^2 n^-j r^d(2k-j-1)/∑_j=1^kκ_j n^-j r^d(2k-j-1)≈∑_j=1^k e^-jΔζ_j( β) λ_j,whereζ_j(β)ζ̃_j(β)/κ̃_j.Recall that κ_j and λ_j were defined in (<ref>) and (<ref>), respectively.Since ζ̃_1(β) = κ̃_1 for all β>0, ζ_1(β)=1.With this, we conclude that if σ√(Δ)/r→√(β)∈(0,∞),[f̅_n(0) f̅_n(Δ)]≈λ_1e^-Δ + ∑_j=2^k e^-jΔζ_j( β) λ_jfor strictly positive functions ζ_j(β). In the fast regime, where r≪σ√(Δ)≪ 1, we will again apply the expansion (<ref>) to (<ref>).Note thatexp( -r^2/2Δσ^2(∑_i=1^j-1z'_i-z_i^2 -j ∑_i=1^j-1z'_i-z_i/j^2)) → 1,and (2πΔ/j)^-d/2(r/σ)^d ∫_B_2δ +R/rexp( -jr^2/2Δσ^2u- ∑_i=1^j-1z'_i-z_i/j^2 ) u =(2πΔ/j)^-d/2(r/σ)^d ∫_B_2δ+R/r(m_ z)exp( -jr^2/2Δσ^2v^2 ) v =(2π)^-d/2∫_exp( -v^2/2) {v-√(j/Δ)r/σ m_ z≤(2δ +R/r) √(j/Δ)r/σ} v,wherem_ z∑_i=1^j-1z'_i-z_i/j∈.Under the choice ofRσ√(Δ)log1/σ√(Δ), note also thatR/r√(j/Δ)r/σ→∞.Therefore (<ref>) converges to 1, which together with (<ref>), establishes the asymptotic behaviour of the second integral in (<ref>), givingϑ_j(r)/r^d(2k-2)(σ√(Δ))^-d(j-1) →(2π)^d(1-j)/2 j^-d/2∫_B_δ^2k-2ξ_1(0,z,y) ξ_1(0,z',y') yzy' z'= (2π)^-d(j-1)/2 j^-d/2κ̃_1.Thus, ϑ_j(r) ≍ r^d(2k-2)(σ√(Δ))^d(1-j). 
For ϑ_j'(r), due to (<ref>) and the choice of R we can write ϑ_j'(r)≪ r^d(2k-2) (σ√(Δ))^-dj(σ√(Δ))^√(log(σ√(Δ))^-1)≪ r^d(2k-2) (σ√(Δ))^d(1-j)≪ϑ_j(r). Therefore, we obtain,[f̅_n(0) f̅_n(Δ)] ≈∑_j=1^k e^-jΔκ̃_1/j!((k-j)!)^2 (2πΔ)^-d(j-1)/2j^-d/2 n^-j r^d(2k-2)σ^d(1-j)/∑_j=1^kκ_j n^-j r^d(2k-j-1).Rearranging the terms conclude the proof. §.§ Covariance of the integrated process in the fast regimeBefore we present the proof of Lemma <ref> we give some results that concern the asymptotic behavior of the normalization term used to define f̃_n(t), viz.M_n= ∫_0^1[f̅_n(0) f̅_n(t)] t. Calculating exact asmyptotics of this integral is impossible without the explicit characterization of the covariance in the transition regimes, which is out of the scope of this paper. However we can findupper and lower bounds as given in the following proposition.Under the assumptions of Theorem <ref>(r/σ)^2+ϵ≲ M_n ≲(r/σ)^2 - 4/d(k-1)+2for any given ϵ>0.For the lower bound we writeM_n= ∫_0^(r/σ)^2+ϵ[f̅_n(0) f̅_n(t)] t + ∫_(r/σ)^2+ϵ^1[f̅_n(0) f̅_n(t)] t,and due to Lemma <ref> we observe[f̅_n(0) f̅_n(t)] ≥ c, for all 0≤ t≤ (r/σ)^2+ϵ and some constant c>0 arbitrarily close to, but less than, 1. For the upper bound, we writeM_n = ∫_0^(r/σ)^2-ρ[f̅_n(0) f̅_n(t)] t + ∫_(r/σ)^2-ρ^1[f̅_n(0) f̅_n(t)] t,where ρ4/d(k-1)+2.Furthermore, due to Lemma <ref>,[f̅_n(0) f̅_n(t)]≲max_1≤ j≤ k(r/σ)^-d(j-1)(2-ρ)/2 (nσ^d)^-j(σ/r)^d/ (nr^d)^-k,for all t≥ (r/σ)^2-ρ. Note that due to the assumptions of Theorem <ref>, r/σ→ 0 and nσ^d≲ 1, the maximum in the numerator is asymptotically achieved for j=k. Therefore,[f̅_n(0) f̅_n(t)] ≲(r/σ)^-d(k-1)(2-ρ)/2 (nσ^d)^-k(σ/r)^d (nr^d)^k= (r/σ)^ρ d(k-1)/2 = (r/σ)^2-ρ. For 0≤ t≤(r/σ)^2-ρ , we use the constant bound[f̅_n(0) f̅_n(t)]≤ 1, and the upper bound on M_n follows from (<ref>).Under the assumptions of Theorem <ref>, for any t,Δ≥ 0, lim_n→∞[∫_0^t+Δf̃_n(t)f̃_n(s)s] =1/2{t>0} + 1/2{Δ>0}.To prove (<ref>), we first show that the integral and the expectation in (<ref>) are interchangeable, that is,[∫_0^t+Δf̃_n(t)f̃_n(s)s] = ∫_0^t+Δ[f̃_n(t)f̃_n(s)]sfor all n. Note that[|f̃_n(t)f̃_n(s)|]= [|f_n(t)- [f_n(t)]||f_n(s)- [f_n(t)]| ] /2M_n[f_n(t)]≤[f_n(t)f_n(s)] + 3([f_n(t)])^2 /2M_n[f_n(t)] ,for all n and t,s ≥ 0. Using stationarity in time,[f_n(t)f_n(s)] ≤[f_n(t)^2] + [f_n(s)^2]/2 = [f_n(t)^2],which gives[|f̃_n(t)f̃_n(s)|] ≤[f_n(t)] + 4([f_n(t)])^2 /[f_n(t)] 2M_n.For fixed n, [f_n(t)] and [f_n(t)] were given in terms of n before. SinceM_n>0,(<ref>) follows by Fubini.Next, we focus on the right hand side of (<ref>). First assume t,Δ>0, write ut∧Δ/2,and note that∫_0^t+Δ[f̅_n(t)f̅_n(s)]s =∫_0^t-u[f̅_n(t)f̅_n(s)]s + ∫_t-u^t+u[f̅_n(t)f̅_n(s)]s + ∫_t+u^t+Δ[f̅_n(t)f̅_n(s)]s =∫_u^t[f̅_n(0)f̅_n(s)]s + 2 ∫_0^u[f̅_n(0)f̅_n(s)]s+ ∫_u^Δ[f̅_n(0)f̅_n(s)]s .Since σ /r→∞ and 1≲ s for all 0< u≤ s≤ t, we can use Lemma <ref> to obtain,[f̅_n(0)f̅_n(s)] ≤∑_j=1^k e^-js g(n),for some bounded function g(n) satisfying g(n)≲max_1≤ j≤ k (nσ^d)^-j(σ/r)^d/(nr^d)^-k≲(r/σ)^d(k-1)≲ 1,from which we conclude that [f̅_n(0)f̅_n(s)] is bounded by an integrable function. Therefore, using Reverse Fatou,lim sup_n→∞∫_u^t[f̅_n(0)f̅_n(s)] s ≤∫_u^tlim sup_n→∞[f̅_n(0)f̅_n(s)] s≲(r/σ)^d(k-1)for all 0<u<t. Furthermore, from Proposition <ref>,∫_u^t[f̅_n(0)f̅_n(s)]s /M_n≪(r/σ)^d(k-1)-2-0.5≪ 1,since d(k-1)≥ 3. The same asymptotic upper bound holds for the integral ∫_u^Δ[f̅_n(0)f̅_n(s)]s. 
Thus, from (<ref>),lim_n→∞[∫_0^t+Δf̃_n(t)f̃_n(s)s]=lim_n→∞∫_0^t+Δ[f̃_n(t)f̃_n(s)]s = lim_n→∞∫_0^t+Δ[f̅_n(t)f̅_n(s)]s/2 M_n= lim_n→∞∫_0^u[f̅_n(0)f̅_n(s)]s/M_n= 1 - lim_n→∞∫_u^1[f̅_n(0)f̅_n(s)]s/M_n =1where we used (<ref>) in the last step. If t=0 or Δ=0 but t+Δ>0, write ut+Δ, and note that lim_n→∞[∫_0^t+Δf̃_n(t)f̃_n(s)s]=lim_n→∞∫_0^u[f̃_n(t)f̃_n(s)]s = lim_n→∞∫_0^u[f̅_n(0)f̅_n(s)]s/2M_n= 1/2where we again used Fubini in the first step since the upper bound (<ref>) holds for all t,x≥ 0. Therefore (<ref>) follows. Assume t_1≤ t_2 without loss of generality. We write,lim_n→∞[∫_0^t_1∫_0^t_2f̃_n(s_1)f̃_n(s_2)s_2s_2]= lim_n→∞∫_0^t_1∫_0^t_2[f̃_n(s_1)f̃_n(s_2)]s_2s_1=∫_0^t_1lim_n→∞∫_0^t_1[f̃_n(s_1)f̃_n(s_2)]s_2s_1= t_1,where we used Fubini and (<ref>) to interchange the expectation and the integral. The rest of the proof follows from the fact that [f̃_n(s_1)f̃_n(s_2)]≥ 0 for all s_1,s_2≥ 0 and from(<ref>). §.§ Finite dimensional distributionsIn this subsection, we prove that the finite dimensional distributions of the processes of interest to us converge weakly to multivariate Gaussian distributions under the relevant assumptions of Theorems <ref>-<ref>. Suppose that σ /r ≲ 1 and n^kr^d(k-1)→∞. Then the finite dimensional distributions of f̅_n(t) converge to multivariate Gaussian.In proving Lemma <ref> we will use the Cramer-Wold theorem and the marked point process structure we built. Note that, using (<ref>), the linear combination of f_n(t) across different time samples can be written as ∑_i=1^m ω_i f_n(t_i) = ∑_⊆η_n,T∑_i=1^m ω_i ξ_r( (t_i))τ_t_i(),for some coefficients ω_i≠ 0, 1≤ i≤ m, and time instances t_1<⋯<t_m<T. Our proof of Lemma <ref> uses the normal limit theory developed in <cit.> for functions of finite Wiener chaos expansion on the Poisson space. §.§.§ Wiener Chaos and U-statisticsRecent developments combining Malliavin calculus withStein method on the Wiener chaos space led to fascinating normal approximation results which eventually found extensive use in several problems of stochastic geometry. The next statement is the essential component in all these results. (See <cit.> for a proof.) Take a Poisson point measure η inwith intensity measure μ. Denote the associated compensated Poisson measure as η̃η - μ. Every square integrable random variable G with respect to η admits a unique chaos decomposition, G = [G] + ∑_ℓ=1^∞ I_ℓ(g_ℓ) ,where each g_ℓ:^ℓ→ℝ is a square integrable function and I_ℓ(g) denotes the multiple Wiener-Itô integral of order ℓ,viz.I_ℓ(g)∫_^ℓ g( x_ℓ) η̃^⊗ℓ( x_ℓ). G is called a U-statistic of order k on the Poisson point process η if it satisfiesG(η) = ∑_ x∈η_≠^k h( x)for some kernel function h, where η_≠^k is the set of all k-tuples of distinct points in η.Note that if we assign the following function of the k-tuple of marked pointsh( x) = 1/k!∑_i=1^m ω_i ξ_r( x (t_i))τ_t_i( x) with the notation x (t)(x_1+Z_x_1(t)-Z_x_1(B_x_1), …, x_k+Z_x_k(t)-Z_x_k(B_x_k)),then we obtain G(η_n,T) =∑_i=1^m ω_i f_n(t_i).Therefore, the linear combination ∑_i=1^m ω_i f_n(t_i), which is of interest to us in the Cramer-Wold theorem, is a special case of a U-statistic with a symmetric kernel.The application of the Malliavin-Stein methods toU-statistics was studied in <cit.>. The crucial observation that led tocentral limit theorems forU-statistics on Poisson space is the following.Assume the kernel h in (<ref>) is such that G is square integrable. Then h is also square integrable and G has a finite Wiener chaos expansion. 
That is, it can be written in the form (<ref>) with g_ℓ=0 for ℓ>k and each g_ℓ for 1≤ℓ≤ k admits the formg_ℓ( x_ℓ) = kℓ∫_^k-ℓ h( x_ℓ,y_k-ℓ) μ^k-ℓ.§.§.§ CLT for U-statisticsBefore we present the CLT that we will use to prove Lemma <ref>, we define contractions, constructs that appear in the quantitative normal approximations of U-statistics on Poisson processes. [Contractions] Let ψ:^i→ℝ, ϕ:^j→ℝ be two symmetric functions (for some i,j≥ 1) that are square integrable with respect to μ^i and μ^j respectively. For every 0≤ℓ≤ m ≤ i∧ j, a contraction of ψ and ϕ is the function ψ_m^ℓϕ:^i+j-m-ℓ→ℝ given byψ_m^ℓϕ ( x_i-m,x'_j-m, y_m-ℓ) = ∫_^ℓψ( x_i-m,y_m-ℓ,z_ℓ) ϕ( x'_j-m,y_m-ℓ,z_ℓ) ∏_q=1^m μ( z_q).The normal approximation result we will use is in terms of Wasserstein distance, which we define below. [Wasserstein distance] The Wasserstein distance between two random variables X and Y is defined asd_W(X,Y)sup_f∈Lip_1|[f(X)]-[f(Y)]|,where Lip_1 denotes the set of Lipschitz functions with Lipschitz constant less than or equal to 1. The following central limit theorem is a combination of two previous results and is succinctly expressed as Theorem 2.4 in <cit.>. Let {G_n} be a collection of random variables with finite Wiener chaos expansions, so thatG_n = [G_n] + ∑_ℓ=1^k I_ℓ(g_ℓ)for some fixed k>0, where g_ℓ implicitly depends on n. Assign ρ_n^2[G_n] and assume there exists ρ^2 >0 such thatlim_n→∞ρ_n^2 = ρ^2.Letbe a standard Gaussian random variable. For every n,d_W(G_n - [G_n], ρ)≤C/ρ(maxg_i_m^ℓ g_j_L^2(μ^i+j-m-ℓ) + max_i g_i^2_L^4(μ^i))+ √(2/π)/max{ρ_n,ρ}|ρ_n^2-ρ|for some constant C. Note that, ·_L^2(·) and ·_L^4(·) here denote the second and fourth moments, with respect to a given measure.Next we state an important inequality regarding contraction kernels given as part of Lemma 2.4 in <cit.>.Let ψ and ϕ be as in Definition <ref>. For all 0≤ℓ≤ m ≤ i∧ j we have that ψ_m^ℓϕ_L^2(μ^i+j-m-ℓ)≤ψ_L^4(μ^i)ϕ_L^4(μ^j)Combining Proposition <ref> and Lemma <ref>, we obtain the following corollary for normalized random variables with afinite chaos expansion. Let G_n be as in Proposition <ref> and assume [G_n]→ρ^2>0. Then d_W(G_n-[G_n], ρ)≲max_1≤ i≤ kg_i^2_L^4(μ^i). Now that we have given the necessary background, we are ready to present the proof of finite dimensional convergence of f̅_n(t).The proof follows from calculating the right hand side of (<ref>) in Corollary <ref> for the normalized U-statistic G̅_n∑_i=1^m ω_i f_n(t_i) /√([f_n(0)] ) .The kernel associated with G̅_n ish̅( x) = 1/k!·√([f_n(0)] )∑_i=1^m ω_i ξ_r( x (t_i))τ_t_i( x).Using Proposition <ref>, the Wiener kernel of degree ℓ of interest to us can be written as a function of ng̅_ℓ( x_ℓ,n) =kℓ/k!·√([f_n(0)])∫_^k-ℓ∑_i=1^m ω_i ξ_r( x_ℓ (t_i),y_k-ℓ (t_i))τ_t_i( x_ℓ, y_k-ℓ ) (μ_n,T)^k-ℓ,wherenow is a shorthand notation for Q×ℝ^+ ×ℝ^+ × C_[0,T] and μ_n,T is the product intensity measure of the marked process η_n,T.Next we will find an asymptotic bound on the square of the fourth moment of g̅_ℓ under (μ_n,T)^ℓ in order to use Corollary <ref>. First, note that ∫_^ℓ[∫_^k-ℓ∑_i=1^m ω_i ξ_r( x_ℓ (t_i),y_k-ℓ (t_i))τ_t_i( x_ℓ, y_k-ℓ )(μ_n,T)^k-ℓ]^4(μ_n,T)^ℓ≤max_j|ω_j|^4 ·∫_^ℓ[∫_^k-ℓ∑_i=1^m ξ_r( x_ℓ (t_i),y_k-ℓ (t_i)) (μ_n,T)^k-ℓ]^4(μ_n,T)^ℓ2 ≲∫_^ℓ[ ∑_i=1^m ∫_^k-ℓ{(π ( x_ℓ(t_i))) ≤δ r }×{max_1≤ j≤ k-ℓπ (x_1(t_i)) - π(y_j (t_i))≤δ r } (μ_n,T)^k-ℓ]^4(μ_n,T)^ℓ . Now note that for any given x_1 and t_i,∫_^k-ℓ{max_1≤ j≤ k-ℓπ (x_1(t_i)) - π(y_j (t_i))≤δ r } (μ_n,T)^k-ℓ = C (nr^d)^k-ℓfor some constant C, due to the spatial homogeneity (Remark <ref>). 
Therefore, (<ref>) can be asymptoticallybounded above by∫_^ℓ[ (nr^d)^k-ℓ∑_i=1^m {(π ( x_ℓ(t_i))) ≤δ r }]^4(μ_n,T)^ℓ≲ m^4 (nr^d)^4(k-ℓ)∫_^ℓ{(π ( x_ℓ(t_1))) ≤δ r } (μ_n,T)^ℓ,using the spatial homogeneity again. Using the same techniques as in the calculation of the first moment of f_n(t), the integral on the right hand side can,asymptotically, bebounded above by n(nr^d)^ℓ-1, which leads us to conclude thatg̅_ℓ^2_L^4(μ^ℓ)≲n^1/2 (nr^d)^4k-3ℓ-1/2/[f_n(0)].On the other hand, we observe thatlim_n→∞[G̅_n] = lim_n→∞[∑_i=1^mω_i f_n(t_i)/√([f_n(0)])]=∑_i,ℓ=1^m ω_i ω_ℓlim_n→∞[f̅_n(t_i), f̅_n(t_ℓ)].In the slow regime, σ≪ r, using Lemma <ref>, we obtainlim_n→∞[G̅_n] =∑_i,ℓ=1^m ω_i ω_ℓ∑_j=1^k λ_j e^-j|t_i-t_ℓ| =∑_j=1^k λ_jω^⊤T^(j)ω,for a set of non-negative constants λ_1,…,λ_ℓ, with the entries of the matrix T^(j) defined asT^(j)_iℓ = e^-j|t_i-t_ℓ|.As this is the covariance matrix of an Ornstein-Uhlenbeck process, T^(j) is a positive definite matrix. Therefore ω^⊤T^(j)ω >0, for all nonzero ω∈ℝ^m, and ρ^2 = lim_n→∞[G̅_n] is a positive constant for all ω. Furthermore, (<ref>) and (<ref>) give thatr^-d∑_j=1^k (nr^d)^2k-j≲[ f_n(0)].Using this, togetherwith Corollary <ref> and (<ref>),leads to d_W(G̅_n - [G̅_n],ρ N )≲n^1/2r^d max_1≤ℓ≤ k(nr^d)^4k-3ℓ-1/2/∑_j=1^k (nr^d)^2k-j,with G̅_n as defined in (<ref>). If nr^d→ 0, (<ref>) givesd_W(G̅_n- [G̅_n],ρ N )≲n^1/2r^d (nr^d)^k-1/2/(nr^d)^k = 1/√(n^k r^d(k-1))≪ 1 .If nr^d→γ∈ (0,∞], (<ref>) givesd_W(G̅_n- [G̅_n],ρ N )≲n^1/2r^d (nr^d)^2k-2/(nr^d)^2k-1 = n^-1/2. Therefore, the proof of Lemma <ref> follows in the slow regime. In the moderate regime, σ /r →√(β)>0, (<ref>) is no longer true, and so we cannot immediately conclude that ρ^2 = lim_n→∞[G̅_n] is a positive constant, which is a condition required for applying Corollary <ref>. Nevertheless, examination of[G̅_n] in the moderate regime reveals thatlim_n→∞[G̅_n]=λ_1ω^⊤T^(1)ω+ ∑_j=2^k λ̃_jω^⊤[T̃^(j)∘T^(j)] ω,where λ̃_2,…, λ̃_k are some non-negative constants. Here, from (<ref>) and (<ref>), we haveT̃^(j)_iℓ = [ξ_1(0,z,y) ξ_1(0, ( z +w_β|t_i-t_ℓ|) ,y')],where w_x is a set of (j-1) jointly Gaussian distributed vectors inwith the inverse covariance matrix M^j,x asgiven in (<ref>), and z,y,y' are iid points in Q⊂ as before. In addition, ∘ in (<ref>) denotes the Hadamard (entry-wise) product of two matrices. Note that T̃^(j) can also be considered as the correlation matrix of the process ξ_1(0, ( z +W(β t)),y),sampled at time points t_1,… ,t_m,where { W(β t)): t≥ 0} is a stationary Gaussian process in ℝ^d(j-1) with non-degenerate covariance satisfying W(0)=0. Consider ω^⊤T̃^(j)ω^⊤ =[(∑_ℓ=1^m ω_ℓξ_1(0, ( z +W(β t_ℓ)),y) )^2]for a nonzero ω∈ℝ^m, and consider the probability[∑_ℓ=1^m ω_ℓξ_1(0, ( z +W(β t_ℓ)),y) ≠ 0] ≥[ξ_1(0,z,y)>0,⋂_ℓ=2^m {ξ_1(0, ( z +W(β (t_ℓ-t_1))),y)=0}].The right hand side is positive due to Remark <ref> and the fact that { W(β t)): t≥ 0} has non-degenerate covariance. This leads to the conclusion that ω^⊤T̃^(j)ω^⊤>0 for all ω, all entries of which are nonzero, and therefore T̃^(j) is a positive definite matrix for all 2≤ j≤ k.The product T̃^(j)∘T^(j) is positive definite as a result of the Schur product theorem <cit.>. Consequently,(<ref>) is strictly positive, and the proof of Lemma <ref> follows. 
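Before turning to the fast regime, we remark that the positive definiteness argument just used is easy to check numerically. The short Python sketch below is only an illustrative sanity check, not part of the proof: it builds the OU covariance matrix T^(j) on a few sample times, a stand-in correlation matrix playing the role of T̃^(j) (here a Gaussian kernel, chosen by us purely for illustration), and verifies that both matrices, as well as their Hadamard product, have strictly positive smallest eigenvalues, in line with the Schur product theorem.

import numpy as np

t = np.array([0.0, 0.3, 0.7, 1.2, 2.0])   # illustrative sample times
j, beta = 3, 0.5                           # illustrative values of j and beta
D = np.abs(t[:, None] - t[None, :])

T_j = np.exp(-j * D)            # OU covariance matrix T^(j), entries e^{-j|t_i - t_l|}
T_tilde = np.exp(-beta * D**2)  # stand-in correlation matrix for T~^(j)

for name, M in [("T^(j)", T_j), ("stand-in T~^(j)", T_tilde), ("Hadamard product", T_j * T_tilde)]:
    print(name, "smallest eigenvalue:", np.linalg.eigvalsh(M).min())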
For the fast regime we prove the following lemma.Under the assumptions of Theorem <ref>, the finite dimensional distributions of the process {∫_0^tf̃_n(x) x:t≥ 0} converge to multivariate Gaussian.Proceeding as in the proof of Lemma <ref>, we writeG̃_n∑_i=1^m ω_i ∫_0^t_if_n(u) u /√(2[f_n(0)] M_n ) .The U-statistic kernel associated with G̃_n is thenh̃( x) = 1/k!·√(2M_n[f_n(0)] )∑_i=1^m ω_i ∫_0^t_iξ_r( x (u))τ_u ( x)u,and the Wiener kernel of degree ℓ is g̃_ℓ( x_ℓ,n) =kℓ/k!·√(2M_n[f_n(0)])∫_^k-ℓ∑_i=1^m ω_i ∫_0^t_i ξ_r( x_ℓ (u),y_k-ℓ (u))×τ_u( x_ℓ, y_k-ℓ ) u (μ_n,T)^k-ℓ.Accordingly,∫_^ℓ[∫_^k-ℓ∑_i=1^m ω_i ∫_0^t_iξ_r( x_ℓ (u),y_k-ℓ (u))τ_u( x_ℓ, y_k-ℓ )u · (μ_n,T)^k-ℓ]^4(μ_n,T)^ℓ≲∫_^ℓ[∫_^k-ℓ∫_0^T ξ_r( x_ℓ (u),y_k-ℓ (u))u · (μ_n,T)^k-ℓ]^4(μ_n,T)^ℓ2 ≲∫_^ℓ∫_0^T [ ∫_^k-ℓ{(π ( x_ℓ(u))) ≤δ r }×{max_1≤ j≤ k-ℓπ (x_1(u)) - π(y_j (u))≤δ r } (μ_n,T)^k-ℓ]^4u · (μ_n,T)^ℓ,using Jensen's inequality for the final inequality.Using (<ref>) for t_i=u we obtain thatthe above isboundedbyC (nr^d)^4(k-ℓ)∫_^ℓ∫_0^T {(π ( x_ℓ(u))) ≤δ r } u· (μ_n,T)^ℓfor some constant C. Therefore, arguing as in the paragraph in the proof of Lemma <ref> leading to (<ref>), we findg̃_ℓ^2_L^4(μ^ℓ)≲n^1/2 (nr^d)^4k-3ℓ-1/2/M_n [f_n(0)].Thus, similar to (<ref>), since nr^d→ 0d_W(G̃_n- [G̃_n],ρ̃ N )≲1/M_n√(n^k r^d(k-1))≪σ^2+ϵ/r^2+ϵ√(n^k r^d(k-1))for any ϵ>0, due to Proposition (<ref>). The convergence follows from the assumption of Theorem <ref> that σ /r≪ (n^kr^d(k-1))^1/4-ϵ' for some ϵ'. Also,ρ̃^2 = lim_n→∞[G̃_n]= lim_n→∞[∑_i=1^m ∑_ℓ=1^m ω_i∫_0^t_if̃_n(u) u] = ∑_i=1^mω_i ω_ℓ (t_i∧ t_ℓ) >0,due to Lemma <ref>. This concludes the proof.Follows from Lemma <ref> and Lemma <ref>.Follows from Lemma <ref> andLemma <ref>.Follows from Lemma <ref> andLemma <ref>.EO was supported in part by the Israel Science Foundation, Grants 2539/17 and 1965/19.OB was supported in part by the Israel Science Foundation, Grant 1965/19. RJA was supported in part by the Israel Science Foundation, Grant2539/17.imsart-number
http://arxiv.org/abs/2310.17775v2
{ "authors": [ "Efe Onaran", "Omer Bobrowski", "Robert J. Adler" ], "categories": [ "math.PR", "60G55, 60F17 (Primary) 60D05, 60G15 (Secondary)" ], "primary_category": "math.PR", "published": "20231026205550", "title": "Functional Limit Theorems for Local Functionals of Dynamic Point Processes" }
Quantum Research Centre, Technology Innovation Institute, Abu Dhabi, UAE; Dipartimento di Fisica e Astronomia “Ettore Majorana”, Via S. Sofia 64, 95127 Catania, Italy; INFN-Sezione di Catania, Via S. Sofia 64, 95127 Catania, Italy
Quantum Research Centre, Technology Innovation Institute, Abu Dhabi, UAE
National Research Council, Institute for Microelectronics and Microsystems (IMM-CNR), VIII Strada 5, Catania, 95121, Italy
CNR-INO and Dipartimento di Fisica dell’Università di Pisa, Largo Pontecorvo 3, 56127 Pisa, Italy
Quantum Research Centre, Technology Innovation Institute, Abu Dhabi, UAE; Dipartimento di Fisica e Astronomia “Ettore Majorana”, Via S. Sofia 64, 95127 Catania, Italy; INFN-Sezione di Catania, Via S. Sofia 64, 95127 Catania, Italy; Centre for Quantum Technologies, National University of Singapore 117543, Singapore
Networks of Rydberg atoms provide a powerful basis for quantum simulators and quantum technologies. Inspired by matter-wave atomtronics, here we engineer switches, diodes and universal logic gates. Our schemes control the Rydberg excitation dynamics via the anti-blockade or facilitation mechanism, allowing for much faster devices compared to cold atom systems. Our approach is robust to noise and can be applied to individually trapped atoms and extensive three-dimensional gases. In analogy to electronics, Rydberg atomtronic devices promise to enhance quantum information processors and quantum simulators.
Rydberg atomtronic devices
Luigi Amico
====================================================
Introduction Electrons in Rydberg atoms can be excited to very large principal quantum numbers <cit.>. The resulting large dipole moment and polarisability lead to peculiar effects, such as the dipole blockade: within a specific volume, the excitation of more than one atom to the Rydberg state is inhibited due to the aforementioned dipole interaction <cit.>. Conversely, when the excitation laser is negatively detuned from resonance, an anti-blockade or facilitation effect occurs: a single initial excitation induces more excitations in neighbouring atoms <cit.>. Combining blockade and facilitation effects together can provide flexible schemes for coherent manipulation of excitations in networks of Rydberg atoms <cit.>. Thanks to the inherent physics and the remarkable know-how in coherent atom manipulation <cit.>, networks of Rydberg atoms provide a fruitful and versatile toolbox for quantum simulators and, more widely, quantum technologies <cit.>. Rydberg networks also provide a promising basis for quantum information processors <cit.>.
In our approach, we are inspired by atomtronics, which exploits the properties of ultra-cold atoms to create circuits via laser fields of different shapes and intensities <cit.>. In particular, atomic devices such as atomtronic transistors and switches for cold atoms have been proposed <cit.> and realised <cit.>. Another vital building block for classical analogue or digital computation is the diode. In the same way as in electronics, the atomtronic diode has been proposed by bringing doped conducting cold atom systems together <cit.>. Here, we demonstrate how the aforementioned control of Rydberg excitations can be exploited to conceive specific atomtronic devices in which, instead of matter, the dynamics involve Rydberg excitations.
The transfer and control of excitations are conducted via the facilitation mechanism, where an excited state of an atom induces excitations in neighbouring atoms via the van der Waals interaction combined with appropriately chosen frequency detunings. By applying this idea to different networks, we construct specific Rydberg atomtronic schemes analogous to switches and diodes. Further, we construct logic gates such as AND, NOT and NAND, demonstrating that Rydberg atomtronics provides a universal logic gate set.A key component for these devices, especially for the diode, is the generation of a non-reciprocal or chiral flow of excitations. When considering interactions within two levels of different Rydberg excitations, chiral currents in ring-shaped networks have been induced via phase shifts <cit.>. In contrast, here we consider the dynamics of the ground state and excited Rydberg state which lacks a coherent hopping interaction. Nonetheless, we can engineer non-reciprocal behaviour by spatially varying the distance and detuning of atoms to create a one-way facilitation mechanism.Model We investigate a network of N Rydberg atoms where we denote the atomic ground state as |↓⟩ and Rydberg state as |↑⟩ with Hamiltonian <cit.>, ℋ = ∑_j=1^NΔ_jn_j+Ω ∑_j=1^Nσ^x_j +1/2 ∑_i j C_6/|x_i - x_j|^6n_in_j. Here, Ω is the Rabi frequency, Δ_j the detuning of the j^th atom for the x-Pauli σ^x_j, C_6 the van der Waals interaction coefficient,n_j = 1/2(σ_i^z + 1) the excitation number operator and x_j the position of the j^th atom. Coupling with the environment for mixed state ρ is modelled with the Lindblad master equation,∂_tρ = -i[ℋ,ρ] +∑_k (L_k ρ L_k^†-1/2{L_k^†L_k,ρ} ). We consider two dissipative mechanisms expressed with Lindblad operators: dephasing L_k,dephasing = √(γ)n_k with rate γ as well as decay of excitations L_k, decay = √(κ) σ_k^- with rate κ, where σ_k^- destroys a Rydberg excitation. In the limit of strong dephasing γ≫Ω, the atoms rapidly dephase into mixed states <cit.> and quantum coherences can be neglected. In this regime, a classical master equation can be derived via a second-order perturbation theory <cit.>. The evolution of the probabilities of the basis states p=diag(ρ) is given by∂_tp = ∑_k Γ_k[σ_k^++σ_k^-- 1]p+∑_kκ[σ_k^-- n_k]p ,with transition rateΓ_k = Ω^2γ/(γ/2)^2+(Δ_k+C_6 ∑_q ≠ k n_q/|x_k - x_q|^6)^2.We now review two fundamental phenomena observed in Rydberg atoms. First, we illustrate the Rydberg blockade. Let us assume a Rydberg atom in the ground state and detuning Δ=0. When there are no other excited atoms nearby, the driving Ω will excite the atom. In contrast, if there is an excited Rydberg state within the Rydberg radius r_b=(C_6/Ω)^1/6, the atom cannot be excited due to the energy shift induced by the van der Waals interaction.Second, the Rydberg facilitation mechanism induces excitations only when another excitation is present <cit.>. Let us consider two atoms at the facilitation distance r_f and the facilitation detuning Δ_f=-C_6/r_f^6 where we choose |Δ_f|≫Ω. When initially none of the atoms are excited, then the detuning will suppress any excitations due to Rabi driving Ω. Now, what happens if the first atom is excited? In this case, the positive van der Waals interaction combined with the negative detuning Δ_f brings the second atom into resonance, as shown in Fig. <ref> and induces its excitation <cit.>. 
Repeating this process, the induced excitation can further excite the next atom, effectively creating a facilitation chain of propagating excitations (see Appendix <ref>)The facilitation mechanism is robust even for strong dephasing noise.We now apply the blockade and facilitation mechanisms to control the flow of excitations in networks of Rydberg atoms and create various practical devices. Switch for individually trapped atoms A switch is a device that allows current to pass through it whilst it is enabled, however, prevents the transport of current when it is disabled.The smallest setup for an atomtronic switch is composed of a one-dimensional chain of N = 3 atoms with distance r_f, as shown in Fig. <ref>. We initialise the system with an excitation in the input and all other sites in the ground state. The transport to the output is controlled by a gate atom with variable detuning Δ_g. When we choose Δ_g≈Δ_f, the gate atom is excited by the input via the facilitation mechanism, while otherwise, the gate atom remains with high probability in the ground state. Hence, if the gate atom is excited, the facilitation mechanism induces excitations in the output.In Fig. <ref>, we vary the gate detuning Δ_g and measure the average number of excitations in the output N_o after a specific evolution time (see Appendix <ref>). We observe that our device behaves like a switch, with a peak in output excitations for Δ_g≈Δ_f, while away from the facilitation regime excitations cannot reach the output. The dynamics are robust against dephasing γ and decay κ. Switch for three-dimensional gas Next, we consider N Rydberg atoms trapped in a three-dimensional potential without individual control over the position.As shown in Fig. <ref>, we have a cylindrical trap of length L_x and radius R, where the minimal distance between atoms is d_min=0.1μm.All atoms are subject to the same driving strength Ω and are initialised in the ground state. The system along the x direction is split into three regions: input of length L_i, gate of length L_g and output of length L_o. Each region has a different detuning frequency Δ(x): in the input, we excite Rydberg atoms on resonance with Δ_i=0. In the gate, we have either Δ_g=Δ_f when the switch is on, else we set Δ_g=-Δ_f to block any transport of excitations. In the output, we set the detuning to the facilitation regime Δ_o=Δ_f. For the gate to block transport,L_g must be larger than the facilitation radius r_f, else excitations in the input can directly excite the output. We simulate the dynamics using the classical approximation (<ref>) by Monte-Carlo sampling of trajectories where we confirm the validity of the strong dephasing approximation in Appendix <ref>.The dynamics of the excitations in the output are shown in Fig. <ref>. Our simulation parameters are chosen closely to the ones from the Rubidium atom experiment in Ref. <cit.>. For an enabled switch with Δ_g=Δ_f we observe twice as many excitations compared to the disabled switch with Δ_g=-Δ_f. This behaviour is robust in the presence of strong dephasing and excitation loss. Diode The diode is a non-reciprocal device that allows current to pass through from one direction, but blocks transport coming from the reverse direction. To induce non-reciprocal behaviour, we consider a one-dimensional facilitation chain with equal spacing to its neighbours, except for a single gate atom with detuning Δ_g and distance r_g=(C_6/Δ_g)^1/6 to either its left or right neighbour (see Fig. <ref>). First, in Fig. 
<ref>a we consider the forward direction operation of the diode. When the gate atom has distance r_g to its left neighbour, an initial excitation in the input can travel via the gate to the output as the facilitation condition is met along the way. In contrast, we consider the reverse direction of the diode in Fig. <ref>b, where the gate atom has distance r_g to its right neighbour. Then, the facilitation condition is not met between the gate and output atom, blocking any transport. We set the detuning of the gate atom Δ_g/Δ_f = 2 (see Appendix <ref>) and evolve the system for different values of γ. Fig. <ref>cshows the number of excitations in the output N_o at tΩ = 4. We find that the forward direction transports a large number of excitations compared to the reverse operation of the diode. The difference between forward and reverse decreases with increasing γ, but remains relatively large even when γ is in the same order as the driving frequency Ω.Logic gates We now construct different logic gates using the Rydberg interactions. Logic gates return a binary outcome depending on given input bits. For the input, we define logic 0 as a Rydberg atom in the ground state, while 1 corresponds to an excited input atom. We define the logic output as 0 when N_o<N_threshold, while 1 corresponds to N_o>N_threshold, where N_threshold is a threshold number of excitations. The AND gate returns 1 only when two inputs are 1, else 0. We construct the AND gate with three atoms as seen in Fig. <ref>a. The two input atoms are at a distance r_f to the output atom, while the detuning of the output atom is chosen as 2Δ_f, i.e. twice the original facilitation condition. Only when both input atoms are excited, the output atom is on resonance due to the van der Waals interaction.Next, we consider the NAND gate, which is an inverted AND gate, i.e. it returns 0 only when the two inputs are 0. We realise the NAND gate by combining the AND gate with a NOT gate (see Fig. <ref>b). The NOT gate flips 0 to 1 and vice versa. In our setup, we realise the NOT gate by setting the detuning on the output atom to Δ=0 for a time period δ t=π/(2Ω) at tΩ=1.5, and Δ=Δ_f for all other times. Together with the constant Rabi driving, this realises a π pulse which excites the ground state to a Rydberg state and de-excites an initial Rydberg state into the ground state. We create a NAND gate by applying a NOT gate on an additional atom which is at facilitation distance to the output of the AND gate. We show the AND gate and NAND gate in Fig. <ref>. We show the average number of excitations N_o in the output against time for different input excitations. We observe that the logic table of the AND (Fig. <ref>a,b) and NAND gate (Fig. <ref>c,d) can be realised by reading out N_o.We find that a threshold N_threshold=0.5 is sufficient to distinguish between 0 and 1 even in the presence of noise. We find the optimal work time t_w as dashed lines where we find optimal performance for the gates.Discussion We have demonstrated that networks of Rydberg atoms can create atomtronic devices that, instead of matter-wave, are based on acontrolled flow of excitations. The flow is controlled by using the blockade and facilitation mechanism of interacting Rydberg atoms. This way, a new platform of atomtronic circuits is proposed. The propagation of matter-wave in typical cold atoms clouds occurs on the millisecond scale, whereasRydberg excitations can travel in microseconds. 
Therefore, Rydberg excitations have the potential to provide proof for fast atomtronic quantum devices. With this approach, we have demonstrated different circuit elements providing the Rydberg atomtronics counterpart of classical electronic devices as switches and diodes.In particular, diodes require non-reciprocal transport which commonly is implemented by breaking time-reversal symmetry via the flux <cit.>. In contrast, we engineer non-reciprocal transport by using the facilitation mechanism combined with non-uniform atomic distances. Further, by using the facilitation condition involving multiple atoms we implement AND, NOT and NAND classical gates, realising a universal logic gate set. Future work can combine our different gates and devices to create even more complex gadgets such as adders or routers. Our proposed devices use experimentally demonstrated parameter regimes and thus can be realised in state-of-the-art experiments for tweezer arrays of Rydberg atoms and three-dimensional gases.Note added While writing the manuscript, a similar mechanism to engineer non-reciprocal transport via facilitation has been proposed <cit.>.Acknowledgements We thank Leong-Chuan Kwek, Wenhui Li, Francesco Perciavalle, Enrico Domanti, Wayne J. Chetcuti, Davide Rossini andThibault Vogt for discussions. The Julian Schwinger Foundation grant JSF-18-12-0011 is acknowledged. OM and AL also acknowledge support by the H2020 ITN “MOQS" (grant agreement number 955479) and MUR (Ministero dell’Università e della Ricerca) through the PNRR MUR project PE0000023-NQSTI.§ APPENDIX§.§ Experimental ConsiderationConsidering the experimental creation, we assume the same experimental procedure that is noted in <cit.>. Here ^87Rb atoms are excited from the ground to the Rydberg state via a two-photon transition as they share the same parity <cit.>. The first of which,|5S_1/2⟩→|6P_3/2⟩, a laser with Ω_420 excites the atom to an intermediate state. Then from here, another transition occurs, due to a laser with Ω_1013, this excites the atom from |6P_3/2⟩→|70S_1/2⟩, the Rydberg state.We chose our simulation parameters in close accordance with experimental work conducted on Rubidium atoms <cit.>. We select to use parameters with values: Ω̃/(2π) = 0.7MHz, κ̃/(2π) = 1kHz, γ̃/(2π) = 0.7MHz and C̃_6/(2π) = 109GHz. We decide to work in units of Ω = 1 and covert the other parameters accordingly. We convert the dephasing, γ = γ̃/Ω̃ = 1, the decay, κ = κ̃/Ω̃ = 0.003. For the interaction, we fix the values of Δ̃_f = 7MHz and r̃_f = 5 μm therefore C_6 =C̃_6/(Ω̃ r̃_f) = 10. We provide these values to a similar range to that of experimental work.For our systems, we consider detuning with spatial variation. In practice, such conditions can be implemented by suitably shifting the excitation laser frequency (for example through an acousto-optic modulator) and then by exciting specific portions of the Rydberg network with different frequencies. §.§ Classical equation vs full simulation Here we compare the simulation with the full quantum equations (<ref>) against the classical approximation (<ref>), derived for the limit of strong dephasing γ≫Ω. For the system in Fig. <ref>, both the dynamics are represented in Fig. <ref>. We observe that for γ≥Ω, the classical equations are a good approximation to the full quantum dynamic. Beyond this limit, γ < Ω, the quantum coherence between the atoms becomes too large, therefore there is a discrepancy between the two evolutions. 
§.§ Transport We study the transport in a linear chain of Rydberg atoms as shown in Fig. <ref>. We set the inter-atom distance to the facilitation radius r_f and detuning Δ_f. We evolve the system with an initial excitation in the input. The dynamics of excitations are shown in Fig. <ref>. In the quantum regime, we consider two cases: γ = κ = 0 and γ = 1, κ = 0.003.We observe a propagation of excitation throughout the N = 6 sites and then a "back-reflection" towards the input. In this situation, the individual atoms transition between the ground and Rydberg state via the Rabi driving frequency, Ω. The interaction, C_6, plays an additional role in the correlation in the interaction term, resulting in the excitation being back-reflected at every site. With increasing dephasing γ, the back reflection is less dominant in the dynamics as the excitation density decreases with increasing propagation distance. We also find that after becoming excited, the atoms are not driven directly to the ground state due to the dephasing destroying the coherence of the state.§.§ Switch We elaborate on how we determine the optimal time to read out the switch. The goal is to find the time when we have maximal density in the output. We evolve our N = 6 atom switch via  (<ref>) for 0 < Δ_g/Δ_f≤ 3 and record the density of excitations at each time increment. The results are shown in Fig. <ref>. We now regard the regime where the switch is on by zooming into dynamics where 0.80 ≤Δ_g/Δ_f≤ 1.2. We find that the maximum does not vary much with Δ_g in this regime. We identify the time t with maximum density for Δ_g = Δ_f, which occurs at time t = 4.60/Ω and t = 3.20/Ω respectively for γ = 1 and γ = 0.§.§ DiodeWe now identify a good choice for gate detuning Δ_g for the individually trapped diode. We evolve the diode with N =6 atoms for 0 < Δ_g/Δ_f≤ 3 in both the forward and reverse direction. We measure the number of excitations in the output at t = 4.60/Ω (t = 3.20Ω) with (without) dephasing. We choose these times as here we find the highest number of excitations for the forward direction as studied previously for the switch. We show N_o against Δ_g in Fig. <ref>. Note that the modified distance r_g=(-C_6/Δ_g)^1/6 depends on Δ_g. For Δ_g≈Δ_f, there is no difference between forward and reverse direction, thus this parameter regime cannot be used for a diode. In contrast, we find a large difference between forward and reverse away from this point. Therefore, we consider the diode with Δ_g/Δ_f = 2.
http://arxiv.org/abs/2310.18242v1
{ "authors": [ "Philip Kitson", "Tobias Haug", "Antonino La Magna", "Oliver Morsch", "Luigi Amico" ], "categories": [ "quant-ph", "cond-mat.quant-gas" ], "primary_category": "quant-ph", "published": "20231027162859", "title": "Rydberg atomtronic devices" }
Coded Caching Scheme for Partially Connected Linear Networks Via Multi-antenna Placement Delivery Array M. Cheng, Y. Xie and M. Zhang are with Guangxi Key Lab of Multi-source Information Mining & Security, Guangxi Normal University, Guilin 541004, China(e-mail: [email protected], [email protected], [email protected],). Z. Huang and Y. Wu are with the School of Information Science and Technology, ShanghaiTech University, Shanghai 201210, China(e-mail:huangzhh,[email protected]).Minquan Cheng, Yun Xie, Zhenhao Huang, Mingming Zhang, and Youlong Wu====================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================== In this paper, we study the coded caching scheme for the (K,L,M_T,M_U,N) partially connected linear network, where there are N files each of which has an equal size, K+L-1 transmitters and K users; each user and transmitter caches at most M_U and M_T files respectively; each user cyclically communicates with L transmitters. The goal is to design caching and delivery schemes to reduce the transmission latency measured by the metric normalized delivery time (NDT). By delicately designing the data placement of the transmitters and users according to the topology, we show that a combinatorial structure called multiple-antenna placement delivery array (MAPDA), which was originally proposed for the multiple-input single-output broadcast channels, can be also used to design schemes for the partially connected linear network. Then, based onexisting MAPDAs andour constructing approach, we propose new schemes that achieve the optimal NDT when M_T+ M_U≥ N andsmaller NDT than that of the existing schemes when (M_T+ M_U≤ N, M_U/N+M_T/NL/K⌈K/L⌉≥ 1) or(M_U+ M_T< N,K/L∉ℤ^+). Moreover, our schemes operate in one-shot linear delivery and significantly reduce the subpacketizations compared to the existing scheme, which implies that our schemes have a wider range of applications and lower complexity of implementation.Coded caching, Multiple-antenna placement delivery array, normalized delivery time, partially connected linear network. § INTRODUCTIONThe immense growth of wireless data traffic is putting incredible pressure on the wireless network, especially the high temporal variability of network traffic, resulting in congestion during peak traffic time and under utilization during off-peak time. Coded caching in <cit.> is an efficient solution to reduce transmission pressure by pre-populating the user's local cache with content at off-peak time and using coding theory to generate more multicast opportunities during peak traffic time.A caching system consists of two phases, i.e., the placement phase at off-peak traffic time and the delivery phase at peak traffic time. In the placement phase, the server places content in each user's cache without knowing future users' demands. 
In the delivery phase, each user requests an arbitrary file and the server broadcasts coded packets such that each user can decode its requested file with the help of its cached contents. The first coded caching scheme was proposed by Maddah-Ali and Niesen in <cit.> for a shared-link broadcast network, where a central server containing N files of equal length connects to K users, each of which can cache at most M files, through an error-free shared link. The communication goal is to design a scheme such that the transmission cost is as small as possible. The first coded caching scheme, referred to as the MN scheme, achieves the minimum communication load (i.e., the number of communication bits normalized by the size of the file) within a multiplicative factor of 2 <cit.>. For the uncoded placement, where each user directly stores a subset of the bits of the files, the MN scheme is exactly optimal <cit.>. Following the original caching problem, many works have investigated the coded caching problem for a variety of network topologies, such as the Device-to-Device (D2D) network <cit.>, the hierarchical network <cit.>, the combination network <cit.>, and the arbitrary multi-server linear network <cit.>, etc. Recently, coded caching has been widely extended to wireless networks, such as multiple-input single-output (MISO) broadcast channels <cit.>, multiple-input multiple-output (MIMO) broadcast channels <cit.>, single-input single-output (SISO) interference channels <cit.>, and MIMO interference channels <cit.>, etc. The goal of most existing works on cache-aided wireless networks is to jointly design the data placement and the physical-layer delivery to improve the communication efficiency. For instance, <cit.> proposed schemes that are (order) optimal in the sense of sum DoF for the cache-aided SISO interference channels, and <cit.> established the optimal normalized delivery time (NDT), a definition first introduced by <cit.>, in certain cache-size regions. The work in <cit.> studied a partially connected linear network where the users can only connect with part of the transmitters, and proposed a coded caching scheme that achieves the optimal NDT when the cache memories of transmitters and users are relatively large. In addition, some works take both communication efficiency and computational complexity into consideration. For example, low-subpacketization coded caching schemes generated by constructing the multiple-antenna placement delivery array (MAPDA)[The authors independently proposed the same combinatorial structure called extended placement delivery array for MISO broadcast channels.] were proposed in <cit.> for MISO broadcast channels, and one-shot linear delivery based on interference zero-forcing was proposed in <cit.> for SISO interference channels. In this paper, we revisit the partially connected linear network <cit.> that models a typical wireless network where some users can only communicate with a subset of transmitters due to path loss caused by blocking objects. More specifically, we consider a wireless network with K linearly aligned users and K+L-1 linearly aligned transmitters, where each user is locally connected to a subset of L∈{1,…,K} consecutive transmitters. Let M_T and M_U represent the caching memory sizes of each transmitter and user, respectively. In <cit.>, a coded caching scheme based on interference alignment and interference neutralization was proposed by Xu, Tao, and Zheng, namely the XTZ scheme, to achieve the optimal NDT when M_T+M_U≥ N.
Despite the optimality of the scheme in the case M_T+M_U≥ N, there are still several important and unresolved issues. First, it is unknown whether the optimality still holds for the case M_T+M_U< N, which is a common case when the users and transmitters are equipped with caches of insufficient size; if not, then how can we further improve the communication efficiency? Second, the XTZ scheme involves high coding and computational complexity. More specifically, for the case M_T/N=1/L, the XTZ scheme splits each file into an exponentially large number \binom{L}{M_UL/N} of subfiles and applies interference alignment, which requires each user to first wait L(\binom{L-1}{M_UL/N} n^ρ+\binom{L-1}{M_UL/N+1}(n+1)^ρ) transmission slots, where n∈ℕ [In general, the value n should be sufficiently large such that the maximum degree of freedom is achieved.] and ρ=(K+L-1)(L-M_UL/N-1), to decode its desired contents. This would cause unbearable waiting latency and high computational complexity in the transmit beamforming vector design (see detailed discussions in Section <ref>). In view of the facts above, we aim to find low-complexity and communication-efficient coded caching schemes for the (K+L-1) × K partially connected wireless network, to simultaneously reduce the subpacketization level, computational complexity, and transmission latency. The contributions of our work can be summarized as follows. ∙ We first prove that MAPDA can also be used to design coded caching schemes for the partially connected wireless network, and then propose new coded caching schemes based on existing MAPDAs, which are listed in Table <ref>. Note that the MAPDA was originally proposed for wireless networks where all users connect with all transmitters, and directly applying MAPDA to the partially connected wireless network would lead to the users missing some signals due to disconnected links. To address this issue, we delicately design the data placement at the transmitters via a cyclic-based MAPDA such that any L consecutive transmitters can store and deliver all required contents to their connected users, and then globally design all the users' placement based on an integral MAPDA. This enables our scheme to simultaneously deliver all desired files to all users and achieve larger multicast opportunities compared to the XTZ scheme. ∙ Compared with the XTZ scheme, our schemes achieve smaller NDTs when 1) M_T+ M_U≤ N, M_U/N+M_T/NL/K⌈K/L⌉≥ 1 or 2) M_U+ M_T< N, K/L∉ℤ^+; the same NDT as the XTZ scheme when M_U + M_T≥ N (the optimal NDT is achieved) or M_U+ M_T< N, K/L∈ℤ^+; and slightly larger NDTs than the XTZ scheme when M_U+ M_T< N and LM_T=N, but with much lower decoding complexity due to the one-shot delivery strategy (see Table <ref>). ∙ Unlike the XTZ scheme, where the subpacketization grows exponentially with L, our schemes significantly reduce the subpacketization, which increases only linearly with K. Moreover, our scheme enables independent data placement between the transmitters and users, while the XTZ scheme requires the data placements among all nodes to be dependent on each other. Finally, our schemes operate in a one-shot delivery strategy for all cases, while the XTZ scheme needs all users to wait for long transmission slots and then decode the information in some cases (e.g., LM_T/N=1). These facts indicate that our schemes have a wider range of applications and lower complexity of implementation (see Section <ref>). *Paper Organization The rest of this paper is organized as follows. Section <ref> describes the system model.
Section <ref> presents the MAPDA for the partially connected linear network. Some proofs can be found in Section <ref> and the Appendices. Notations: In this paper, the following notations will be used unless otherwise stated. ∙ [a:b]:={ a,a+1,…,b} and [a]:={ 1,2,…,a}. |·| denotes the cardinality of a set. ∙ We use the notation a| q if a is divisible by q and a∤ q otherwise. If a is not divisible by q, ⟨ a⟩ _q denotes the least non-negative residue of a modulo q; otherwise, ⟨ a⟩ _q:=q. ∙ gcd(a,b) denotes the greatest common divisor of a and b. ∙ Let ℬ={b_1,b_2,…,b_n} be a set with b_1<b_2<…<b_n; for any i∈[n], ℬ[i] denotes the i^th smallest element of ℬ, i.e., ℬ[i]=b_i. ∙ For any positive integers n and t with t<n, let \binom{[n]}{t}={𝒯 | 𝒯⊆ [n], |𝒯|=t}, i.e., \binom{[n]}{t} is the collection of all t-sized subsets of [n]. ∙ Let a be a vector of length n; for any i∈[n], a[i] denotes the i^th coordinate of a. For any subset 𝒯⊆ [n], a[𝒯] denotes the vector of length |𝒯| obtained by taking only the coordinates with subscript i∈𝒯. ∙ Given any F× m array 𝐏, for any integers i∈[F] and j∈ [m], 𝐏(i,j) represents the element located in the i^th row and the j^th column of 𝐏; 𝐏( 𝒱,𝒯) represents the subarray generated by the row indices in 𝒱⊆ [F] and the column indices in 𝒯⊆ [m]. In particular, let 𝐏([F],𝒯) be shortened to 𝐏(·,𝒯) and 𝐏(𝒱,[m]) be shortened to 𝐏(𝒱,·). § PARTIALLY CONNECTED NETWORKS PLACEMENT DELIVERY ARRAY §.§ System Model Consider a (K+L-1)× K partially connected linear network (see Fig. <ref>), where there is a library of N files 𝒲={W_1,…,W_N}, each of V-bit length, K+L-1 linearly aligned transmitters denoted by T_1, T_2, …, T_K+L-1, and K linearly aligned users denoted by U_1, U_2, …, U_K; each user U_k is connected to the L consecutive transmitters T_k, T_k+1, …, T_k+L-1, where k∈[K] and L≤ K. Here, L is referred to as the user connectivity. Fig. <ref> shows an example of the linear network with K = 4 and L = 3, where each transmitter is equipped with a cache of finite size and has a single antenna. A (K,L,M_T,M_U,N) coded caching scheme contains two phases. §.§.§ Placement phase In this paper, we consider the uncoded placement where every node directly caches a subset of the library bits. Each file is divided into F packets, i.e., W_n=(W_n,1,W_n,2,…,W_n,F), where each packet W_n,f∈𝔽_2^B for n∈ [N], f∈[F]. Here B represents the size of each packet. Clearly, we have V=FB. Each transmitter and each user cache some packets of 𝒲, of size at most M_TF packets and M_UF packets, respectively. Denote the cached contents at transmitter T_j, where j∈[K+L-1], and at user U_k, where k∈[K], as 𝒵_T_j and 𝒵_U_k, respectively. We assume that the placement is performed without knowing the users' later demands. §.§.§ Delivery phase Each user U_k requests an arbitrary file W_d_k, d_k∈[N], from the library, for k∈[K]. Let 𝐝≜ (d_1,d_2,…,d_K) denote the demand vector. According to the users' demands and caches, the server transmits coded packets through L antennas. More precisely, the server first uses a code for the Gaussian channel with rate B/B̃=log P+o(log P) to encode each packet into a coded packet as W̃_n,f=ψ( W_n,f)∈ℂ^B̃, where ψ is the coding scheme for the Gaussian channel, e.g., random Gaussian coding. Here each coded packet contains B̃ complex symbols and carries one degree-of-freedom (DoF). The whole communication process contains S blocks, each of which consists of B̃ complex symbols (i.e., B̃ time slots).
In each block s∈ [S], the communication goal is to deliver a subset of the requested packets, denoted by 𝒟_s = {W̃_d_k_1,f_1,…,W̃_d_k_|𝒟_s |,f_|𝒟_s |}, to a subset of users 𝒦_s = {k_1,…, k_|𝒟_s |}. Assume that the user U_k_i requests the packet W̃_d_k_i,f_i for each i∈ [|𝒟_s|]. In this paper we only consider linear coding schemes in the delivery phase. In each block s∈[S], each transmitter T_j, where j∈ [K+L-1], sends 𝐱^(s)_j∈ℂ^B̃, which is a linear combination of the coded packets, i.e., 𝐱^(s)_j = ∑_i∈[|𝒟_s|] v^(s)_j,k_iW̃_d_k_i,f_i, where v^(s)_j,k_i is a complex beamforming coefficient that can take any complex value if the packet W_d_k_i,f_i is cached by transmitter T_j, and v^(s)_j,k_i=0 otherwise, for each i∈ [|𝒟_s|]. Then each user U_k, k∈𝒦_s, receives the signal 𝐲^(s)_k=∑_j=k^k+L-1h^(s)_k,j𝐱^(s)_j+ 𝐳^(s)_k through the interference channel, where h^(s)_k,j∈ℂ denotes the channel coefficient from transmitter T_j to user U_k, which is independent and identically distributed in ℂ, and 𝐳^(s)_k denotes the additive noise at user U_k. User U_k∈𝒦_s can then obtain W̃_d_k,f +𝐳̃^(s)_k, where 𝐳̃^(s)_k is an effective noise term, based on its local caches and received signal 𝐲^(s)_k. By assuming P is large enough, the coded packet W̃_d_k,f can be decoded with an error probability exponentially decreasing to zero. To evaluate the transmission efficiency of the scheme, we adopt the same metric, the normalized delivery time (NDT), as in <cit.>, which is defined as τ(M_T,M_U)≜lim_P→∞lim_V→∞supmax_ d∈[N]^KT/V/log P, where T is the total number of time slots in the whole communication process. Since each file contains F packets, each of which has B bits, and there are in total SB̃ time slots, (<ref>) can be written as τ = lim_P→∞lim_V→∞SB̃/BF/log P=lim_P→∞S/F·log P/log P+o(log P) =S/F. From (<ref>), the NDT represents the maximal normalized number of transmitted files over all possible demands in the interference channel and in the high signal-to-noise ratio (SNR) regime. We prefer to design a scheme with the optimal NDT, defined as τ^*(M_T,M_U)≜inf{τ(M_T,M_U) |τ(M_T,M_U) is achievable}. In <cit.>, the authors applied the metric sum DoF to measure the communication efficiency, which is defined as the total number of transmitted requested packet bits per time slot normalized by log P, i.e., Sum-DoF = lim_P→∞K(1-M_U/N)BF/SB̃log P= K(1-M_U/N)/τ, where the last equality holds by (<ref>) and (<ref>). The first coded caching scheme for the partially connected linear network was proposed in <cit.>, where the following result was given. For the cache-aided (K+L-1)× K partially connected linear network, there exists a (K, L, M_T,M_U,N) coded caching scheme with the following NDT: τ_XTZ(M_T,M_U)= (1-1/L+1/(M_UL/N + 1) )(1-M_U/N) if M_TL / N =1 and M_UL / N ∈[0:L-1], and τ_XTZ(M_T,M_U)= (1-M_U/N)/min{M_T/N+M_U/N, 1 } if M_TL/N∈[2:L] and M_UL/N∈[0:L-1]. It is worth mentioning that the authors in <cit.> showed that when M_T/N+M_U/N≥ 1, the scheme in Lemma <ref> achieves the optimal NDT. In this paper, we aim to propose communication-efficient and low-complexity schemes that improve the scheme in Lemma <ref> for the case M_T/N+M_U/N < 1. §.§ Multi-antenna Placement Delivery Array The authors in <cit.> proposed the multiple-antenna placement delivery array (MAPDA) to characterize the placement and delivery strategies for the MISO caching system. In this section, we will introduce the MAPDA, which will be helpful in generating schemes for the partially connected linear network.
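To make the two cases of the lemma above concrete, the following sketch evaluates τ_XTZ with exact rational arithmetic; the helper name tau_xtz is mine and the inputs are file counts rather than ratios. The two printed values, 3/2 and 2/3, are the memory-sharing end points that reappear in the comparison example later in the paper.

```python
from fractions import Fraction

def tau_xtz(M_T, M_U, N, L):
    """NDT of the XTZ scheme per the two cases of the lemma above (sketch)."""
    mT, mU = Fraction(M_T, N), Fraction(M_U, N)      # normalized cache sizes M_T/N, M_U/N
    assert 1 <= L * mT <= L and 0 <= L * mU <= L - 1, "outside the lemma's parameter range"
    if L * mT == 1:
        return (1 - Fraction(1, L) + 1 / (L * mU + 1)) * (1 - mU)
    return (1 - mU) / min(mT + mU, Fraction(1))

# Memory-sharing end points used later for (K, L, N) = (5, 3, 15):
print(tau_xtz(10, 0, 15, 3), tau_xtz(10, 5, 15, 3))   # 3/2 and 2/3
```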
For any positive integers r, K, F, Z and S, an F× K array 𝐐 composed of “*" and integers from [S] is called a (r,K,F,Z,S) multiple-antenna placement delivery array (MAPDA) if it satisfies the following conditions: C1. The symbol “*" appears Z times in each column; C2. Each integer occurs at least once in the array; C3. Each integer s appears at most once in each column; C4. For any integer s∈[S], define 𝐐^(s) to be the subarray of 𝐐 including the rows and columns containing s, and let r'_s× r_s denote the dimensions of 𝐐^(s). The number of integer entries in each row of 𝐐^(s) is less than or equal to r, i.e., |{k_1∈ [r_s]| 𝐐^(s)(f_1,k_1)∈[S]}|≤ r, ∀ f_1 ∈ [r'_s]. □ If each integer appears g times in 𝐐, then 𝐐 is a g-regular MAPDA, denoted by g-(r,K,F,Z,S) MAPDA. We can check that the following 5× 5 array 𝐐 is a g-(r,K,F,Z,S) = 5-(4,5,5,1,4) MAPDA. 𝐐=([ * 1 1 1 1; 1 * 2 2 2; 2 2 * 3 3; 3 3 3 * 4; 4 4 4 4 * ]). For instance, when s =1, we have the following subarray. 𝐐^(1)=([ * 1 1 1 1; 1 * 2 2 2 ]). It can be seen that each row of 𝐐^(1) contains 4 integer entries, which is no more than r=4. Hence, 𝐐^(1) satisfies condition C4 of Definition <ref>. □ In a (r,K,M,N) MISO caching system, a server containing N files of the same size serves K users, each of which has a cache with a capacity of M files, through r antennas over the interference channel. Given a (r,K,F,Z,S) MAPDA 𝐐, we can obtain a (r,K,M,N) scheme for the MISO caching system in the following two phases. ∙ Placement phase: The K columns and F rows denote the users and the packets of each file, respectively. Specifically, the server divides each file into F packets; the entry 𝐐(f,k)=* means that the f^th packets of all files are cached by user k. Each user caches M=ZN/F files by Condition C1 of Definition <ref>. If 𝐐(f,k), for f∈[F] and k∈[K], is an integer s∈[S], then the f^th packet of each file is not stored by user k. Clearly, the placement strategy of a scheme realized by a MAPDA is uncoded cache placement. ∙ Delivery phase: The integer s∈[S] represents the delivery strategy at block s. For any demand vector d, at block s, the server first chooses a pre-coding matrix for the r antennas to encode the requested packets indicated by s, and then sends the coded packets to the users. In <cit.>, the authors pointed out that 1) at each block the server multicasts r_s packets requested by r_s different users, by Conditions C2-C3 of Definition <ref>; 2) Condition C4 of Definition <ref> ensures that at each block s∈ [S], the server can always find pre-coding matrices such that each user can recover its requested packet. So, the delivery strategy of a scheme realized by a MAPDA is one-shot linear delivery. Then we can obtain the following result. Given a (r,K,F,Z,S) MAPDA 𝐐, there exists an F-division scheme for the (r,K,M,N) multiple-antenna coded caching problem with memory ratio M/N=Z/F, sum-DoF K(F-Z)/S and subpacketization F. □ Under the constraints of uncoded cache placement and one-shot linear delivery, the maximum sum-DoF is upper bounded by min{K,KM/N+r} <cit.>, which is also an upper bound on the sum-DoF achieved by the caching schemes from MAPDAs. For the (r,K,M,N) multiple-antenna coded caching scheme with memory ratio M/N=Z/F generated by a (r,K,F,Z,S) MAPDA, the sum-DoF is no more than min{KZ/F+r=KM/N+r,K}. □ In the literature, the schemes in <cit.> can be represented by MAPDAs. So we summarize all the schemes which achieve the sum-DoF min{r+ KM/N,K} by MAPDAs in Table <ref>.
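As a quick sanity check of the definition, the sketch below verifies conditions C1-C4 for the example array 𝐐 above. The encoding ('*' stored as 0) and the helper name is_mapda are my own choices, not part of the paper.

```python
import numpy as np

# Example MAPDA Q from above, with '*' encoded as 0 and the integers 1..S kept as-is.
Q = np.array([[0, 1, 1, 1, 1],
              [1, 0, 2, 2, 2],
              [2, 2, 0, 3, 3],
              [3, 3, 3, 0, 4],
              [4, 4, 4, 4, 0]])

def is_mapda(Q, r, Z, S):
    F, K = Q.shape
    c1 = all((Q[:, k] == 0).sum() == Z for k in range(K))                 # C1: Z stars per column
    c2 = all((Q == s).any() for s in range(1, S + 1))                     # C2: every integer occurs
    c3 = all((Q[:, k] == s).sum() <= 1 for k in range(K) for s in range(1, S + 1))  # C3
    c4 = True                                                             # C4: row weight of Q^(s) <= r
    for s in range(1, S + 1):
        rows = np.where((Q == s).any(axis=1))[0]
        cols = np.where((Q == s).any(axis=0))[0]
        c4 = c4 and all((row > 0).sum() <= r for row in Q[np.ix_(rows, cols)])
    return c1 and c2 and c3 and c4

print(is_mapda(Q, r=4, Z=1, S=4))   # True: Q is a 5-(4,5,5,1,4) MAPDA
```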
In Table <ref>, it is not difficult to check that the subpacketizations F of the first six MAPDAs are exponential in the number of users; the subpacketizations of the last five MAPDAs are linear in the number of users; the second MAPDA is a special case of the fifth scheme; and the subpacketizations of the third and fourth MAPDAs are larger than that of the fifth scheme. So in the following we will only use the fifth, sixth, seventh and eighth MAPDAs in Table <ref>. The authors in <cit.> also pointed out that MAPDA can also be used for the SISO cache-aided interference channels by viewing the transmitters as transmit antennas. This is because the channel coefficient between any transmitter and user can be chosen independently and identically distributed (i.i.d.) from ℂ, which enables us to always find a pre-coding matrix for all the transmitters. However, the MAPDA cannot be directly applied to design coding schemes for the partially connected linear network, since a channel coefficient h^(s)_k,j can be chosen independently and identically from ℂ only if the transmitter T_j connects to the user U_k, and h^(s)_k,j=0 otherwise. This means that directly applying MAPDA would lead the users not to receive the signals carrying their desired packets, due to the disconnected links with h^(s)_k,j=0. § MAPDA FOR PARTIALLY CONNECTED LINEAR INTERFERENCE NETWORKS In this section, we will show that MAPDA can also be used to generate a coded caching scheme for the partially connected linear network by our novel construction and the Schwartz-Zippel Lemma <cit.>, i.e., the following result, which is proved in Section <ref>. Given a (r,K,F_1,Z_1,S_1) MAPDA, there exists a (K,L,M_T,M_U,N) coded caching scheme for the (K+L-1)× K partially connected linear network achieving the NDT τ = S_1/F_1 with L M_T/N=r/⌈ K/L⌉∈ℤ^+, M_U/N=Z_1/F_1, and subpacketization F=LF_1. □ By Theorem <ref> and the fifth, sixth, seventh and eighth MAPDAs in Table <ref>, we have the following schemes for the (K+L-1)× K partially connected linear network. For any positive integers K, L, m and N, there exist four (K,L,M_T,M_U,N) coded caching schemes for the (K+L-1)× K partially connected linear network with M_T/N∈{1/L,…,L-1/L,1}, M_U/N∈{0,1/K, …,K-1/K,1}, NDT τ_new=K(1-M_U/N)/min{K,(KM_U+M_TL⌈ K/L⌉)/N}, and the subpacketizations and parameter limitations listed in Table <ref>. §.§ Performance Analyses In this subsection, we will show the advantages of our schemes compared to the XTZ scheme in <cit.> from the points of view of subpacketization, NDT, and complexity in the delivery phase, respectively. Prior to the comparisons, we first introduce the data placement and delivery strategies of the scheme in <cit.>. §.§.§ The placement strategy for users of the XTZ scheme in <cit.> Given the data placement of the (L,M_U,N) MN scheme (i.e., the scheme for the shared-link network with L users, each with a cache size of M_U files), each user in a partially connected network consecutively chooses a placement method such that the placement methods chosen by any L consecutive users follow exactly the placement strategy of the (L,M_U,N) MN scheme. Here we summarize the subpacketization of the XTZ scheme in <cit.> as follows: \binom{L}{M_UL/N}L, if LM_T/N=1; \binom{L}{M_UL/N}(L-M_UL/N), if LM_T/N∈[2:L] and M_T/N+M_U/N≥ 1; \binom{L}{M_UL/N}\binom{L-M_UL/N-1}{M_TL/N-1}(L-M_UL/N), if LM_T/N∈[2:L] and M_T/N+M_U/N< 1.
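To illustrate the subpacketization expression just displayed, the following sketch evaluates it for a fixed pair of cache ratios and growing L. It assumes the reading of the flattened binomial coefficients given above, and the helper name f_xtz is mine.

```python
from math import comb

def f_xtz(mT, mU, L):
    """Subpacketization of the XTZ scheme per the case analysis displayed above;
    mT = M_T/N, mU = M_U/N, with L*mT and L*mU integers (sketch)."""
    p, q = round(L * mT), round(L * mU)
    if p == 1:
        return comb(L, q) * L
    if mT + mU >= 1:
        return comb(L, q) * (L - q)
    return comb(L, q) * comb(L - q - 1, p - 1) * (L - q)

for L in (4, 8, 16, 32):                      # fixed ratios mT = mU = 1/4
    print(L, f_xtz(0.25, 0.25, L))            # 16, 840, 3603600, ...
```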
From (<ref>), we observe that the XTZ scheme requires a subpacketization that grows exponentially with L. By Table <ref>, the subpacketization F of our schemes is small or linear in the number of users K for some parameters, instead of increasing exponentially as in the XTZ scheme, which demonstrates the advantage of small subpacketizations. Unlike the XTZ scheme, which takes L consecutive users at a time and uses the L-user MN data placement, our data placement is determined by the stars in a (r,K,F_1,Z_1,S_1) MAPDA, which implies that our schemes globally design the placement and delivery among all the K users. Naturally, our schemes could improve the NDT of the XTZ scheme for some parameters, as the global design of all users' data placement could potentially create larger multicast opportunities. By theoretical comparisons, we obtain Table <ref>, whose proof is included in Appendix <ref>. By Table <ref>, we can see that our schemes reduce the NDT compared to the XTZ scheme when M_U/N+M_T/NL/K⌈K/L⌉≥ 1 or M_U/N+M_T/N< 1 and L∤ K; when M_U/N+M_T/N≥ 1, our scheme has the same optimal NDT as the XTZ scheme. Let us also take a numerical comparison to verify our claim in Table <ref>. When M_T/N=1/2, K=10, and L=6, we can obtain the NDTs of our schemes in Theorem <ref> and the XTZ scheme, respectively, as plotted in Fig. <ref>. We can see that the NDTs of Scheme 1, Scheme 3, and Scheme 4 are smaller than the NDT of the XTZ scheme when M_U/N≤1/2, which corresponds to the conditions in the second and third rows of Table <ref>. Recall that the XTZ scheme already achieves the optimal NDT when 1/2≤M_U/N, corresponding to the condition in the first row of Table <ref>. Clearly, Scheme 1 and Scheme 2 also have the same NDT in this regime, which implies that they are also optimal. Finally, by Table <ref> we can see that there exist some schemes in Theorem <ref> having both smaller NDTs and smaller subpacketizations than those of the XTZ scheme. §.§.§ The placement strategy for transmitters of the XTZ scheme in <cit.> For the placement strategy of the transmitters, when L M_T/N = 1, each of every L consecutive transmitters caches a distinct part of each file. This is the same as our placement strategy for the transmitters. When L M_T/N> 1, the placement strategy for the transmitters relies on the placement strategy for the users. Clearly, our placement strategy for the transmitters is independent of the placement strategy for the users. §.§.§ The delivery strategy of the XTZ scheme in <cit.> In the delivery phase of the XTZ scheme, the computational complexity mainly comes from the design of the precoding matrices and decoding matrices to align or neutralize the interference in the communication <cit.>. Specifically, it requires the users to first wait for multiple transmission slots and then compute the decoding matrices, which are the inverses of the products of the channel coefficient matrices and the precoding matrices. For example, for the case LM_T/N=1, the XTZ scheme applies interference alignment in the transmission, which requires in total L(\binom{L-1}{M_UL/N} n^ρ+\binom{L-1}{M_UL/N+1}(n+1)^ρ) transmission slots, where n∈ℤ^+ and ρ=(K+L-1)(L-M_UL/N-1). On the contrary, there is no such matrix computation in our schemes due to the one-shot delivery, i.e., each user can directly obtain its desired symbol after every transmission. §.§ Sketch of Construction in Theorem <ref> Let us consider a (K+L-1)× K =7× 5 partially connected linear network with (K,L,M_T,M_U,N)=(5,3,10,3,15).
We will show how to generate a coded caching scheme based on the g-(r,K,F_1,Z_1,S_1)=5-(4,5,5,1,4) MAPDA 𝐐 in Example <ref>. Clearly, r/⌈ K/L⌉=2 is an integer, i.e., the condition in Theorem <ref> holds. Our main construction idea is that we first generate a new g-(r,K,LF_1,LZ_1,LS_1)=5-(4,5,15,3,12) MAPDA 𝐏 based on 𝐐, together with an LF_1× (K+L-1)=15× 7 array 𝐓 called the transmitter caching array. Then, we propose the placement strategies for the transmitters and users according to the stars in 𝐓 and 𝐏, respectively. According to the integers in 𝐏, the delivery strategy is proposed. So, our method consists of constructing the two arrays 𝐏 and 𝐓, and realizing a scheme via 𝐓 and 𝐏. §.§.§ Constructing 𝐏 and 𝐓 By stacking 𝐐 from Example <ref> vertically three times and then increasing the integers in the l^th copy by (l-1)S_1=(l-1)·4, we obtain the 15× 5 array 𝐏 as follows. 𝐏=([ 𝐐; 𝐐+4; 𝐐+8 ]). It is not difficult to check that the obtained array 𝐏 is a g-(r,K,LF_1,LZ_1,LS_1)=5-(4,5,15,3,12) MAPDA. Now let us construct the 15× 7 array 𝐓. As illustrated in Fig. <ref>, we first construct an L× L=3× 3 square array 𝐀 in which each row has t=LM_T/N=2 cyclically placed stars, replicate each row of 𝐀 vertically F_1=5 times to obtain 𝐁, further replicate 𝐁 horizontally ⌈ (K-1)/L⌉+1=3 times, and finally delete the last L·(⌈ (K-1)/L⌉+1)-(K+L-1)=9-7=2 columns to obtain our desired array 𝐓 in Fig. <ref>. In order to simplify the presentation, we use the pair (l, f) to represent the row label, for each l∈ [L] and f∈ [F_1]. That is, the arrays are written as 𝐏=𝐏((l, f),k)_l∈ [L], f∈ [F_1],k∈ [K] and 𝐓=𝐓((l, f),j)_l∈ [L],f∈ [F_1], j∈ [K+L-1]. §.§.§ Generating a scheme via 𝐓 and 𝐏 Let the K+L-1 columns and LF_1 rows of 𝐓 in Fig. <ref> represent the transmitters and the packets of each file, respectively; and let the K columns and LF_1 rows of 𝐏 in (<ref>) represent the users and the packets of each file, respectively. Then we can obtain a coded caching scheme for the (K+L-1)× K=7× 5 partially connected network as follows. ∙ Placement phase: We divide each file into LF_1=15 packets of equal size, i.e., for any n∈[15], we have W_n={W_n,(l, f) | l∈ [L]=[3], f∈ [F_1]=[5]}. Each transmitter and user caches the packets according to the stars in the arrays 𝐓 and 𝐏, respectively. Specifically, each transmitter T_j, where j∈ [K+L-1], caches the packet W_n,(l, f) if the entry 𝐓((l, f),j)=*, i.e., the transmitters cache the following packets. 𝒵_T_1= {W_n,(1, f), W_n,(3, f) | f∈ [5], n∈[15]}, 𝒵_T_2= {W_n,(1, f), W_n,(2, f) | f∈ [5], n∈[15]}, 𝒵_T_3= {W_n,(2, f), W_n,(3, f) | f∈ [5], n∈[15]}, 𝒵_T_4= {W_n,(1, f), W_n,(3, f) | f∈ [5], n∈[15]}, 𝒵_T_5= {W_n,(1, f), W_n,(2, f) | f∈ [5], n∈[15]}, 𝒵_T_6= {W_n,(2, f), W_n,(3, f) | f∈ [5], n∈[15]}, 𝒵_T_7= {W_n,(1, f), W_n,(3, f) | f∈ [5], n∈[15]}. Each user U_k, for k∈ [K], caches the packet W_n,(l, f) if the entry 𝐏((l, f),k)=*, i.e., the users cache the following packets. 𝒵_U_1= {W_n,(l,1) | l∈ [L]=[3], n∈[15]}, 𝒵_U_2= {W_n,(l,2) | l∈ [L]=[3], n∈[15]}, 𝒵_U_3= {W_n,(l,3) | l∈ [L]=[3], n∈[15]}, 𝒵_U_4= {W_n,(l,4) | l∈ [L]=[3], n∈[15]}, 𝒵_U_5= {W_n,(l,5) | l∈ [L]=[3], n∈[15]}. ∙ Delivery phase: Assume that the request vector is d=(1,2,3,4,5). In each block s, we send all the packets indexed by the integer s in 𝐏. For instance, when s=1 we have 𝐏((1,2),1)=𝐏((1,1),2) =𝐏((1,1),3)=𝐏((1,1),4) =𝐏((1,1),5)=1. Then, we let the transmitters send the packets 𝐖^(1) =(W̃_1,(1,2), W̃_2,(1,1),W̃_3,(1,1),W̃_4,(1,1),W̃_5,(1,1))^⊤, where W̃_n,(l, f) denotes the coded version of the packet W_n,(l, f), for l∈[L], f∈ [F_1]. Clearly, each of these packets is requested by one of the five users.
From (<ref>), transmitters T_3 and T_6 do not cache any packet of W^(1). So the signals transmitted by all the transmitters can be represented as follows.𝐗^(1) =( [ 𝐱^(1)_T_1; 𝐱^(1)_T_2; ⋮; 𝐱^(1)_T_7 ])=𝐕^(1)𝐖^(1)= ( [ v^(1)_1,1 v^(1)_1,2 v^(1)_1,3 v^(1)_1,4 v^(1)_1,5; v^(1)_2,1 v^(1)_2,2 v^(1)_2,3 v^(1)_2,4 v^(1)_2,5; 0 0 0 0 0; v^(1)_4,1 v^(1)_4,2 v^(1)_4,3 v^(1)_4,4 v^(1)_4,5; v^(1)_5,1 v^(1)_5,2 v^(1)_5,3 v^(1)_5,4 v^(1)_5,5; 0 0 0 0 0; v^(1)_7,1 v^(1)_7,2 v^(1)_7,3 v^(1)_7,4 v^(1)_7,5 ]) ( [ W̃_1,(1,2); W̃_2,(1,1); W̃_3,(1,1); W̃_4,(1,1); W̃_5,(1,1) ]).Here each entry v^(1)_j,k, for j∈{1,2,4,5,7}, can be chosen from any value of ℂ. According to thepartially connected topology and from (<ref>), the signals received by all the users are 𝐘^(1) = ( [ 𝐲^(1)_U_1; 𝐲^(1)_U_2; 𝐲^(1)_U_3; 𝐲^(1)_U_4; 𝐲^(1)_U_5 ]) =( [ h^(1)_1,1 h^(1)_1,2 h^(1)_1,3 0 0 0 0; 0 h^(1)_2,2 h^(1)_2,3 h^(1)_2,4 0 0 0; 0 0 h^(1)_3,3 h^(1)_3,4 h^(1)_3,5 0 0; 0 0 0 h^(1)_4,4 h^(1)_4,5 h^(1)_4,6 0; 0 0 0 0 h^(1)_5,5 h^(1)_5,6 h^(1)_5,7 ])( [ v^(1)_1,1 v^(1)_1,2 v^(1)_1,3 v^(1)_1,4 v^(1)_1,5; v^(1)_2,1 v^(1)_2,2 v^(1)_2,3 v^(1)_2,4 v^(1)_2,5; 0 0 0 0 0; v^(1)_4,1 v^(1)_4,2 v^(1)_4,3 v^(1)_4,4 v^(1)_4,5; v^(1)_5,1 v^(1)_5,2 v^(1)_5,3 v^(1)_5,4 v^(1)_5,5; 0 0 0 0 0; v^(1)_7,1 v^(1)_7,2 v^(1)_7,3 v^(1)_7,4 v^(1)_7,5 ]) ( [ W̃_1,(1,2); W̃_2,(1,1); W̃_3,(1,1); W̃_4,(1,1); W̃_5,(1,1) ])= ( [ h^(1)_1,1 h^(1)_1,2 0 0 0; 0 h^(1)_2,2 h^(1)_2,4 0 0; 0 0 h^(1)_3,4 h^(1)_3,5 0; 0 0 h^(1)_4,4 h^(1)_4,5 0; 0 0 0 h^(1)_5,5 h^(1)_5,7 ])( [ v^(1)_1,1 v^(1)_1,2 v^(1)_1,3 v^(1)_1,4 v^(1)_1,5; v^(1)_2,1 v^(1)_2,2 v^(1)_2,3 v^(1)_2,4 v^(1)_2,5; v^(1)_4,1 v^(1)_4,2 v^(1)_4,3 v^(1)_4,4 v^(1)_4,5; v^(1)_5,1 v^(1)_5,2 v^(1)_5,3 v^(1)_5,4 v^(1)_5,5; v^(1)_7,1 v^(1)_7,2 v^(1)_7,3 v^(1)_7,4 v^(1)_7,5 ])( [ W̃_1,(1,2); W̃_2,(1,1); W̃_3,(1,1); W̃_4,(1,1); W̃_5,(1,1) ]) =𝐇^(1)_1𝐕^(1)_1𝐖^(1) =𝐑^(1)_1𝐖^(1). It is not difficult to check that all the users can decode their requesting packets by their cached packets respectively if𝐑^(1)_1= ( [ 1 a_1 a_2 a_3 a_4; a_5 1 0 0 0; 0 0 1 0 0; 0 0 0 1 0; 0 0 0 0 1 ]).Herea_i, for i∈[5], can be any complex number in ℂ. For instance, given𝐑^(1)_1 in (<ref>), the user U_1 observes the coded signal𝐲^(1)_U_1= W̃_1,(1,2)+a_1 W̃_2,(1,1)+a_2W̃_3,(1,1)+a_3W̃_4,(1,1)+a_4W̃_5,(1,1).From (<ref>), user U_1 has cached the packets W_2,(1,1), W_3,(1,1), W_4,(1,1) and W_5,(1,1). So, it can obtain thecoded packets W̃_2,(1,1), W̃_3,(1,1), W̃_4,(1,1) and W̃_5,(1,1). Clearly it can decode the required W̃_1,(1,2) based on the received 𝐲^(1)_U_1 and then recover the desired packet W_1,(1,2).Recall that the non-zero entry in𝐇^(1) is independent and identically distributed in ℂ. For instance, let𝐇^(1)_1=( [ 1 2 0 0 0; 0 1 2 0 0; 0 0 1 2 0; 0 0 1 3 0; 0 0 0 1 2 ]), 𝐑^(1)_1=( [ 1 1 1 1 1; 1 1 0 0 0; 0 0 1 0 0; 0 0 0 1 0; 0 0 0 0 1 ]).We have 𝐇^(1) is full rank and(𝐇^(1)_1)^-1=( [1 -2 12 -80;01 -640;003 -20;00 -110;000.5 -0.50.5 ]).From (<ref>) we have𝐕^(1)_1 =(𝐇^(1)_1)^-1( [ 1 1 1 1 1; 1 1 0 0 0; 0 0 1 0 0; 0 0 0 1 0; 0 0 0 0 1 ])= ( [ -1 -1 13 -71;11 -640;003 -20;00 -110;000.5 -0.50.5 ]). Then, we have the precoding matrix𝐕^(1)=( [ -1 -1 13 -71;11 -640;00000;003 -20;00 -110;00000;000.5 -0.50.5 ]).So, for any give channel matrix 𝐇^(1), we can always obtain a precoding matrix 𝐕^(1) such that (<ref>) holds. Similarly, we can check the other s=2, …, 12. From (<ref>) and (<ref>), we can obtain the NDTτ_new(M_T=10,M_U=3) and sum-DoF_new as follows.τ_new(M_T =10,M_U=3) =LS_1/LF_1=12/15=4/5, Sum-DoF_new = K(1-M_U/N)/τ_new(M_T=10,M_U=3) =5. 
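The worked example can be reproduced numerically. The sketch below (with '*' encoded as 0 and 0-indexed rows and columns, both my own conventions) rebuilds 𝐏 from 𝐐, rebuilds the transmitter caching pattern from the cyclic rule, and checks the block s=1 channel/precoder pair quoted above.

```python
import numpy as np

S1, L, F1, K, t = 4, 3, 5, 5, 2
Q = np.array([[0, 1, 1, 1, 1],        # '*' encoded as 0
              [1, 0, 2, 2, 2],
              [2, 2, 0, 3, 3],
              [3, 3, 3, 0, 4],
              [4, 4, 4, 4, 0]])
# P = [Q; Q+4; Q+8]: shift the integers of the l-th copy by l*S1, keep the stars.
P = np.vstack([np.where(Q > 0, Q + l * S1, 0) for l in range(L)])        # 15 x 5

# Transmitter caching rule: row block l is cached by transmitter j iff (j - l) mod L < t.
T = np.array([[((j - lf // F1) % L) < t for j in range(K + L - 1)]
              for lf in range(L * F1)])                                   # 15 x 7, True = '*'
print([j + 1 for j in range(K + L - 1) if T[0, j]])   # [1, 2, 4, 5, 7]: transmitters caching W_n,(1,f)

# Block s = 1: the channel/precoder pair quoted above indeed yields the decodable matrix R.
H1 = np.array([[1, 2, 0, 0, 0], [0, 1, 2, 0, 0], [0, 0, 1, 2, 0],
               [0, 0, 1, 3, 0], [0, 0, 0, 1, 2]], float)
R1 = np.array([[1, 1, 1, 1, 1], [1, 1, 0, 0, 0], [0, 0, 1, 0, 0],
               [0, 0, 0, 1, 0], [0, 0, 0, 0, 1]], float)
V1 = np.linalg.solve(H1, R1)          # precoder restricted to the caching transmitters
print(np.allclose(H1 @ V1, R1))       # True
```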
Now let us see the performance of the XTZ scheme. By Lemma <ref>, we have to use memory sharing to obtain a (K,L,M_T,M_U,N)=(5,3,10,3,15) coded caching scheme for the partially connected linear network, generated from the (K=5,L=3,M_T^'=10,M_U^'=0,N=15) scheme in <cit.>, say Scheme A, where each file has 2/5V bits, and the (K=5,L=3,M_T^”=10,M_U^”=5,N=15) scheme in <cit.>, say Scheme B, where each file has 3/5V bits, since M_T^'·2V/5+M_T^”·3V/5 =10V=M_T V and M_U^'·2V/5+M_U^”·3V/5 =5·3V/5=3V=M_U V. By Lemma <ref>, we have p'=M_T^' L/N=2 and q'=M_U^' L/N=0. Then, by the third statement of Lemma <ref>, the NDT of Scheme A is τ_A(M_T^' = 10, M_U^'=0)=(L-M_U^' L/N)/(M_T^' L/N+M_U^' L/N)= 3/2. Similarly, we have p^”=M_T^” L/N=2 and q^”=M_U^” L/N=1. Then, by the second statement of Lemma <ref>, the NDT of Scheme B is τ_B(M_T^”=10,M_U^”=5) =(L-M_U^” L/N)/L =2/3. Thus, the obtained scheme with M_U/N=1/5 has the following NDT: τ_XTZ(M_T=10,M_U=3)= 2/5·τ_A(M_T=10,M_U^'=0) +3/5·τ_B(M_T=10, M_U^”=5)=2/5·3/2 +3/5·2/3= 1 > 4/5=τ_new(M_T=10,M_U=3), and the following sum-DoF: Sum-DoF_XTZ = K(1-M_U/N)/τ_XTZ(M_T=10,M_U=3)=4<5=Sum-DoF_new. So, our scheme achieves a smaller NDT and a larger sum-DoF than those of the XTZ scheme. § THE PROOF OF THEOREM <REF> Let us consider the (K,L,M_T,M_U,N) partially connected coded caching problem, where t=LM_T/N∈[0:L] and z=KM_U/N∈ [0:K]. Given a (r,K,F_1,Z_1,S_1) MAPDA 𝐐 satisfying r/⌈ K/L⌉=t, as introduced in Subsection <ref>, we first construct a (r, K,L F_1,L Z_1,L S_1) MAPDA 𝐏 and an LF_1× (K+L-1) transmitter caching array 𝐓, and then generate our desired partially connected coded caching scheme by using these two arrays. §.§ Constructing MAPDA 𝐏 and Transmitter Cache Array 𝐓 We can obtain a new array 𝐏 by replicating 𝐐 vertically L times and then increasing the integers in the l^th copy by (l-1)S_1, i.e., the LF_1× K array 𝐏 is constructed as 𝐏= (𝐏((l, f),k))_l∈[L], f∈ [F_1],k∈ [K] =([𝐐;𝐐+S_1;⋮; 𝐐+(L-1)S_1 ]). It is easy to check that 𝐏 is a (r, K,L F_1,L Z_1,L S_1) MAPDA. From (<ref>), each entry of 𝐏 can be defined in the following way: 𝐏((l, f),k)=𝐐( f,k)+(l-1)S_1, l∈[L], f∈[F_1], k∈[K]. Here the sum a+*=* for any integer a∈ℤ. Let us introduce the construction of 𝐓. We first construct a cyclic star placement array, which is defined in <cit.> as follows. (Cyclic star placement) An (L,t) star placement array 𝐀=𝐀(l,l')_l,l'∈ [L], consisting of stars and null entries, is referred to as a cyclic star placement array if the stars in each row are placed in a cyclic wrap-around topology, i.e., each entry 𝐀(l,l')=* only if l'∈{<l+μ>_L| μ∈[0:t-1]}. For instance, the (3,2) cyclic star placement array 𝐀 listed in Fig. <ref> has stars in columns {1,2}, {2,3} and {3,1} of rows 1, 2 and 3, respectively. As illustrated in Fig. <ref>, we replicate each row of 𝐀 vertically F_1 times to obtain 𝐁, further replicate 𝐁 horizontally ⌈ (K-1)/L⌉+1 times, and delete the last L·(⌈ (K-1)/L⌉+1)-(K+L-1) columns to obtain our desired array 𝐓. In fact, each entry of the obtained array 𝐓=𝐓((l, f),j)_l∈ [L],f∈[F_1],j∈[K+L-1] can be defined as follows: 𝐓((l, f),j)=* if <j>_L ∈{<l+μ>_L | μ∈[0:t-1]}, and 𝐓((l, f),j) is null otherwise. For instance, when t=2, K=5, F_1=5, L=3, from (<ref>) we have the transmitter caching array 𝐓 listed in Fig.
<ref>. §.§ The Scheme Realized by 𝐏 and 𝐓 Given the (r,K,LF_1,LZ_1,LS_1) MAPDA 𝐏 and the transmitter cache array 𝐓 constructed in the above subsection, we can obtain an LF_1-division (K,L,M_T,M_U,N) coded caching scheme for the (K+L-1)× K partially connected network in the following way. ∙ Placement phase: Each file W_n, where n∈ [N], is divided into LF_1 packets of equal size, i.e., W_n=(W_n,(l, f))_l∈[L], f∈ [F_1]. From (<ref>), each transmitter T_j, where j∈[K+L-1], caches the following packets: 𝒵_T_j={W_n,(l, f) | 𝐓((l, f),j)=*, l∈ [L],f∈ [F_1], n∈ [N]}. We can check that transmitter T_j caches exactly tF_1N packets. Recall that t=LM_T/N. So we have M_T=tF_1N/LF_1. From (<ref>), each user U_k caches the following packets: 𝒵_U_k={W_n,(l, f) | 𝐏((l, f),k)=*, l∈ [L],f∈ [F_1], n∈ [N]}. We can check that user U_k caches exactly LZ_1N packets and M_U=LZ_1N/LF_1=Z_1N/F_1. ∙ Delivery phase: For any request vector d, the delivery strategy consists of LS_1 blocks. For each block s∈ [LS_1], we assume that there are r_s entries 𝐏((l_1, f_1),k_1), 𝐏((l_2, f_2),k_2), … , 𝐏((l_r_s, f_r_s),k_r_s) equal to s, where l_i∈[L], f_i∈[F_1] and k_i∈[K] for each i∈ [r_s]. The vector of packets to be transmitted in block s and the set of users that should recover these packets are denoted by 𝐖^(s)=( [ W̃_d_k_1,(l_1, f_1); W̃_d_k_2,(l_2, f_2); ⋮; W̃_d_k_r_s,(l_r_s, f_r_s) ]) and 𝒦_s={k_1,k_2,…,k_r_s}, respectively, where W̃_n,(l, f) denotes the coded version of W_n,(l, f), for l∈[L], f∈ [F_1]. By property C3 of the MAPDA, we have that |𝒦_s|=r_s. Without loss of generality, we assume that k_1<k_2<⋯<k_r_s and that each user U_k_i requires the packet W_d_k_i,(l_i, f_i), where i∈ [r_s]. Then, each transmitter T_j, j ∈ [K+L-1], transmits 𝐱^(s)_T_j=∑_i=1^r_s v^(s)_j,iW̃_d_k_i,(l_i, f_i), where v^(s)_j,i=0 if l_i∉{<j>_L, <j-1>_L, …, <j-t+1>_L} by (<ref>) and (<ref>), and otherwise v^(s)_j,i can be chosen as any complex number. For the users in 𝒦_s, the signals received at block s can be written as 𝐘^(s) =𝐇^(s)𝐗^(s) =𝐇^(s)𝐕^(s)𝐖^(s)=𝐇^(s)( v^(s)_1, v^(s)_2,…, v^(s)_r_s) 𝐖^(s)=(𝐇^(s) v^(s)_1,𝐇^(s) v^(s)_2,…,𝐇^(s) v^(s)_r_s) 𝐖^(s)=𝐑^(s)𝐖^(s). Recall that each user U_k_i receives the signal consisting of 𝐱^(s)_T_k_i, 𝐱^(s)_T_k_i+1, …, 𝐱^(s)_T_k_i+L-1, sent from the transmitters T_k_i, T_k_i+1, …, T_k_i+L-1, respectively, i.e., from (<ref>), the signal y_k_i^(s) =∑_j=k_i^k_i+L-1h^(s)_k_i,j𝐱^(s)_T_j=∑_j=k_i^k_i+L-1h^(s)_k_i,j(∑_i'=1^r_s v^(s)_j,i'W̃_d_k_i',(l_i', f_i'))=∑_j=k_i^k_i+L-1∑_i'=1^r_sh^(s)_k_i,j v^(s)_j,i'W̃_d_k_i',(l_i', f_i')=∑_i'=1^r_s(∑_j=k_i^k_i+L-1 h^(s)_k_i,jv^(s)_j,i')W̃_d_k_i',(l_i', f_i'). In our scheme, we design the beamforming vectors {v^(s)_j,i} to ensure one-shot delivery, i.e., each requested packet can be directly decoded by its intended user. We will explain the design of {v^(s)_j,i} in the following subsection. §.§ Decodability for Each User Now let us consider the subarray 𝐏^(s) generated by the rows (l_1, f_1), (l_2, f_2), …, (l_r_s, f_r_s) and the columns in 𝒦_s. In the following, we will take the column indices in 𝒦_s and the row indices (l_1, f_1), (l_2, f_2), …, (l_r_s, f_r_s) as the column indices and row indices of the subarray 𝐏^(s), respectively. For each i∈ [r_s], assume that there are λ^(s)_i columns with indices in 𝒦_s containing integers in row (l_i, f_i) of 𝐏. The set of these column indices can be written as 𝒫^(s)_i={k_i'∈𝒦_s | 𝐏((l_i, f_i),k_i')∈ [LS_1], i'∈ [r_s]}. Clearly, |𝒫^(s)_i|=λ^(s)_i.
By (<ref>) and (<ref>), the demanded packet W_d_k_i,(l_i, f_i) required by user U_k_i is not cached by user U_k_i' if k_i'∈𝒫^(s)_i. So, (<ref>) can be written as y_k_i^(s) =∑_i'=1^r_s(∑_j=k_i^k_i+L-1 h^(s)_k_i,jv^(s)_j,i')W̃_d_k_i',(l_i', f_i')=(∑_j=k_i^k_i+L-1 h^(s)_k_i,jv^(s)_j,i)W̃_d_k_i,(l_i, f_i)_Required & uncached packet+∑_i'∈[r_s]:k_i∈𝒫^(s)_i'∖{k_i'}(∑_j=k_i^k_i+L-1 h^(s)_k_i,jv^(s)_j,i')W̃_d_k_i',(l_i', f_i') _Unrequired & uncached packets+∑_i'∈[r_s]:k_i∈𝒦_s∖𝒫^(s)_i'(∑_j=k_i^k_i+L-1 h^(s)_k_i,jv^(s)_j,i')W̃_d_k_i',(l_i', f_i') _Cached packets, where the packet in the first term on the right-hand side is required by user U_k_i; the packets in the second term are neither required nor cached by user U_k_i; and the packets in the third term are not required but are cached by user U_k_i. Clearly, we only need to consider the packets in the first two terms of (<ref>), since the user U_k_i can cancel all the packets in the third term by its cached contents. In order to decode the desired packet W̃_d_k_i,(l_i, f_i), we have to cancel the interfering packets in the second term of (<ref>). Clearly, each user U_k_i, where i∈ [r_s], can decode its required packet W̃_d_k_i,(l_i, f_i) from its received signal y_k_i^(s) and its cached packets 𝒵_U_k_i if the following conditions hold for any two different integers i,i'∈ [r_s]: 1=∑_j=k_i^k_i+L-1 h^(s)_k_i,jv^(s)_j,i= 𝐇^(s)(k_i,·)𝐯^(s)_i =𝐑^(s)(i,i), and 0=∑_j=k_i^k_i+L-1 h^(s)_k_i,jv^(s)_j,i'= 𝐇^(s)(k_i,·)𝐯^(s)_i' =𝐑^(s)(i,i') for k_i∈𝒫^(s)_i'∖{k_i'}. It is worth noting that the first equality in (<ref>) means that the user U_k_i can decode its desired packet W̃_d_k_i,(l_i, f_i), and the second equality in (<ref>) means that the user U_k_i can cancel the unrequired and uncached coded packet W̃_d_k_i',(l_i', f_i'). Recall that for any i∈ [r_s] and j∈ [K+L-1], the coefficient h^(s)_k_i,j=0 if j∉{k_i,k_i+1,…,k_i+L-1}, and otherwise h^(s)_k_i,j is drawn i.i.d. from ℂ. Since all the required packets are sent by the transmitters during the whole communication process, each user can decode its required file if (<ref>) holds for all s∈ [LS_1]. So, it is sufficient to show that there exists a precoding matrix 𝐕^(s) satisfying (<ref>), for each s∈ [LS_1]. §.§ The Existence of a Precoding Matrix 𝐕^(s) Satisfying (<ref>) Now we will show that we can choose appropriate coefficients v^(s)_j,i for all j∈[K+L-1] and i∈[r_s] such that (<ref>) always holds. Recall from (<ref>) that the transmitters cache all the packets in a successive placement. So, there are exactly t transmitters in {T_k_i, T_k_i+1,…, T_k_i+L-1} caching each packet W_d_k_i,(l_i, f_i), where i∈ [r_s]. Without loss of generality, we assume that the index set of the transmitters, each of which caches W_d_k_i,(l_i, f_i), is 𝒯={i+mL |i∈ [t], m∈[⌊(K+L-1)/L⌋]}⋂[K+L-1]. By the assumption that each coefficient v^(s)_j,i=0 in (<ref>) for each j∉𝒯, there are exactly |𝒯| coefficients that can take any complex value. So, (<ref>) can be written as 𝐇^(s)(𝒫^(s)_i, 𝒯)𝐯^(s)_i(𝒯)= b=(b_1,b_2,…, b_λ^(s)_i)^⊤, where b_i'=1 if k_i'=k_i and b_i'=0 if k_i'∈𝒫^(s)_i∖{k_i}. Recall that r/⌈ K/L⌉=t, i.e., r=t⌈ K/L⌉. By the fourth condition (<ref>) of Definition <ref>, we have |𝒯|≥ t⌊(K+L-1)/L⌋≥ t⌈K/L⌉≥λ^(s)_i=|𝒫^(s)_i|. This implies that the number of rows of 𝐇^(s)(𝒫^(s)_i, 𝒯) is less than or equal to the number of columns of 𝐇^(s)(𝒫^(s)_i, 𝒯). Furthermore, by the Schwartz-Zippel Lemma <cit.>, we obtain the following result, whose proof is included in Appendix <ref>.
𝐇^(s)(𝒫^(s)_i, 𝒯) is a full row rank matrix. By linear algebra, there must then exist a 𝐯^(s)_i(𝒯)∈ℂ^|𝒯| satisfying (<ref>). Then, by adding K+L-1-|𝒯| zero entries to 𝐯^(s)_i(𝒯), we can obtain a column vector 𝐯^(s)_i that satisfies (<ref>). Finally, from (<ref>) and (<ref>), we obtain the NDT τ_new(M_T,M_U)=S/F and sum-DoF_new = K(1-M_U/N)/τ_new(M_T,M_U)=K(1-Z_1/F_1)F/S=g. § CONCLUSION In this paper, we studied the coded caching problem for the (K,L,M_T,M_U,N) partially connected linear network. Firstly, we showed that MAPDA can also be used to design schemes for the partially connected linear network, with a delicate design of the data placement at the transmitters and users. Consequently, by the existing MAPDAs and a delicate construction method, we obtained some new schemes for the partially connected linear network which have smaller NDT than that of the XTZ scheme in many cases. Furthermore, our schemes operate in one-shot linear delivery and can significantly reduce the subpacketizations compared to the XTZ scheme. This implies that our schemes are communication-efficient and have a wider range of applications and lower complexity of implementation. § PROOF OF LEMMA <REF> First, the following notation and assumption are used to simplify the presentation. Let 𝐇=𝐇^(s)(𝒫^(s)_i, 𝒯). We also use the row indices and column indices of 𝐇^(s) as the row indices and column indices of 𝐇. That is, 𝐇=(𝐇(k,j))_k∈𝒫^(s)_i,j∈𝒯. Without loss of generality, we assume that 𝒫^(s)_i={k_1,k_2,…,k_λ^(s)_i}. The main idea is that we first find a non-zero path (𝐇(k_1,j_1), 𝐇(k_2,j_2),…,𝐇(k_λ^(s)_i,j_λ^(s)_i)), where 𝐇(k_λ,j_λ)≠ 0 for all λ∈ [λ^(s)_i] with all j_λ∈𝒯 distinct, then select the columns j_1, j_2, …, j_λ^(s)_i and all the rows to form a square matrix 𝐌 with non-zero diagonal elements, and finally view the determinant of 𝐌 as a non-zero polynomial of (𝐇(k_1,j_1), 𝐇(k_2,j_2),…,𝐇(k_λ^(s)_i,j_λ^(s)_i)) <cit.>, so that 𝐌 is invertible with high probability by the following Schwartz-Zippel Lemma <cit.>. Let f∈ℱ[x_1,x_2,…,x_n] be a non-zero polynomial of total degree d≥0 over a field ℱ. Let 𝒮 be a finite subset of ℱ and let r_1, r_2, …, r_n be selected at random, independently and uniformly, from 𝒮. Then, Pr(f(r_1,r_2,…,r_n)=0)≤d/|𝒮|. From the above introduction, we now only need to show that we can always find a non-zero path (𝐇(k_1,j_1), 𝐇(k_2,j_2), …, 𝐇(k_λ^(s)_i,j_λ^(s)_i)), where 𝐇(k_λ,j_λ)≠ 0 for all λ∈ [λ^(s)_i] with all j_λ∈𝒯 distinct. From (<ref>), we have that each row of 𝐇 has exactly t successive non-zero complex entries. Recall that 𝒫^(s)_i={k_1,k_2, …,k_λ^(s)_i}. By the connectivity assumption between transmitters and users, let j_1=min({k_1,k_1+1, …,k_1+L-1}⋂𝒯) and j_λ=min(({k_λ,k_λ+1,…,k_λ+L-1}⋂𝒯)∖{j_1,j_2,…,j_λ-1}) for each λ∈ [2: λ^(s)_i]. Clearly, 𝐇(k_λ,j_λ)≠ 0 always holds for each λ∈ [λ^(s)_i] by our placement strategy for the transmitters, i.e., (<ref>) and (<ref>). Furthermore, we have that j_1<j_2<⋯<j_λ^(s)_i by our assumption that k_1<k_2<⋯<k_λ^(s)_i. From the above discussion, we have that 𝐇=𝐇^(s)(𝒫^(s)_i, 𝒯) is full row rank. § PROOF OF TABLE <REF> Let us compare the values of τ_XTZ in Lemma <ref> and τ_new in Theorem <ref> according to the values of M_U/N and M_T/N. §.§.§ M_U/N+M_T/N≥ 1 Clearly we have M_U/N+M_T/NL/K⌈K/L⌉≥ 1 and τ_XTZ =(1-M_U/N)/min{M_T/N+M_U/N, 1 }= 1-M_U/N = K(1-M_U/N)/min{K,(KM_U+M_TL⌈ K/L⌉)/N} = τ_new, since min{M_T/N+M_U/N, 1}=1 and min{K,(KM_U+M_TL⌈ K/L⌉)/N}=K. This implies that our scheme also achieves the optimal NDT since the NDT in <cit.> is optimal.
§.§.§ M_U/N+M_T/N< 1 and M_U/N+M_T/NL/K⌈K/L⌉≥ 1 According to the value of M_TL/N in Lemma <ref>, we have to consider the following two subcases, i.e., M_T/N=1/L and M_T/N∈{2/L,…,L-1/L, 1}. When M_T/N=1/L we have τ_XTZ = (1-1/L+1/(M_UL/N + 1) )(1-M_U/N)>1-M_U/N=τ_new. When M_T/N∈{2/L,…,L-1/L, 1}, we have τ_XTZ = (1-M_U/N)/min{M_T/N+M_U/N, 1 }> 1-M_U/N = K(1-M_U/N)/min{K,(KM_U+M_TL⌈ K/L⌉)/N}=τ_new, since min{M_T/N+M_U/N, 1}=M_T/N+M_U/N<1 and min{K,(KM_U+M_TL⌈ K/L⌉)/N}=K. §.§.§ M_U/N+M_T/NL/K⌈K/L⌉<1 According to the value of M_TL/N in Lemma <ref> and the ceiling operation ⌈·⌉ in τ_new in Theorem <ref>, we have to consider the following subcases. ∙ When M_T/N=1/L, we have M_T/N=1/L<1, and then 1/(M_UL/N+1)-1/L = (1/L)(1/(M_U/N+M_T/N)-1)< 1/(M_U/N+M_T/N)-1≤1/(M_U/N+M_T/NL/K⌈K/L⌉) -1. Hence, we have 1+ 1/(M_UL/N+1)-1/L < 1/(M_U/N+M_T/NL/K⌈K/L⌉), and τ_new =(1- M_U/ N) /(M_U/N+M_T/NL/K⌈K/L⌉) > (1-1/L+1/(M_UL/N + 1) )(1-M_U/N) = τ_XTZ. ∙ When M_T/N∈{2/L,…,L-1/L, 1} and K∤ L we have τ_new =(1- M_U/ N) /(M_U/N+M_T/NL/K⌈K/L⌉) <(1- M_U/ N) /(M_U/N+M_T/N) = τ_XTZ. ∙ When M_T/N∈{2/L,…,L-1/L, 1} and K| L we have τ_new =(1- M_U/ N) /(M_U/N+M_T/NL/K⌈K/L⌉) =(1- M_U/ N) /(M_U/N+M_T/N) = τ_XTZ.
http://arxiv.org/abs/2310.17931v1
{ "authors": [ "Minquan Cheng", "Yun Xie", "Zhenhao Huang", "Mingming Zhang", "Youlong Wu" ], "categories": [ "cs.IT", "math.IT" ], "primary_category": "cs.IT", "published": "20231027070758", "title": "Coded Caching Scheme for Partially Connected Linear Networks Via Multi-antenna Placement Delivery Array" }
Department of Physics, University of Colorado, Boulder, Colorado 80309, USA Center for Theory of Quantum Matter, University of Colorado, Boulder, Colorado 80309, USA In 1973, Coleman and Gross proved that in four dimensions, only non-abelian gauge theories can have asymptotic freedom. More recently, Aizenman and Duminil-Copin proved that four dimensional scalar field theories are quantum trivial in the continuum. Both of these proofs have a loophole, and it is the same loophole in both proofs: The proofs assume that the scalar self-coupling in the UV is positive definite. While this is a perfectly reasonable and classically very intuitive assumption, it is an assumption nevertheless. In this work, I show that the assumption of coupling positivity is violated in a concrete quantum field theory, the O(N) model, in the large N limit. Surprisingly, despite the classically nonsensical unbounded potential, the negative coupling has no pathological consequence for propagators, the free energy or cross sections. This suggests that interacting scalar field theories with asymptotic freedom in four dimensions are possible, despite long-held opinions to the contrary. A loophole in the proofs of asymptotic freedom and quantum triviality Paul Romatschke January 14, 2024 ===================================================================== Classical physics is intuitive. In classical physics, a marble placed inside a bowl will always stay within the bowl, and never escape off to infinity. Similarly, a marble placed on the upper surface of an upside-down bowl will tend to escape far away from the bowl. In quantum physics, things are not so intuitive. Phenomena such as quantum tunneling or the two-slit experiment have no simple classical analogue, and it is only by learning to trust the mathematics that physicists were able to gain understanding of quantum phenomena. The proofs of two important concepts in theoretical quantum field theory, namely the no-go proofs for asymptotic freedom and non-trivial interaction in four dimensional scalars in the continuum <cit.>, are based on the assumption of a classically stable potential. The authors of Ref. <cit.> are very specific that their proof only applies to stable potentials, yet in the high-energy physics community, their proof often gets summarized as "interacting scalar field theory in the continuum does not exist". However, already in 1973, Symanzik <cit.> suggested that classically unstable potentials may lead to well-defined quantum field theories (cf. <cit.> for some early follow-up work on the subject). Historically, concrete constructive evidence that classically unstable potentials can lead to positive definite Hamiltonian eigenspectra and perfectly well-defined unitary quantum mechanical time evolution was given by Bender and Böttcher <cit.>. They studied non-Hermitian Hamiltonians of the form H=p^2-(i x)^α with arbitrary α>2 and showed how to use analytic continuation to calculate real and positive eigenenergies for the quantum Hamiltonian H despite the fact that the classical problem does not admit real and positive energies.
It was later found by Jones and Mateo that the particular case α=4 can be re-cast into an equivalent Hermitian eigenvalue problem <cit.>, with eigenvalue spectrum matching that from Bender and Böttcher <cit.>. In the present work, I am going to formulate this finding as follows: In quantum mechanics, classically unstable potentials may be understood as analytic continuation of classically stable potentials into the unstable region, unless singular structures in the complex plane prohibit this analytic continuation. Based on this finding, I will push well-known results for field theories outside their classically trivial boundaries. As a criterion to decide if a quantum field theory is well-defined, I propose the following definition: A given quantum field theory is well-behaved in the continuum if all physical observables are well behaved. By contrast, classical intuition based on quantities that are not observables, such as, in particular, the value of the running coupling λ_R(μ̅), which is not renormalization-group invariant, should not be used. In a nutshell, if the observables in a candidate quantum field theory come out well-behaved, this quantum field theory should be taken seriously even if it does not make sense classically. This alternative way of defining quantum field theory has surprising consequences. § EXPLICIT CALCULATIONS §.§ A toy model in 0d As a particular example, let me consider the simplest possible case of a 0 dimensional field theory, with partition function possessing the integral representation Z(λ)=∫_-∞^∞ dx e^-λ x^4 , Re(λ)>0 . Classically, the potential V(x)=λ x^4 is bounded only for Re(λ)>0, which is what sets the limit on the above integral representation. We can evaluate this integral for Re(λ)>0, and find Z(λ)=2 Γ(5/4)×λ^-1/4 . However, since the above result is valid in an open region for λ, we can analytically continue the result (<ref>) to values of λ outside the domain of validity of the original integral representation (<ref>). In our case, this is easy, because we can use the known analytic continuation of the root function. In particular, one finds for negative real λ Z(λ=-g)=(-1)^-1/4 2 Γ(5/4)× g^-1/4 , g∈ℝ^+ . The analytically continued result for Z(λ) to negative (real) λ is not unique because of the four-sheeted nature of the quarter root. To obtain a unique result, additional information, such as a symmetry, is needed. For instance, if the additional information is that the partition function should be real and positive, the only possible result is Z(λ=-g)=2 Γ(5/4)g^-1/4(e^i π/4+e^-i π/4)/2=√(2)Γ(5/4)g^-1/4 , g∈ℝ^+ . Far from being nonsensical, (<ref>) is a perfectly well-behaved partition function for λ<0, despite the classically unbounded potential. The situation is completely analogous to well-studied functions in pure mathematics, such as the Riemann ζ function or the Γ function, with integral representations defined by ζ(s)=1/Γ(s)∫_0^∞ dx x^s-1/(e^x-1) , Γ(s)=∫_0^∞ dx x^s-1 e^-x , for Re(s)>1 and Re(s)>0, respectively. The analytic continuation of these functions to negative real-valued arguments has been known for more than a century, in particular leading to ζ(-1)=-1/12 and Γ(-1/2)=-2 √(π). There is no controversy about evaluating ζ,Γ at negative real argument, so by analogy, neither should there be for analytic continuations such as (<ref>). §.§ The O(N) model in 4d The above toy model demonstrates the possibility of well-behaved results for the case of classically unbounded potentials. However, it is a toy model and not a bona-fide quantum field theory in four dimensions.
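As a numerical cross-check of the toy model (the function names below are mine), the closed form 2Γ(5/4)λ^(-1/4) can be compared against direct integration for λ>0, and the analytically continued expression √2 Γ(5/4) g^(-1/4) can be evaluated for λ=-g<0:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

def Z_direct(lam):
    """Numerical evaluation of the integral representation (valid for lam > 0)."""
    return quad(lambda x: np.exp(-lam * x**4), -np.inf, np.inf)[0]

def Z_continued(lam):
    """Closed form 2*Gamma(5/4)*lam^(-1/4); for lam = -g < 0, the real and positive
    combination of the two complex-conjugate branches, sqrt(2)*Gamma(5/4)*g^(-1/4)."""
    if lam > 0:
        return 2 * gamma(1.25) * lam ** (-0.25)
    return np.sqrt(2) * gamma(1.25) * (-lam) ** (-0.25)

print(Z_direct(1.0), Z_continued(1.0))   # both ~ 1.8128
print(Z_continued(-1.0))                 # ~ 1.2818, finite and real despite lambda < 0
```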
Interacting four-dimensional field theories are in general extremely hard to solve, except if they possess a small parameter that allows non-perturbative expansions, such as many field components <cit.>. To be specific, let us consider the O(N) model defined by the Euclidean partition function Z=∫ Dϕ⃗e^-∫ d^4x[1/2ϕ⃗(-∂_μ∂_μ)ϕ⃗+λ/N(ϕ⃗^2)^2] , where ϕ⃗=(ϕ_1,ϕ_2,…,ϕ_N) is an N-component scalar field. Introducing an exact Hubbard-Stratonovich transformation with an auxiliary field ζ, the large N limit of the O(N) model in any number of dimensions is given by (see Refs. <cit.> for the detailed steps in between for various dimensions) Z=∫_-∞^∞ dζ_0 e^-N× vol× V_ eff(√(2 i ζ_0)) , with the effective potential in the large N limit given by V_ eff(m)=1/2∫d^4k/(2π)^4ln(k^2+m^2)-m^4/16 λ . In this form, the effective potential still suffers from UV divergences. The standard procedure to regulate divergences in high energy theory is dimensional regularization <cit.>, though some researchers still prefer cut-off regularization despite it breaking Lorentz invariance of the theory. In either regularization scheme, the above integral is completely standard, and one finds in dimensional regularization (see Ref. <cit.> for cut-off regularization) V_ eff(m)=-m^4/64π^2(1/ε+4π^2/λ+lnμ̅^2 e^3/2/m^2) , where μ̅ is the MS renormalization scale. The effective potential still needs to be renormalized, which in the present case is achieved by the non-perturbative renormalization condition 1/ε+4π^2/λ=4π^2/λ_R(μ̅) , with the exact large N running coupling λ_R(μ̅) having the β function β≡∂λ_R(μ̅)/∂lnμ̅^2=λ_R^2(μ̅)/4π^2 . The large N exact β-function is uniformly positive for all real λ_R. Integrating the β function, one obtains the explicit large N exact running coupling λ_R(μ̅)=4π^2/lnΛ_ MS^2/μ̅^2 , where Λ_ MS is the Λ parameter of the O(N) model, in complete analogy to what is done in QCD <cit.>. For small values of μ̅≪Λ_ MS, the running coupling is positive, allowing a simple and intuitive classical interpretation of the theory. This is the regime in which scalar field theory is usually employed, as a cut-off (effective) theory for scales μ̅≪Λ_ MS. Increasing μ̅, one finds that the running coupling increases and finally diverges at μ̅=Λ_ MS, which is often referred to as the Landau pole of the theory. Again, classical intuition fails near the Landau pole, even though several examples exist in the literature where observables remain well-defined and finite despite the divergent coupling, e.g. <cit.>. Common lore also has it that near the Landau pole, all higher dimension operators turn on, rendering the theory incalculable. This is a myth, as shown in Ref. <cit.>. Beyond the Landau pole, λ_R(μ̅) remains well-defined, increasing, but negative for μ̅>Λ_ MS, straining classical interpretation. For asymptotically high energies μ̅→∞, λ_R(μ̅) approaches zero (albeit from below), which demonstrates that the O(N) model is an example of an asymptotically free theory. However, from the point of view of analytic continuation, nothing particularly remarkable is happening. It is key, however, to note that the non-positivity of λ_R(μ̅) in the UV exploits precisely the loophole in the proofs of asymptotic freedom and quantum triviality <cit.>. This is not an engineered setup; it follows naturally from solving the O(N) model non-perturbatively in the large N limit, and has been known for decades <cit.>.
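A short numerical sketch (my own, with Λ_MS set to one) makes the behavior of the running coupling explicit: positive and growing below the Landau pole, negative above it, and approaching zero from below in the UV.

```python
import numpy as np

def lambda_R(mubar, Lambda=1.0):
    """Large-N exact running coupling 4*pi^2 / ln(Lambda^2/mubar^2), in units of Lambda_MS."""
    return 4 * np.pi**2 / np.log(Lambda**2 / mubar**2)

for mubar in (0.1, 0.5, 0.9, 1.1, 2.0, 10.0, 1000.0):
    print(f"mubar/Lambda_MS = {mubar:8.1f}   lambda_R = {lambda_R(mubar):+9.3f}")
# Positive and growing for mubar < Lambda_MS, divergent at the Landau pole mubar = Lambda_MS,
# negative beyond it, and approaching zero from below as mubar -> infinity (asymptotic freedom).
```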
According to the criterion outlined in the introduction, it is necessary to calculate observables in order to decide if the theory is well-behaved. Fortunately, one can easily calculate observables in the large N limit of the O(N) model. The observable that is most easily accessible is the value of the partition function itself, which in the large N limit is given exactly by the saddle point of the integral (<ref>). One finds for the free energy density <cit.> F=-ln Z/ vol=N V_ eff(m) , with m given by dV_ eff(m)/dm=0 . Inserting the explicit form of the large N exact running coupling (<ref>) into the renormalized expression for V_ eff, one finds V_ eff(m)=-m^4/64π^2lnΛ_ MS^2 e^3/2/m^2 . One finds two saddles for the partition function: m=0, and m=√(e)Λ_ MS. The free energy density for these saddles is F_m=0=0 , F_m=√(e)Λ_ MS=-N e^2 Λ_ MS^4/128π^2 . Since the free energy is an observable, it cannot depend on the fictitious renormalization scale μ̅, and it is gratifying to see that this is indeed the case for (<ref>). Both results for the free energy are well-behaved, showing no sign of any pathologies that a simplistic classical interpretation of the potential would have perhaps suggested. This is no accident: the value of the running coupling, with its fictitious renormalization scale dependence, cannot appear on its own in any observable, and indeed it does not for the free energy. Put differently: it is irrelevant that the running coupling diverges at the Landau pole or that it becomes negative in the UV, because the free energy is not directly sensitive to this fictitious renormalization-scale dependent quantity. The value of the free energy is important, however. Basic thermodynamics tells us that in the presence of two phases, the phase with the lower free energy is thermodynamically preferred. This means that the saddle point solution m=0 is thermodynamically unstable with respect to decay to the thermodynamically preferred saddle m=√(e)Λ_ MS, something which confused early researchers <cit.> but was clarified soon afterwards <cit.>. Besides the free energy, another observable is the pole mass of the vector ϕ⃗, which is given by the value of the saddle (see e.g. Ref. <cit.> for details on the calculation). For the thermodynamically preferred phase, the large N exact result for the pole mass m=√(e)Λ_ MS is again renormalization-scale independent, well-behaved and free from any pathologies. One might worry that pathologies only show up when considering scattering, which requires consistently including 1/N corrections in the calculation. Fortunately, this is not hard to do, and one finds for the cross section, for example in the s-channel, σ(E)=(4 π)^3/(N^2 E^2 |1-2 √(1-4 m^2/E^2+i 0^+) atanh(1/√(1-4m^2/E^2+i 0^+))|^2) to NLO in large N. Again, the cross section is renormalization-scale independent, well-behaved and free from any pathologies. The only curious feature of the cross section is the presence of a stable scalar bound state with a mass of m_2≃ 1.84 m , which was already found a long time ago <cit.>. Consistent incorporation of NNLO corrections in the large N expansion is expected to imbue this scalar bound state with a finite width, in complete analogy to how muonium obtains a finite width in perturbative QED calculations <cit.>. The result for the scalar mass is renormalization-scale independent, well-behaved and free of pathologies.
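The saddle point and the bound-state pole can both be checked numerically. In the sketch below (my own helper code, Λ_MS=1), the first part locates the nontrivial minimum of V_eff and its depth; the second part uses the fact that below the two-particle threshold the bracket in the cross-section denominator reduces to 1−2x·arctan(1/x) with x=√(4m²/E²−1) (my rewriting of the expression above), and its root reproduces m₂ ≈ 1.84 m.

```python
import numpy as np
from scipy.optimize import brentq, minimize_scalar

# Nontrivial saddle of the renormalized effective potential (Lambda_MS = 1):
V = lambda m: -(m**4 / (64 * np.pi**2)) * np.log(np.exp(1.5) / m**2)
sol = minimize_scalar(V, bracket=(1.0, 1.6, 2.5))
print(sol.x, np.sqrt(np.e))                    # both ~ 1.6487: m = sqrt(e)*Lambda_MS
print(sol.fun, -np.e**2 / (128 * np.pi**2))    # both ~ -0.00585: F/N at the preferred saddle

# Bound-state pole of the NLO cross section: below threshold (E < 2m) the bracket in the
# denominator reduces to 1 - 2*x*arctan(1/x) with x = sqrt(4*m^2/E^2 - 1).
f = lambda x: 1.0 - 2.0 * x * np.arctan(1.0 / x)
x_star = brentq(f, 1e-6, 10.0)
print(2.0 / np.sqrt(1.0 + x_star**2))          # ~ 1.84: the bound-state mass m_2 in units of m
```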
It should be stressed that calculating the cross-section in perturbation theory one encounters divergencies from the Landau pole at every single order in perturbation theory, cf. the discussion in lecture 3 in <cit.>. However, expanding out (<ref>) in perturbation theory one finds that it corresponds to the sum over an infinite number of perturbative “bubble diagrams”, rendering the end-result finite and insensitive to the Landau pole. Non-perturbative evaluation for the O(N) model at finite N can be done by discretizing negative coupling field theory on a space-time lattice. The corresponding lattice action appears to be non-polynomial <cit.>, yet amenable to numerical integration e.g. for N=1<cit.>. Further numerical work for negative coupling scalar field theory on the lattice is required to study if the large N results carry over to N=1,2.§ SUMMARY AND CONCLUSIONSIn this work, I propose to use observables instead of classical intuition in order to decide whether or not a quantum field theory is sensible in the continuum. I show that — based on this criterion — the O(N) model in four dimensions in the large N limit is a sensible interacting quantum field theory exhibiting asymptotic freedom, despite (or perhaps because) it possesses a Landau pole and non-positive running coupling in the UV. This property (non-positive coupling) is precisely the loophole in no-go proofs for four-dimensional scalarsin Refs. <cit.>, rendering both proofs ineffective for the four-dimensional O(N) model in the large N limit. Further work needs to be done in order to decide if theories with a finite (small) number of scalars, such as the N=4 Higgs field in the Standard Model, are also non-trivial and asymptotically free quantum field theories. § ACKNOWLEDGEMENTSI thank the particle physics groups at the University of Ljubljana, the Technical University of Vienna and the University of Washington for their hospitality and helpful discussions on this topic. Also, I thank Michael Aizenman for encouraging remarks concerning the existence of negative coupling field theory, and Seth Koren for helpful discussions on fleshing out the phenomenological applications to the Standard Model Higgs sector. This work was supported by the Department of Energy, DOE award No DE-SC0017905.
http://arxiv.org/abs/2310.18414v1
{ "authors": [ "Paul Romatschke" ], "categories": [ "hep-th", "hep-ph", "nucl-th" ], "primary_category": "hep-th", "published": "20231027181156", "title": "A loophole in the proofs of asymptotic freedom and quantum triviality" }
Towards a fuller understanding of neurons with Clustered Compositional Explanations
========================================================================
Compositional Explanations <cit.> is a method for identifying logical formulas of concepts that approximate the neurons' behavior. However, these explanations are linked to the small spectrum of neuron activations (i.e., the highest ones) used to check the alignment, thus lacking completeness. In this paper, we propose a generalization, called Clustered Compositional Explanations, that combines Compositional Explanations with clustering and a novel search heuristic to approximate a broader spectrum of neuron behavior. We define and address the problems connected to the application of these methods to multiple ranges of activations, analyze the insights retrievable by using our algorithm, and propose desiderata qualities that can be used to study the explanations returned by different algorithms.
§ INTRODUCTION
EXplainable AI (XAI) promises to foster trust <cit.> and understanding in AI systems. This is particularly important for deep learning (DL) models, whose inner workings are not directly understandable. By using XAI methods, we may be able to interpret and better understand the underlying DL processes <cit.>. Indeed, it is still unclear what kind of knowledge these models learn and how they are able to achieve high performance across many different tasks. This paper focuses on methods that explain what neurons learn during the training process. Most of these approaches adopt the idea of investigating what kind of information overstimulates a neuron. For example, generative approaches iteratively change the input features to maximize the neuron's activation <cit.>, while dataset-based approaches select samples where the neuron activates the most <cit.>. Among them, we are interested in methods that use concepts to explain neurons' behavior. The seminal work of the area is Network Dissection (NetDissect) <cit.>. NetDissect proposes to use a dataset annotated with concepts to probe the neuron and select, as an explanation, the concept whose annotations are aligned (i.e., overlap) the most with the highest activations. While initially proposed for quantifying the latent space's interpretability, NetDissect has been extensively used for describing and understanding the concepts recognized by neurons <cit.>. Compositional Explanations <cit.> (CoEx) generalize NetDissect by extracting logical formulas in place of single concepts, using a beam search over the space of formulas. While, in principle, these algorithms could be applied to any activation, the current literature <cit.> focuses only on the exceptionally high activations (i.e., activations above the top 0.005 quantile). We argue that using an arbitrarily high value for the threshold gives us only a partial view of the concepts recognized by a neuron. For example, when these algorithms are used to compare the interpretability of latent spaces, they can flag as uninterpretable a latent space that is interpretable at lower activations, or vice versa. When used for downstream tasks <cit.>, the results of these techniques can easily mislead users, undermining their trust in explainability <cit.>.
Moreover, in preliminary experiments, we observe that the network uses multiple ranges of activations in the decision process (<Ref>) and that threshold variations lead to different explanations and that the lowest and highest activations react differently to the variation, being associated with different categories of explanations (<Ref> and <Ref>), thus suggesting that current approaches provide a partial view.Contributions This paper contributes a generalization of CoEx at different activation ranges. Our insights provide a more detailed description of the neurons behavior. In order to illuminate this problem, we mathematically describe and mitigate the issues connected to the computation of explanations for a broader range of activations. We propose an algorithm (Clustered Compositional Explanations), heuristic (MMESH), and qualities ready to be used to study and analyze the returned explanations. In detail, we: (i) propose a faster algorithm to compute compositional explanations based on a heuristic; (ii) provide an analysis of a wider spectrum of activations by clustering them and applying our algorithm to the clusters; (iii) extract and discuss the novel insights on image data retrieved from our analysis, namely the presence of unspecialized activations and the phenomenon of progressive specialization; (iv) collect and identify desirable qualities for this kind of explanation; we also propose three novel ones: sample coverage, activation coverage, and label masking. The code is available at <https://github.com/KRLGroup/Clustered-Compositional-Explanations>.The manuscript is organized as follows: <Ref> describes related work; <Ref> describes our proposed generalization and desiderata properties; <Ref>analyzes and discusses the proposed generalization and insights retrievable by it. The code will be released upon acceptance.§ RELATED WORK According to the recent surveys on the topic of explaining individual neurons <cit.>, we can distinguish between feature synthesis and dataset-based approaches. The former aims at generating inputs that maximally (or minimally) activate a neuron <cit.> either using DNNs <cit.> or iterative algorithms <cit.>.Despite their popularity,these methods face several challenges: the abstraction of the output; they can encode few patterns <cit.>; the process is stochastic and thus one can get two different visualizations for the same neuron <cit.>. These problems are addressed by dataset-based approaches, which take a different route and are more effective in helping users <cit.>.Dataset-based approaches select samples from a dataset where the activation of a neuron is above a given threshold <cit.>. The advantage is that, by simply looking at the selected samples, one can infer the multiple features detected by the neuron, avoiding the problem of diversity and abstraction. However, the samples-selection can lead to different problems. For example, it becomes difficult to distinguish between causal and merely correlated factors (e.g., is the neuron recognizing only planes or planes into the sky?), and the returned explanations depend on the used dataset <cit.>.The first problem can be mitigated by looking at concepts instead of entire samples. A concept is a set of semantically related features annotated in a sample. In this case, one can probe for a specific concept to find the neurons responsible for detecting it <cit.> or fix the neuron and select the concepts where the neuron returns the highest activations <cit.>. 
Our work can be placed in dataset-based approaches of the second category. Among them, the work of <cit.> proposes to select the concept whose annotations are the most aligned to the neuron's activations. To capture more facets of the same activations, <cit.> propose to select logical formulas of concepts instead of a single one using a beam search that extracts the best-aligned formula. The process to extract the logical formulas can be modified to include more operators <cit.>, ontologies <cit.>, or to impose constraints <cit.>.Our work generalizes both of these works by associating logical formulas with multiple activation ranges per neuron. With respect to these works, our proposed method uses a heuristic search in place of an exhaustive search, uses clustering to compute thresholds instead of using a fixed ad-hoc threshold, and considers multiple intervals of activations and thus the full spectrum of the neuron's activations.Finally, the problem of dataset dependency has been the focus of recent work, which propose to get natural language explanations by using captioning methods <cit.> or multimodal models <cit.>. With respect to this set of approaches, our paper can be considered orthogonal, since the clustering mechanism and the analysis of a wider range of activation can be used as a starting point for these works. We leave the investigation of this direction for future research.Most of the works described in this section use the intersection over union score <cit.> and its variants <cit.> to quantitatively test the quality of the returned explanations. The idea is that if a method discovers a different set of explanations with a higher score, then it discovers more effective directions on the interpretability of the unit. Other metrics used in literature are usually tailored to properties of the proposed methods ( <cit.>), the goal of the extension (<cit.>) or they are used to verify properties of the models<cit.>). To contribute to the analysis of these explanations, we collect two quantitative metrics from the literature, and we propose three additional ones that are general enough to be applied to any dataset-based approach similar to NetDissect and CoEx. While each of them gives us some information, we argue that only by looking at all of them simultaneously one can have a clear view of the difference between explanations returned by different approaches.§ ALGORITHM This section describes our proposed Clustered Compositional Explanations and the heuristic used for making the algorithm practically feasible. Additionally, it describes the desiderata properties of the output of these algorithms. 
Below, we introduce the terminology used in the rest of the paper: * 𝔇: a dataset annotated with concepts C; * 𝔏^l: the set of logical connections L of arity l that can be built between concepts of 𝔇;* L_←∈𝔏^l-1 and L_→∈𝔏^1 denotes the left side and the right side of a label of arity i obtained by adding an atomic term to the label at each step, respectively; * S(x, L): a function that returns the binary mask of the input x for the label L;* A^k(x): the activation map of the k neuron when the input is the sample x; [We assume A^k(x) is already scaled to the same dimensions of S(x, L) (e.g., by padding or interpolation).]* M_[τ_1,τ_2](x): the function that returns a binary mask of the activation A^k(x), setting to 0 all the values smaller thanτ_1 or greater than τ_2;* n_s: the maximum size of the segmentation; * θ(x,L): a function that masks the input x by keeping only the features connected to the label L visible;* IMS_[τ_1,τ_2](x, L): the intersection size between the label mask S(x, L) and the neuron's activation mask M_[τ_1,τ_2](x) computed over the activation range (τ_1,τ_2).Moreover, we represent a binary mask as the set of the indices of its elements equal to 1. Thus 1M_[τ_1,τ_2](x, L) can be represented by the cardinality |M_[τ_1,τ_2](x, L)|.§.§ Clustered Compositional Explanations Clustered Compositional Explanations generalize CoEx <cit.> and NetDissect <cit.> by computing explanations on multiple intervals of activation thresholds.We begin the description by recalling the objective of NetDissect <cit.> and CoEx <cit.>. NetDissect <cit.>extracts the single concept whose masks overlap the most with the binary activation mask. Network Dissection uses a single threshold (τ^top) to compute the activation mask. Conventionally, τ^top is set to the top 0.005 quantiles for each unit k, i.e., τ^top is determined such that P(a_i ≥τ^top) = 0.005,  ∀ a_i ∈ A^k(𝔇). Therefore, the objective can be written as:C^best = *arg max_C ∈𝔏^1 IoU(𝔏^1, τ^top,∞, 𝔇 )NetDissect proposes an exhaustive search over the full search space of all the concepts in order to select the best explanation. CoEx Algorithm <cit.>generalizes the NetDissect algorithm by considering logical formulas of arity n in place of single concepts. Therefore, the objective is changed to:L^best = *arg max_L ∈𝔏^n IoU(𝔏^n, τ^top,∞, 𝔇 )When the number of concepts or the dataset size is large enough, the computation of an exhaustive search becomes prohibitive. To solve the issue, Mu et al. <cit.> propose to use a beam search of size b. At each step i, only the b best labels of arity i are used as bases for computing labels of arity i+1. The first beam is selected among the labels associated with the best scores computed by NetDissect. Ignoring for simplicity the time needed to compute the masks for the labels, it is clear that the CoEx algorithm needs at least (n-1) × b times the time needed for running NetDissect.Clustered Compositional Explanations generalizes CoEx by returning a set of logical formulas by identifying n_cls clusters into the activations of the k neuron and computing the logical formula that better describes each cluster (i.e., the one that maximizes the IoU score) (<Ref>). 
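Before stating the formal objective, the notation just introduced maps naturally onto boolean arrays. The following sketch (toy shapes, random data, and concept names are purely illustrative, and it implements the single-concept NetDissect-style selection rather than the full compositional search) shows the activation mask M_[τ1,τ2](x), the label mask S(x,L), and the IoU-based selection used in the objectives above.

```python
import numpy as np

def activation_mask(act_map, tau_low, tau_high=np.inf):
    """M_[tau1,tau2](x): binary mask of the activations falling inside the range."""
    return (act_map >= tau_low) & (act_map < tau_high)

def iou(act_masks, concept_masks):
    """IoU between a unit's activation masks and a concept's segmentation masks,
    both boolean arrays of shape (num_samples, H, W), summed over the dataset."""
    inter = np.logical_and(act_masks, concept_masks).sum()
    union = np.logical_or(act_masks, concept_masks).sum()
    return inter / union if union > 0 else 0.0

def netdissect_label(act_maps, concept_masks_by_label, quantile=0.995):
    """NetDissect-style selection: the single concept maximising the IoU of the
    activations above the top-0.005 quantile threshold tau_top."""
    tau_top = np.quantile(act_maps, quantile)
    act_masks = activation_mask(act_maps, tau_top)
    scores = {lab: iou(act_masks, m) for lab, m in concept_masks_by_label.items()}
    best = max(scores, key=scores.get)
    return best, scores[best]

# Toy example with random data (purely illustrative).
rng = np.random.default_rng(0)
act_maps = rng.random((20, 7, 7))                      # activations of one unit on 20 samples
concepts = {"sky": rng.random((20, 7, 7)) > 0.7,       # hypothetical concept masks
            "car": rng.random((20, 7, 7)) > 0.9}
print(netdissect_label(act_maps, concepts))
```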
Specifically, the algorithm finds the solution to the objectiveL^best = {*arg max_L ∈𝔏^n IoU(L, τ_i,τ_j, 𝔇 ),∀ [τ_i,τ_j] ∈T} where IoU(L, τ_1,τ_2, 𝔇 ) = ∑_x ∈𝔇|M_[τ_1,τ_2](x) ∩ S(x,L)|/∑_x ∈𝔇|M_[τ_1,τ_2](x) ∪ S(x,L)| and T = { [min(Cls),max(Cls)],∀ Cls ∈Clustering(A^k(𝔇))} Clustering(A^k(𝔇)) returns n_cls disjoint clusters of the non-zero activations of the k neuron over the whole dataset, and T is the set of thresholds computed by selecting the minimum and maximum activation inside each cluster. When setting T = [τ^top,∞], the objective is equivalent to the CoEx algorithm (<ref>), while by setting T = [τ^top,∞] and L ∈𝔏^n, one can obtain the NetDissect objective (<ref>). The CoEx algorithm extracts compositional explanations by iteratively applying NetDissect to a beam search tree of width b and deepness (n-1). Thus, since the algorithm applies CoEx on n_cls clusters, the vanilla implementation would require n_cls× (n-1) × b times the computation time of NetDissect. Even employing a parallelization of the computation over units, the base required time and some practical problems[A greater number of zeros in the matrix allows a faster computation since zeroed rows can be skipped.] arising from the wider considered ranges make the application of the CoEx algorithm practically unfeasible when dealing with multiple clusters.Min-Max Extension per Sample HeuristicTo solve this problem, we propose a beam search guided by a novel, admissible heuristic. Specifically, we propose the Min-Max Extension per Sample Heuristic (MMESH).Given a sample x, a neuron k, and a label L ∈𝔏^i, the heuristic estimates the IoU score by combining the following information: the size of the label mask on the sample S(x,L); the coordinates to identify the smallest possible extension of the concept on the sample (i.e., the longest contiguous segment including only concept elements); the coordinates to identify the largest possible extension of the concept on the sample maxExt(x, L); the size of the intersection between the label's terms mask and the neuron's activation on the sample IMS(x, t) ∀ t ∈ L. The first three pieces of information are computed and collected directly from the concept dataset, while the intersection size is collected during the execution of the first step of CoEx for all L ∈𝔏^1 and before expanding the best labels in the current beam for all their terms t ∈𝔏^i-1. Note that the shape of the coordinates depends on the data type of the concept dataset. In our case (image data), minExt(x,L) and maxExt(x, L) correspond to the top left and bottom right corners of the largest inscribed rectangle inside the polygon and the largest rectangle patch applied to cover the polygon drawn from the concept's segmentation (i.e., bounding box), respectively (<Ref>). MMESH provides an estimation for logical formulas connected by OR, AND, and AND NOT operators and their specialization <cit.>, which are the most used operators <cit.>.Specifically, the heuristic uses the best-case scenarios for the intersection and the worst-case scenario enhanced by the coordinates for the label's mask to estimate the IoU score. 
In formulas:IoU(L, τ_1,τ_2, 𝔇 )= I/U = ∑_x ∈𝔇I_x/∑_x ∈𝔇U_x == I_x/∑_x ∈𝔇 |M_[τ_1,τ_2](x)| + ∑_x ∈𝔇|S(x,L)| - I_xwhereI_x= min(|IMS_[τ_1,τ_2](x, L_←)|+ |IMS_[τ_1,τ_2](x, L_→)|, |M(x)| ) op=ORmin(|IMS_[τ_1,τ_2](x, L_←)|, |IMS_[τ_1,τ_2](x, L_→)|)op=AND min(|IMS_[τ_1,τ_2](x, L_←)|, |M_[τ_1,τ_2](x)| - |IMS_[τ_1,τ_2](x, L_→)|)op=AND NOTand S(x,L)= max(|S(x,L_←)|, |S(x,L_→)|, S(x,L_← ∪L_→)) , op=ORmax(MinOver(L), I_x) op=ANDmax(|S(x, L_←)| - MaxOver(L), I_x) op=AND NOTIn the previous formulas, op is the logical connector of the formula, L_←∈𝔏^i-1 denotes one of the best labels retrieved in the current beam, L_→∈𝔏^1 denotes the candidate term to be connected through op to the label as the right side, MaxOver(L)is a function that returns the maximum possible overlap between the largestsegments marked by the coordinatesmaxExt(x,L_←) and maxExt(x,L_→), MinOver(L) is a function that returns theminimum possible overlap between the smallest segments marked by the coordinates minExt(x,L_←) and minExt(x,L_→), andS(x, L_←∪ L_→) = |S(x, L_→)|+ |S(x, L_→)| - MaxOver(L)).Since I_x must be an overestimation of I, eq. (<ref>) corresponds to the best-case scenario for OR labels (i.e., disjoint masks) and eq. (<ref>) and eq. (<ref>) correspond to the best-case scenario for fully overlapping masks in the case of AND and AND NOT labels. Conversely, since S(x,L) must underestimate the real label's mask and thus cover the worst-case scenario, it assumes fully overlapping maps for OR labels and disjoint maps for AND and AND NOT operators. Note that in the case of AND and AND NOT, the coordinates for the label's mask (i.e., minimum possible overlapping between polygons generated by the coordinates) help us to avoid setting it S(x,L) to 0. We prove that this heuristic is admissible (<Ref>); thus, the heuristic search is guaranteed to find the optimal formula inside the beam.§.§ Desiderata Qualities This section describes a set of statisticsand metrics that can be used to describe the qualities of the returned explanations. As mentioned in the previous sections, compositional explanations are commonly analyzed by looking at their IoU score. However, IoU can be artificially increased by increasing the formula's length <cit.>. Conversely, we promote the simultaneous usage of a set of metrics to have the full view of the efficacy of a method, since each of them has its weak spot when taken in isolation. Additional and alternative statistics are listed in <Ref>.Intersection Over Union The metric optimized by the approaches considered in this paper. It measures the alignment between labels' annotations and activation maps (<ref>). Given the activation range (τ_1,τ_2), a higher IoU means the algorithm can better capture the pre-existent alignment <cit.>.Detection Accuracy The percentage of masks of the label L overlapping with the neuron activation inside the activation range (τ_1,τ_2) <cit.>. A value closer to one means that most of the label's masks are usually recognized by the neuron in that activation range. DetAcc(L, τ_1,τ_2, 𝔇 ) = ∑_x ∈𝔇| M_[τ_1,τ_2](x) ∩ S(x,L)|/∑_x ∈𝔇|(S(x,L)|Samples Coverage The percentage of samples satisfying the label where the neuron activation is inside the activation range (τ_1,τ_2). 
A value closer to one means that the neuron usually recognizes the concept using that activation range.SampleCov(L, τ_1,τ_2, 𝔇 ) = |{x ∈𝔇: |M_[τ_1,τ_2](x) ∩ S(x,L)|>0}|/|{x ∈𝔇: |(S(x,L)|>0}|Activation Coverage The percentage of neuron activations inside the activation range (τ_1,τ_2) overlapping with the label's annotations. A value closer to one means that the label captures most of the behavior of this type of activation (i.e., there is a strong mapping). ActCov(L, τ_1,τ_2, 𝔇 ) = ∑_x ∈𝔇| M_[τ_1,τ_2](x) ∩ S(x,L)|/∑_x ∈𝔇|M_[τ_1,τ_2](x)|Explanation Coverage The percentage of dataset samples covered by the explanations, i.e., samples that satisfy the explanation's label and where the neuron fires inside the activation range (τ_1,τ_2). A value closer to one means that the neuron fires at the given activation range when the input includes the explanation's label. Thus, there is a strong correlation between the label and the activation range.ExplCov(L, τ_1,τ_2, 𝔇 ) =|{x ∈𝔇: |M_[τ_1,τ_2](x) ∩ S(x,L)|>0}|/|{x ∈𝔇: |M_[τ_1,τ_2](x)|>0}|Label Masking The cosine similarity computed by comparing the neuron's activations when the model is fed by using the full input x and the masked input θ(x,L). A high score indicates a strong connection between the label and the activation range. Note that we only keep track of the changes in the regions identified by the activation range τ_1,τ_2.LabMask(L, τ_1,τ_2, 𝔇 ) = ∑_x ∈𝔇 CosineSim(M^k_[τ_1,τ_2](x)A^k(θ(x,L)), M^k_[τ_1,τ_2](x)A^k(x))/|{x ∈𝔇: |M_[τ_1,τ_2](x)|>0}| § ANALYSIS§.§ SetupFor the experiments in this section, we follow the same setup of <cit.> with the addition of the set of thresholds used to compute the activation ranges. We fix the length of the explanation to 3, as commonly done in the current literature <cit.>. For space reasons, in almost all the experiments in this section, we report the results using the last layer of ResNet18 <cit.> as a base model and Ade20k <cit.> as a concept dataset. However, the claims hold across different architectures and concept datasets, as reported in <Ref>. We use K-Means as a clustering algorithm and fix the number of clusters to five. The choice of K-Means is motivated by the need for a fast clustering algorithm that aggregates activations associated with a shared semantic and that can be applied to a large quantity of data (see <Ref> for further details about the choice of the clustering algorithm). The number has been set to five because a higher number of clusters returns almost the same labels but repeated over more clusters, and, on average, the scores are lower (<Ref>). §.§ Heuristic Effectiveness This section compares the number of labels for which the algorithm computes the true IoU (visited states) needed by the vanilla CoEx algorithm, our heuristic, and alternative heuristics. We select and test three heuristics using different amounts of the needed information to estimate the IoU score:the vanilla CoEx algorithm where no heuristics are used (CoEx);a heuristic that uses only the size of the label masks per sample (Areas); the Coordinates-Free Heuristic (CFH), an ablated version of our proposed heuristic that does not estimate the size of the label's mask; andour proposed heuristic (MMESH). Refer to <Ref> for further details about the baselines. <Ref> compares the average number of states visited during the computation of the baselines and our MMESH. The results are computed over 100 randomly selected units fixingT = [τ^top,∞] as in the CoEx algorithm. 
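The gain reported in the comparison that follows stems from admissibility: since MMESH never under-estimates the IoU of a candidate formula, the exact (expensive) score only has to be computed while an optimistic bound can still beat the best exact value found so far. The sketch below illustrates this kind of bound-based pruning on synthetic scores; it is a schematic of the general idea with hypothetical names and made-up values, not the authors' implementation.

```python
import numpy as np

def prune_with_admissible_bound(candidates, bound_iou, exact_iou):
    """Evaluate the exact IoU only for candidates whose optimistic (admissible) bound
    can still beat the best exact score found so far."""
    order = sorted(candidates, key=bound_iou, reverse=True)   # most promising first
    best_label, best_score, visited = None, -np.inf, 0
    for label in order:
        if bound_iou(label) <= best_score:
            break                     # admissibility: no remaining candidate can do better
        score = exact_iou(label)      # the expensive evaluation we want to avoid
        visited += 1
        if score > best_score:
            best_label, best_score = label, score
    return best_label, best_score, visited

# Toy illustration: the bound over-estimates the exact IoU, as an admissible heuristic must.
rng = np.random.default_rng(0)
exact = {f"label_{i}": rng.random() for i in range(1000)}
bound = {k: min(1.0, v + rng.random() * 0.1) for k, v in exact.items()}
best, score, visited = prune_with_admissible_bound(list(exact), bound.get, exact.get)
print(best, round(score, 3), "exact evaluations:", visited, "of", len(exact))
```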
We can observe that it is possible to lower the number of visited states by increasing the amount of information. The areas heuristic lowers the number of operations by a third and, since the heuristic uses only information from the dataset, it represents a valid option for making NetDissect faster. By adding the estimation of the intersection, we further reduce the number of visited states by one order of magnitude. Finally, MMESH, which adds the estimation of the label's mask, reaches the best result, reducing the number of visited states by two orders of magnitude. Practically, this result means that, for each unit, we can generate explanations for clusters in the same amount of time (or less) as the vanilla compositional algorithm as long as the number of clusters is reasonably low. Indeed, while the CoEx algorithm takes, on average, more than ~60 minutes per unit, our proposed MMESH takes less than 2 minutes.[Timing collected using a workstation powered by an NVIDIA GeForce RTX-3090 graphic card.]§.§ Explanations AnalysisIn this section, we analyze the explanations produced by NetDissect, CoEx, and our algorithm in terms of the desiderata qualities introduced in <Ref>. <Ref> shows that NetDissect and CoEx reach similar values in most considered scores, and the labels returned by CoEx are only slightly better than the NetDissect ones. The most significant margin can be observed in Sample Coverage, Detection Accuracy, and IoU. The higher IoU can be explained by the degenerate effect of increasing the formula's length <cit.>. The margin in Sample Coverage and Detection Accuracy means that the neuron fires in most of the samples annotated with labels returned by NetDissect. However, since the Detection Accuracy is lower, the overlap between activations and annotation is less consistent and more sparse. This is verified by the observation that most of the CoEx labels are connected by OR operators, thus increasing the number of candidate samples, and that most of the NetDissect labels are scene concepts (i.e., concepts that cover the full input), which are associated with a sparse overlapping due to the size of activation ranges. Looking at the average over the clusters reached by Clustered Compositional Explanations, the algorithm seems to return better labels with respect to almost all the desiderata qualities by a large margin. However, if we look at the average scores per cluster, we can note that the average is favored by Cluster 1 and Cluster 2. In <Ref>, clusters are named progressively according to the activation ranges. Thus, Cluster 1 corresponds to the lowest activations and Cluster 5 to the highest activations.First, we can note that Cluster 1 and 2 have an almost perfect Dataset and Sample Coverage. By the definition of the scores, this means that their labels cover the full dataset and that there is a strong connection between the activation range and the label. These extreme values motivated us to further investigate these clusters, which are discussed in the next section.Now, we can analyze Cluster 5, which includes the highest activations and, thus, also the range used by NetDissect and CoEx. We observe that with respect to CoEx, enlarging the range of activation has a marginal impact on Detection Accuracy and Label Masking, no effect on IoU and Sample Coverage, and a positive effect on Explanation Coverage and Activation Coverage. 
Combining the scores, we can infer that the larger activations (higher Activation Coverage) allow the algorithm to detect labels associated with slightly bigger concepts and better aligned to the neuron behavior (the same IoU and a higher Explanation Coverage). Finally, Cluster 3 and Cluster 4, which include intermediate activations, behave similarly to Cluster 5, but progressively improve the Sample Coverage, IoU, and Explanation Coverage. Therefore, the insight extracted for Cluster 5 holds also for them and, as we discuss in the next section, it is connected to the property of specialization. In summary, extracting compositional explanations at different and wider activation ranges maintains or improves the same qualities of the returned explanations. Additionally, the combination and analysis of different qualities simultaneously allow us to extract a bigger view of the compositionality of neuron activations, providing hints ready to be exploited by further analyses. §.§ Neurons Compositionality In this section, starting from the results reported in the previous section, we analyze the compositionality of neurons at a wider range of activation.Unspecialized Activations. As previously discussed, Cluster 1 and Cluster 2 are associated with a high Sample Coverage, meaning that they almost cover the full dataset. Therefore, the labels should be present in almost all the samples. By inspecting the labels, we observe that they are often a union of colors (i.e., Black OR Blue OR Grey) or a mix of colors and general concepts (i.e., Black OR Blue OR Sky), and few labels are repeated over the whole range of neurons. While the first observation can suggest that they recognize general concepts or colors, the second one goes in the opposite direction. To investigate the phenomenon, we applied our algorithm on untrained networks, finding that all the clusters are associated with these kinds of labels in this case. Thus, these labels represent the default labels to which the algorithm converges when the activations are random. We call the activations ranges associated with such labels unspecialized, meaning that neurons do not recognize specific concepts or purposely ignore the concepts covered by the activation. By analyzing the clusters from one to five, we found (<Ref>) that Cluster 1 is almost always associated with unspecialized activations and Cluster 2 half of the time. This phenomenon can also be partial, meaning that the first part of the label is assigned to a default label, but the last part converges on a different concept. In this case, we call these activation ranges weakly specialized. They are rare, especially in ReLU networks, and usually appear only in the clusters near the unspecialized ones. We hypothesize that they are a side effect of the clustering algorithm, and a future clustering algorithm tailored to extracting the best activation ranges could reduce their number.<Ref> also shows a similar behavior of ReLU and non-ReLU networks when the activations are close to 0. In ReLU layers, activations are stored in Cluster 1, and they are unspecialized 93% of the time. This percentage becomes smaller when we approach the higher clusters. We can observe a similar behavior in the case of the layer without ReLU. In this case, since the activations can assume negative values, the activations close to zero are stored in the middle clusters, and thus, Cluster 3 includes unspecialized activations 95% of the time. 
And again, when we move far away from zero, the percentage starts to decrease, as in the ReLU layers. Progressive Specialization. Progressive specialization is a phenomenon we observe in association with ReLU layers, where lower activations recognize more general concepts (e.g., building, sky, etc.), and higher ones progressively detect more complex objects. The phenomenon is similar to the one observed by <cit.>, in which the lower layers of a neural network detect more abstract concepts while the latest detect the most complex and specific ones.In the case of image data, this phenomenon seems to be also spatially aligned, meaning that lower activations capture external elements surrounding the objects detected by higher activations (<Ref>). The specialization property highlighted by  <cit.> is an example of specialization (<Ref>).Activation Polysemy Following <cit.>, we manually inspected 128 randomly extracted units to analyze the relations among concepts inside the returned labels and among activation ranges. A polysemic neuron is the one that fires for unrelated concepts <cit.>. <cit.> found that 31% of neurons are polysemic in the highest activation ranges. We explore the polysemy considering the full range of activations, meaning that a neuron is considered non-polysemic only if all the labels associated with the clusters are related. While, as expected, the number of polysemic neurons is much larger (~85%), it is interesting to note that ~15% of neurons fire for related concepts at any activation range, meaning that they are highly specialized.[Note that the evaluation is subjective. Therefore, the reported numbers must be considered as an indication.] § LIMITATIONS AND FUTURE WORKThis paper presented a first step towards a fuller understanding of neurons compositionality. We introduced Clustered Compositional Explanation, an algorithm that clusters the activations and then applies the CoEx algorithm guided by heuristics to them.Using our algorithm, we found and analyzed novel phenomena connected to the neuron's activations, such as the unspecialization of the lowest activations in ReLU networks and the progressive specialization.Despite the progress, there are some limitations of the current design. First, the labels returned by our algorithm are deeply connected to the activation ranges identified by the clustering algorithm. Therefore, future work could analyze the differences among different clustering algorithms or develop a novel one tailored to the given task. The extracted insights refer to the image data case. While the heuristic and the approach are domain agnostic, the application on a different domain could extract different kinds of insights, thus limiting or confirming the generality of the findings of <Ref>. We hypothesized that looking at the scores of each cluster can uncover a deeper understanding of the behavior of different activation ranges. However, an interesting direction could be the development of weighting mechanisms to weight the contribution of each cluster to the final averaged scores, which is desirable when the number of clusters is high and looking and comparing individual cluster can become problematic. Finally, the specific labels returned by these algorithms are linked to the concept dataset used to probe the neurons, as observed by <cit.>. While the general insights hold even when changing the dataset (<Ref>), we do not address the problem of returning the same specific labels across different datasets. 
Other than mitigating the above-mentioned limitations, future work could also explore the application of the heuristic to study the optimality of the returned explanations, or the application of clusters on recent methods for reducing the dependency on the concepts datasets <cit.>.abbrvnat§ HEURISTICS §.§ Proof of Admissibility of MMESH Since the heuristic aims at approximating the IoU score, we can start by expanding the denominator of the IoU formula:IoU(L, τ_1,τ_2, 𝔇 )= I/U = ∑_x ∈𝔇|M_[τ_1,τ_2](x) ∩ S(x,L)|/∑_x ∈𝔇|M_[τ_1,τ_2](x) ∪ S(x,L)| = I_x/∑_x ∈𝔇 |M_[τ_1,τ_2](x)| + ∑_x ∈𝔇|S(x,L)| - ISpecifically, the heuristic should avoid the direct computation of I and ∑_x ∈𝔇|S(x,L)|, providing estimations of them. To be admissible, the heuristic cannot underestimate the intersection or overestimate the union. Thus, it must satisfy the following constraints: |I_x| ≥|I_x| 0 ≤|S(x,L)| - |I_x| ≤|S(x,L)| - I∀ x ∈𝔇.Eq. (<ref>) ensures that the heuristic returns an optimistic estimation for the intersection at the numerator. Eq. (<ref>)ensures that the heuristic returns a pessimistic estimation of the denominator of <ref>. <Ref> We begin by proving the first equation. Recall that MMESH estimates |I_x| as:I_x= min(|IMS_[τ_1,τ_2](x, L_←)|+ |IMS_[τ_1,τ_2](x, L_→)|, |M(x)| ) op=ORmin(|IMS_[τ_1,τ_2](x, L_←)|, |IMS_[τ_1,τ_2](x, L_→)|)op=ANDmin(|IMS_[τ_1,τ_2](x, L_←)|, |M_[τ_1,τ_2](x)| - |IMS_[τ_1,τ_2](x, L_→)|) op=AND NOT Given the masks L_← and L_→, we can distinguish between two cases: overlapping and non-overlapping masks. If the masks do not overlap, then the real intersection is given by:I_x = |IMS_[τ_1,τ_2](x, L_←)|+|IMS_[τ_1,τ_2](x, L_→)|op = OR 0op = AND|IMS_[τ_1,τ_2](x, L_←)|op = AND NOTComparing eq.(22-24) and eq. (25-27) we can verify that eq. (<ref>) holds. Indeed, eq. (<ref>) ≤ eq. (<ref>) due to the non-negative property of the cardinality and IMS_[τ_1,τ_2](x, L_→) ⊆ M_[τ_1,τ_2](x) . eq. (<ref>) ≤ eq. (<ref>) can be proved by observing that the mask obtained by M_[τ_1,τ_2](x) - IMS_[τ_1,τ_2](x, L_→) contains IMS_[τ_1,τ_2](x, L_←) since L_← and L_→ are assumed to be non-overlapping and thus the minimum operator used in the heuristics selects |IMS_[τ_1,τ_2](x, L_←)|. Finally,IMS_[τ_1,τ_2](x, L_←) ⊆ M_[τ_1,τ_2](x) and IMS_[τ_1,τ_2](x, L_→) ⊆ M_[τ_1,τ_2](x) by definition of IMS_[τ_1,τ_2](x, L). Since the two masks are not overlapping and|M_[τ_1,τ_2](x)| ≥ IMS_[τ_1,τ_2](x, L_←) + IMS_[τ_1,τ_2](x, L_→) then<ref> ≤ <ref> and thus I(L) = I(L). Let's begin with fully overlapping masks for the case of overlapping masks. In this caseI_x = max(|IMS_[τ_1,τ_2](x, L_←)|, |IMS_[τ_1,τ_2](x, L_→)|) op = ORmin(|IMS_[τ_1,τ_2](x, L_←)|, |IMS_[τ_1,τ_2](x, L_→)|)op=AND min(|IMS_[τ_1,τ_2](x, L_←)|, |M_[τ_1,τ_2](x)| - |IMS_[τ_1,τ_2](x, L_→)|)op=AND NOTComparing eq. (22-24) to eq. (280), it is easy to see that<ref> holds for AND and AND NOT operators sinceeq. (<ref>) = eq. (<ref>) and eq. (<ref>) = eq. (<ref>).When op=OR, one can observe that <ref> ≤ <ref> since the maximum between two cardinalities is lower than their sum and |M_[τ_1,τ_2](x)| ≥ max(|IMS_[τ_1,τ_2](x, L_←)|, |IMS_[τ_1,τ_2](x, L_→)|) since the masks are fully overlapping.The cases described above (fully overlapping and non-overlapping masks) for the estimation of I_x cover the best-case scenarios for all the involved operators. Therefore, when the masks are partially overlapping, the real intersection is smaller than the case of non-overlapping for the OR operator and smaller than the case of fully overlapping for the AND and AND NOT operator. 
Thus, |I_x| ≥ |I_x| holds also for partially overlapping masks. Equation (<ref>) Recall that the heuristic computes S(x,L) as:S(x,L)= max(|S(x,L_←)|, |S(x,L_→)|, |S(x,L_← ∪L_→)|op=ORmax(MinOver(L), |I_x|) op=ANDmax(|S(x, L_←)| - MaxOver(L), |I_x|) op=AND NOTwhere S(x, L_←∪ L_→) = |S(x, L_→)|+ |S(x, L_→)| - MaxOver(L)). We can distinguish again between the case of non-overlapping and fully overlapping masks.We proceed by proving for each case that (i) S(x,L)≤ S(x,L), (ii) S(x,L) - I_x≤ S(x,L) - I_x, and (iii) S(x,L) - I_x≥ 0.In the case of non-overlapping masks, the real joint label isS(x,L)= |S(x,L_←)| + |S(x,L_→)| op=OR0 op=AND|S(x, L_←)| op=AND NOTLet us begin by comparing eq. (<ref>) and eq. (<ref>) for the OR operator.When the max operator selects |S(x,L_←)| or |S(x,L_←)| (i) is verifiedsince |S(x,L_←)| ≤ |S(x,L_←)| + |S(x,L_→)| and |S(x,L_→)| ≤ |S(x,L_←)| + |S(x,L_→)| due to the non-negativity of the cardinality. When the maximum is |S(x,L_←∪ L_→)| (i) is verified since MaxOver(L) by definition returns a cardinality (the one of the overlapping between the largest possible extensions), then MaxOver(L) ≥ 0 and thus |S(x,L_←∪ L_→)| ≤ |S(x,L_←)| + |S(x, L_→)|. (ii) is proved by observing that |I_x| is an overestimation of I and thus|S(x,L)| - |I_x| ≤ |S(x,L)| - |I|.(iii) can be proved by showing that |S(x,L)| ≥ |I_x|. The proof follows by noting that IMS_[τ_1,τ_2](x, L_←) ⊆ S(x,L_←), IMS_[τ_1,τ_2](x, L_→) ⊆ S(x,L_→), and the max operator selects the highest cardinality.Now, let us examine the case of the AND operator. In this case, (i), (ii), and (iii) are proved by observing that since the masks are not overlapping and MinOver(L) returns the minimum possible overlap, then MinOver(L) = 0. Therefore, the max operator in eq. (<ref>) returns |I_x| and, thus |S(x,L)| - |I_x|=|S(x,L)| - |I_x| = 0due to eq. (<ref>). Finally, (i) also holds for the operator AND NOT since MaxOver(L) ≥ 0 and thus |S(x,L_←)| - MaxOver(L) ≤ |S(x,L_←)| and |I_x| ≤ |S(x,L_←)| since eq. (<ref>) can be at maximum |IMS(x,L_←)| and |IMS_[τ_1,τ_2](x,L_←)| ≤ S(x,L_←)|. Since |I_x| ≤ |S(x,L_←)| and |S(x,L_←) ≤ |S(x,L_←)|, then (ii) and (iii) are verified.Now, let us move to the case of fully overlapping masks. In this case, it holds:S(x,L)= max(|S(x,L_←)|,|S(x,L_→)|) op=ORmin(|S(x,L_←)|,|S(x,L_→)|) op=ANDmax(|S(x,L_←)|-|S(x,L_→)|, 0) op=AND NOTBy comparing eq. (<ref>) and eq. (<ref>), (i) holds for the OR operator since masks are fully overlapping, and thus S(x,L_←∪ L_→) is equal to the largest mask betweenS(x,L_←) and S(x,L_→). (iii) is verified because IMS_[τ_1,τ_2](x, L_→)|) ⊆ S(x,L_→) and IMS_[τ_1,τ_2](x, L_←)|) ⊆ S(x,L_←). Finally, (iii) is verified due to the overestimation of I_x and thus eq. (<ref>) holds. In the case of the AND operator, the equation is easily verified by observing that MinOver(L) returns a subset of S(x,L_←) ∩ S(x,L_→) and thus MinOver(L) ≤ min(|S(x,L_←)|,|S(x,L_→)|), and (i) holds. (ii) follows from the property of overestimation of I_x. (iii) is trivially verified by the max operator used in eq. (<ref>).The cases described above (fully overlapping and non-overlapping masks) cover the worst-case scenarios for all the involved operators for the estimation of |S(x,L_←)|. Therefore, when the masks partially overlap, the real label's mask is greater than the case of non-overlapping for the OR operator and bigger than the case of fully overlapping for the AND and AND NOT operator. Thus, since eq. 
(<ref>) holds for them, then it also holds for partially overlapping masks.In conclusion, we proved that |I_x| ≥ |I_x| and that 0 ≤ |S(x,L)| - |I_x| ≤ |S(x,L)| - I, thus, the heuristic is admissible, and it returns the optimal formula inside the beam.§.§ Alternative Heuristics §.§.§ Coordinate-Free Heuristic This heuristic follows the same structure as the MMESH heuristic, but it does not use the coordinates to compute the minimum and maximum possible extension of the label mask. Practically, it avoids the estimation of S(L,x) by setting it to 0. IoU(L, τ_1,τ_2, 𝔇 )= I_x/∑_x ∈𝔇 |M_[τ_1,τ_2](x)|- I_xwhereI_x is defined as in MMESH. This heuristic could be used in place of MMESH in contexts where computing the coordinates of the maximum and minimum extension is too costly. §.§.§ Areas Heuristic This heuristic does not collect additional info during the first step of the CoEx algorithm. Therefore, the IoU is estimated using only the information about the mask size of terms composing the current label.IoU(L, τ_1,τ_2, 𝔇 )= I_x/∑_x ∈𝔇 |M_[τ_1,τ_2](x)|- I_xwhereI_x= min(|S_[τ_1,τ_2](x, L_←)|+ |S_[τ_1,τ_2](x, L_→)|, |M(x)| ) op=ORmin(|S_[τ_1,τ_2](x, L_←)|, |S_[τ_1,τ_2](x, L_→)|)op=ANDmin(|S_[τ_1,τ_2](x, L_←)|, size(x) - |S_[τ_1,τ_2](x, L_→)|) op=AND NOTThe areas heuristic could also be used to speed up the vanilla NetDissect algorithm when it is used as a standalone algorithm. § ADDITIONAL RESULTS This section shows the comparison between NetDissect, CoEx, and Clustered Compositional Explanations on two additional architectures and one more concept dataset.§.§ Other Models <Ref> and <Ref> show the comparison between NetDissect, CoEx, and Clustered Compositional Explanations when the base model is DenseNet <cit.> and AlexNet <cit.>. We can easily observe that the analysis carried on for ResNet also holds in these cases.§.§ Pascal Dataset <Ref> compares NetDissect, CoEx, and Clustered Compositional Explanations when the Pascal dataset <cit.> is used as a probing concept dataset for the algorithms. Note that the dataset includes only colors, objects, and parts in this case. We can see that the differences among algorithms are similar to the ones discussed in Section 4.3, and thus, the insights are valid across different datasets. §.§ ImageNet<Ref> and <Ref> compare NetDissect, CoEx, and Clustered Compositional Explanations when the ResNet18 <cit.> and VGG-16 <cit.> are pretrained on the ImageNetdataset <cit.>. We can see that the differences among algorithms are similar to the ones discussed in Section 4.3, and thus, the insights are valid across different datasets. § NUMBER OF CLUSTERS <Ref> shows the variation in explanations' quality when the Clustered Compositional Explanations algorithm uses a different number of clusters. We can observe a marginal loss in qualities when increasing the number of clusters. Moreover, by manually inspecting the returned labels, we found that several labels are repeated over the clusters, and less than ~30% of the labels are novel with respect to the usage of fewer clusters. We hypothesize that these results can open the door to further research on a novel algorithm that reduces repeated labels over clusters and finds the optimal number of clusters. § OTHER STATISTICS This section lists additional statistics or metrics that can be used with our identified qualities to inspect models and explanations. Scene PercentagePercentage of scene concepts associated with explanations. Scene concepts are concepts whose masks cover the full input. 
<cit.> argues that a high percentage of scene concepts is undesirable in some circumstances.ScenePerc(𝔇, 𝔏^𝔟𝔢𝔰𝔱) = ∑_L ∈𝔏^𝔟𝔢𝔰𝔱 |{t: t ∈ L(|S(x,t)|=n_s ∀ x ∈𝔇) } |/∑_L ∈𝔏^𝔟𝔢𝔰𝔱 |{t: t ∈ L}|where 𝔏^𝔟𝔢𝔰𝔱 is the set that includes the labels associated with each neuron by the algorithm. ImRoU It is a modified version of the IoU score where sparse overlapping between concepts' masks and activations are penalized. It has been used as an optimization metric to reduce the number of scene concepts <cit.>. It is based on the idea of penalizing concepts whose activation is distributed like in a random activation.ImRoU_r(𝔇, [τ_1,τ_2], L) = ∑_x ∈𝔇 |M_[τ_1,τ_2](x) ∩ S(x,L)| - r ×1/n_s× |M_[τ_1,τ_2]| × |S(x,L)|/|M_[τ_1,τ_2](x) ∪ S(x,L)|where the constant r controls the weight of the random intersection. Pearson Coefficent It measures the correlation between the IoU score, the firing rate of a neuron, and the performance. This metric has been used to measure the correlation between the interpretability of a latent space and performance. It is not directly connected to the quality of the returned explanations. Pearson = PearsCoeff(IoU_𝔛, Accuracy_𝔛)where 𝔛 is the set of samples where the neuron fires in the considered range[τ_1,τ_2], and IoU is the set of intersections over union per sample.Average Activation Size The average size of the masks covering the considered range[τ_1,τ_2] over the dataset.AvgActSize(τ_1,τ_2, 𝔇 )= ∑_x ∈𝔇| M_[τ_1,τ_2](x)|/∑_x ∈𝔇 n_sAverage Label's Mask Size The average size of the label's masks over the dataset.AvgLabSize(L, 𝔇)= ∑_x ∈𝔇|S(x,L)|/∑_x ∈𝔇 n_s Average Overlapping The average size of the overlapping between label's masks and neuron's activation covering the considered range[τ_1,τ_2] over the dataset.AvgOverlap(L, τ_1,τ_2, 𝔇)= ∑_x ∈𝔇| M_[τ_1,τ_2](x) ∪ S(x,L)|/∑_x ∈𝔇 n_s Absolute Label Masking The difference in the neuron's activations between normal and fully masked inputs where only the associated label is kept. A low score means the linkage between the label and the activation range is strong. With respect to the Label Masking presented in <Ref>, this variant is not normalized, and thus, it is difficult to compare different neurons or aggregate their scores. In preliminary experiments, we observe that the mean scores reached in this metric are similar among different algorithms, and they suffer from high variance due to the different ranges captured by different neurons.LabMask(L, τ_1,τ_2, 𝔇 ) = ∑_x ∈𝔇| (A_[τ_1,τ_2](θ(x,L)) - ∑_x ∈𝔇|| A_[τ_1,τ_2](x) ∩ S(x,L)|§ ACTIVATION IMPORTANCE <Ref> shows how many times the network changes its prediction when we mask the activations covered by each cluster or covered by the CoEx thresholds. Intuitively, if an activation band is not used in the decision process, then it should never change the prediction if it is masked out.Conversely, we can observe that the change in prediction is similar in almost all the clusters but Cluster 1, which often contains default rules and unspecialized activations. This means that the full spectrum of activations has an impact on the decision process.§ THESHOLD IMPACT In this section, we analyze the impact of the threshold's value. Specifically, we analyze the category distribution of the labels returned by the NetDissect and Compositional Explanations when the threshold is lowered. 
We first consider the following list of top quantiles ranges: {[0.005,∞],[0.01,∞],[0.05,∞],[0.1,∞],[0.2,∞],[0.5,∞]} and then we apply similar ranges but extracting the lowest quantiles, namely {[ϵ,0.005],[ϵ,0.01],[ϵ,0.05],[ϵ,0.1],[ϵ,0.2],[ϵ,0.5]} where ϵ=1e-6.<Ref> shows the percentage of the labels associated by CoEx to 100 randomly extracted units when changing the threshold. We can observe that lowering the threshold (or equivalently increasing the range of considered quantiles) penalizes some label categories and rewards others. For example, we can observe that the colors benefit from larger ranges while objects benefit from smaller ones.At first sight, one could hypothesize that this behavior is due to the larger size of the mask M_[τ_1,τ_2](x) generated by a lower threshold since a larger mask increases the intersection of a large concept when this is not fully detected using a higher threshold. However, this observation is not enough to explain the results. Indeed, lowering the thresholdalso increases the intersection of small concepts. Since their IoU scores converge to 1 faster than ones of large concepts, the distribution of returned labels should be similar on average. reports the average segmentation area per category, confirming this analysis. We can observe that two categories with similar areas, like Color and Object, have opposite behavior in the plots of <Ref>. Moreover, we can see that the distribution of categories is not consistent when considering high and low quantiles, even though their range size is equal.Putting all together, we explain these results using the main hypothesis of this paper: neurons recognize different concepts at different activation levels. § ADDITIONAL DETAILS ABOUT THE SETUP All the models considered in this paper have been pre-trained on the Place365 dataset <cit.>. In particular, the checkpoints used are the same used in the CoEx and NetDissect papers. Annotation for Pascal <cit.> and Ade20K <cit.> datasets are retrieved from the Broden dataset <cit.>.Regarding the formulas' length, we fix the limit to 3, as previously done in literature by <cit.>. According to the results of <cit.>, increasing the formula's length should not impact the results presented in this paper. Another difference with respect to the implementation of CoEx is that we actively check logical equivalences between formulas. This difference means that we use a beam of size 10 only during the first beam, and then we set the beam size to 5 to replicate the configuration of <cit.>.Finally, we choose to use a clustering algorithm over manually splitting the activation space since we desire clusters that aggregate activations associated with a shared semantic (e.g., all the activations that recognize a car inside a single cluster). Conversely, a manual split (e.g., using the percentile) can often separate activations associated with the same concept/s to multiple subsets. In this case, the concept can be overlooked by the algorithm since the overlapping mask is split into different subsets, or multiple splits could be associated with the same label, thus penalizing other concepts.
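The preference for K-Means over a manual split of the activation range, argued above, can be illustrated with a small sketch. The toy unit below, whose non-zero activations come from two synthetic modes, is an assumption made purely to make the contrast visible; the code compares five equal-probability quantile bands with the five K-Means cluster ranges used in this paper's setup.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Toy unit whose non-zero activations come from two synthetic "semantic" modes.
acts = np.concatenate([rng.normal(0.5, 0.05, 2000), rng.normal(2.0, 0.2, 500)])
acts = acts[acts > 0]

# Manual split: five equal-probability quantile bands.
edges = np.quantile(acts, np.linspace(0.0, 1.0, 6))
print("quantile band edges:", np.round(edges, 2))

# Data-driven split: five K-Means clusters on the same activations, as T = {[min(Cls), max(Cls)]}.
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(acts.reshape(-1, 1))
cluster_ranges = sorted((acts[labels == c].min(), acts[labels == c].max()) for c in range(5))
print("K-Means cluster ranges:", [(round(lo, 2), round(hi, 2)) for lo, hi in cluster_ranges])
# The quantile bands are tied to fixed probability mass, while the K-Means ranges follow the
# shape of the activation distribution, which is the motivation given above for clustering.
```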
http://arxiv.org/abs/2310.18443v1
{ "authors": [ "Biagio La Rosa", "Leilani H. Gilpin", "Roberto Capobianco" ], "categories": [ "cs.LG", "cs.AI" ], "primary_category": "cs.LG", "published": "20231027193950", "title": "Towards a fuller understanding of neurons with Clustered Compositional Explanations" }
Automated threshold selection and associated inference uncertainty for univariate extremes
Conor Murphy (this paper is based on work completed while Conor Murphy was part of the EPSRC-funded STOR-i centre for doctoral training (EP/S022252/1), with part-funding from Shell Research Ltd.), Jonathan A. Tawn (Department of Mathematics and Statistics, Lancaster University) and Zak Varty (Department of Mathematics, Imperial College London)
============================================================================================
Threshold selection is a fundamental problem in any threshold-based extreme value analysis. While models are asymptotically motivated, selecting an appropriate threshold for finite samples can be difficult through standard methods. Inference can also be highly sensitive to the choice of threshold. Too low a threshold choice leads to bias in the fit of the extreme value model, while too high a choice leads to unnecessary additional uncertainty in the estimation of model parameters. In this paper, we develop a novel methodology for automated threshold selection that directly tackles this bias-variance trade-off. We also develop a method to account for the uncertainty in this threshold choice and propagate this uncertainty through to high quantile inference. Through a simulation study, we demonstrate the effectiveness of our method for threshold selection and subsequent extreme quantile estimation. We apply our method to the well-known, troublesome example of the River Nidd dataset. Keywords: extreme values, generalised Pareto distribution, river flows, return level, threshold selection, uncertainty quantification.
§ INTRODUCTION
An inherent challenge in risk modelling is the estimation of high quantiles or extrapolation beyond observed levels. This is important when designing policies or protections against future extreme events, e.g., in finance or hydrology <cit.>. Extreme value methods achieve this extrapolation by using asymptotically exact models to approximate the tail of a distribution above a high within-sample threshold. The first challenge is to select a threshold u, above which the GPD gives a good approximation to the data. Here, we develop an automatic threshold selection procedure and novel inference methods that account for the uncertainty in this selection. Throughout, we assume all data considered consist of independent, identically-distributed (iid) observations. To estimate high quantiles of a distribution, known as return levels, a single flexible family of distributions can be used. Suppose a univariate continuous random variable X has distribution function F, with upper endpoint x^F := sup{x : F(x) < 1}. When considering values of X that exceed a high threshold u < x^F, <cit.> showed that after suitable rescaling, the excesses Y = X - u, for X>u, converge in distribution to the generalised Pareto distribution (GPD) as u → x^F.
In practice, a suitably high threshold u must be chosen, so that excesses Y are well-modelled as GPD(σ_u, ξ), which has distribution functionH(y; σ_u, ξ) = 1 - (1 + ξ y/σ_u)_+^-1/ξ,with y > 0, w_+ = max(w,0), shape parameter ξ∈ℝ and threshold-dependent scale parameter σ_u > 0. For ξ = 0,distribution (<ref>)is evaluated in the limit as ξ→ 0, resulting in the exponential distribution. For ξ < 0, X has a finite upper end-point at u - σ_u/ξ but is unbounded above for ξ≥ 0.<cit.> provide an overview of the properties of the GPD and illustrate simple visual threshold selection techniques and diagnostics. Threshold selection involves a bias-variance trade-off: too low a threshold is likely to violate the asymptotic basis of the GPD model, leading to bias, whilst too high a threshold results in very few threshold excesses with which to fit the model, leading to high parameter uncertainty. Thus, we must choose as low a threshold as possible subject to the GPD providing a reasonable fit to the data. There are a variety of methods aiming to tackle this problem, see <cit.> for a review and<cit.> and <cit.> for recent developments. The most commonly used methods suffer from subjectivity. There are a limited number of existing automated threshold selection methods, and we find our approach compares favourably to these. When estimating high quantiles, the available data are often few and so inference is sensitive to the chosen threshold. Reliance on a single thresholdwhen accounting for estimation uncertainty is misleading for inference.Using a simple bootstrap procedure together with our threshold selection approach,both threshold and parameter uncertainty are incorporated into tail inference. Our approach builds closely upon that of <cit.>, adjusting the method to select a constant threshold.Given this, we believe the framework will hold more generally, e.g., with covariate dependence in threshold and/or GPD parameters.In Section <ref>, we outline the standard, subjective approaches to threshold selection based on visual assessments. Section <ref>describes a number of existing automated methods. Section <ref> introduces our procedure for the selection of a threshold, while Section <ref> describes how to incorporate threshold and parameter uncertainty into return level inference. In Section <ref>, the proposed method is compared against existing methods on simulated data.In Section <ref>,the River Nidd dataset of <cit.> is used as a non-trivial example andillustrates the superiority ofour new methodology relativeto theexisting methods.§ BACKGROUNDThe threshold stability property of the GPD is key in manythreshold selection approaches: if excesses of a threshold u follow a GPD then excesses of a higher threshold v (u < v < x^F) will also follow a GPD, with adjusted parameter values, i.e., if X-u|(X>u) ∼GPD(σ_u,ξ), then X-v|(X>v) ∼GPD(σ_u+ξ(v-u),ξ),see supplementary material <ref> for details. By this property, the GPD shape parameter ξ should be equal for all valid choices of threshold. A modelling threshold can be selected as the lowest value for which this property holds, accounting for the sampling variability in the estimates of ξ. The conventional method for this assessment is known as a parameter stability plot <cit.>. For each of a set of candidate thresholds, a parameter stability plot displays the estimated ξ value and the associated confidence interval (CI). 
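To make the parameter-stability idea concrete, the short sketch below simulates data whose excesses of u = 1 follow a GPD(0.6, 0.1), in the spirit of the simulated example discussed next, fits the GPD to the excesses of a grid of candidate thresholds by maximum likelihood, and reports the estimated shape ξ̂ at each level. This is a toy illustration only, not the methodology proposed in this paper: the Exp(1) body below the threshold and the candidate grid are assumptions made for the demonstration, and a parameter stability plot would additionally display confidence intervals around each estimate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Toy data: Exp(1) body below u = 1, GPD(sigma_u = 0.6, xi = 0.1) excesses above it.
n = 2000
body = rng.exponential(1.0, n)
tail = 1.0 + stats.genpareto.rvs(c=0.1, scale=0.6, size=n, random_state=rng)
data = np.where(body > 1.0, tail, body)

for q in [0.5, 0.7, 0.8, 0.9, 0.95]:
    u = np.quantile(data, q)
    excesses = data[data > u] - u
    xi_hat, _, sigma_hat = stats.genpareto.fit(excesses, floc=0.0)   # location fixed at zero
    print(f"u = {u:5.2f} ({q:.0%} quantile): {excesses.size:4d} excesses, "
          f"xi_hat = {xi_hat:6.3f}, sigma_hat = {sigma_hat:5.3f}")
# Estimates of xi should be roughly stable, up to sampling noise, once the threshold is at or
# above a level where the GPD approximation holds; below that level they may drift.
```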
The threshold is selected as the lowest value for which the estimate of ξ for that level is consistent with estimates of ξ at all higher thresholds, i.e., where there is overlap between the associated CIs. Figure <ref> shows two examples of parameter stability plots; the first uses a simulated dataset of 200 random variates, generated from the Case 4 distribution described in Section <ref>, where excesses of the threshold u = 1.0 follow a GPD(0.6, 0.1); the second uses 154 measurements from the River Nidd dataset. Selecting an appropriate threshold using the shape parameter estimates in Figure <ref> is challenging and subjective in both cases because the parameter estimates are dependent across threshold choices and highly uncertain due to the small sample sizes that characterise extreme value analyses. The parameter stability plot for the simulated data shows that values as far from the truth (of ξ = 0.1) as ξ≈ -0.34 fall within all CIs, which incorrectly suggests that the entire dataset might be used, i.e., setting u=0. Focusing on the River Nidd dataset, one might choose a threshold around the 80%-quantile, implying ξ̂≈ -0.1. A major drawback to selecting such a high threshold is the high level of uncertainty in the parameter estimates, shown by the wide bootstrap CIs. The parameter stability plot for the Nidd dataset is particularly difficult to interpret because of the large variation in point estimates for ξ over different threshold choices. Lower candidate threshold values imply a very heavy-tailed distribution (ξ̂≈ 0.5), much heavier than is common for almost all published analyses of environmental datasets. Conversely, high thresholds imply a very short tail, with estimates dropping as low as ξ̂≈ -1. As a result of this unusual behaviour, the Nidd dataset has become a major example for non-trivial threshold selection and is analysed in many papers <cit.>. We apply our new method to this dataset in Section <ref>. Further examples of parameter stability plots are given in supplementary material <ref>. A range of model-based methods have been developed to improve on parameter stability plots for threshold selection. <cit.> treated the threshold as a parameter to be estimated and employed a gamma distribution to model observations below the threshold, which is spliced with the GPD model above the threshold. <cit.> developed a combined model for both extreme and non-extreme data, using a piecewise-constant density up to an unknown transition point that is to be estimated along with a GPD density above this point. <cit.> developed a penultimate non-homogeneous Poisson process (NHPP) model, under which the shape parameter is treated as a piecewise constant function of threshold; a likelihood-based test is used to assess if a single constant shape parameter is appropriate, but this is highly computationally intensive. <cit.> built upon this idea, constructing a hypothesis test of a constant shape parameter against a piecewise-constant parameter across an arbitrary number of thresholds. An illustration of this method applied to a simulated dataset from Section <ref> is given in supplementary material <ref>. While this method formalises the approach of parameter stability plots, the resulting plot of p-values suffers from similar subjectivity of interpretation.

§ CURRENT AUTOMATED METHODS

Automated methods seek to remove the problem of subjectivity by selecting a threshold based on some metric.
<cit.> select a threshold by minimising the mean squared error (MSE) of the Hill estimator for ξ <cit.> through a double-bootstrap procedure. The first bootstrap stage computes the optimal size n_1 for their second bootstrap stage, where n_1<n and n is the data sample size. To reduce computations, the tea package <cit.> fixes n_1=0.9n. Further to the computational intensity, the reliance on asymptotic theory leads to inadequate finite sample performance. <cit.> attempts to address the shortcomings of the previous method by using an adaptation of the Kolmogorov-Smirnov statistic to quantify goodness of fit in the tail of the distribution but using quantiles rather than probabilities. This selects the threshold that minimises the maximum distance between the empirical and modelled quantiles. This approach has three main shortcomings. Firstly, imposing ξ > 0 precludes its use in applications where variables have light upper tails. Secondly, the largest deviations in model fit are often observed at the very highest quantiles of a distribution, which falsely pushes the threshold too high. Finally, and most critically, the method fails to reward the reduced uncertainty that accompanies a larger sample, leading to unnecessarily high threshold choices. The automated threshold selection method of <cit.> is based on the vector ξ̂^* of (standardised) increments in the estimates of ξ between successive ordered candidate thresholds. Above an appropriate threshold for a GPD, the limit distribution for the elements of ξ̂^* is iid standard normal. Below the appropriate threshold, the distribution of the corresponding elements of ξ̂^* will be better approximated by a non-standard normal distribution. A changepoint model describing this behaviour is used for ξ̂^*. By utilising asymptotic theory on the joint distribution of maximum likelihood estimators (MLEs) from overlapping samples of data, a likelihood ratio test is used to assess at which candidate threshold this changepoint model is most appropriate. An overview is given in supplementary material <ref>. The method of <cit.> relies on the large sample asymptotic theory of MLEs which restricts the set of candidate threshold choices in practice (due to failure to converge), with this particularly evident for small samples. This leads to considerable sensitivity to the chosen set of candidate thresholds. We also identify substantial problems when ξ < 0, which is a major restriction. These features are demonstrated through simulation studies and the analysis of the small River Nidd dataset in Sections <ref> and <ref> respectively. <cit.> compare the Bayesian predictive density of GPD fits above a fixed validation threshold v, based on a set of candidate thresholds {u_i}_i=1^k where v ≥ max(u_1, … ,u_k). Inferences are averaged over the posterior distribution of parameters in order to incorporate parameter uncertainty for each candidate threshold. <cit.> fit what they term a binomial-GPD (BGPD) for each u_i. The BGPD takes into account the rate of exceedance of the threshold with an extra parameter λ_u = ℙ(X>u). They quantify the predictive ability of this model conditionally on being above v, for each candidate threshold via leave-one-out cross-validation, and choose the candidate threshold which maximises this measure. An overview is given in supplementary material <ref>. The <cit.> method requires subjective choices of the validation threshold, the prior density, and the below-threshold model.
As defined, the BGPD model is not a valid “density” as it integrates to ∞ and is discontinuous at any candidate threshold. Its results exhibit substantial sensitivity to these choices, e.g., in Section <ref> we illustrate this for the choice of v. Even ignoring the sensitivity due to the subjective choices, we find this method produces quite variable threshold estimates; see Section <ref>. <cit.> develop a procedure to select a time-varying threshold u(t) for earthquake magnitudes. They address a missing data problem by assuming excesses of a constant threshold, u_0 < u(t) for all t, are independent GPD(σ_u_0, ξ) random variables. Values exceeding u(t) are assumed never to be missed whereas, if u(t) > u_0, values in the interval (u_0, u(t)) are potentially missed, such that the distribution of observed excesses of u_0 at time t is not GPD. A reduction in u(t) over time t would correspond to an improved detection of smaller earthquakes. A number of metrics are investigated to quantify the fit of this GPD model, accounting for parameter uncertainty, thus permitting automated selection of u(t) over t within a specified parametric family of model choices for u(t). When u(t) is not constant, its excesses do not share a marginal GPD. To quantify the model fit, <cit.> transform the data onto shared margins using an estimated u(t), the estimated GPD parameters for excesses of u(t), and the probability integral transform. The method also accounts for data rounding. They use the mean absolute deviation from the diagonal of a QQ-plot on Exponential(1) margins to measure fit, with the modelling threshold selected by minimising this average deviation over k bootstrap replications. The chosen threshold then rewards low bias and little sampling variability in the fitted model.

§ NOVEL METRIC-BASED CONSTANT THRESHOLD SELECTION

We adapt the approach of <cit.> to select a constant threshold for continuous, non-missing, iid data. We use a similar QQ-plot-based metric to select a constant value u above which a GPD model is consistent with the data, but make this comparison on the original scale, i.e., without the transformation to Exponential(1) margins. For clarity, our threshold selection method on the original margins will be referred to as the expected quantile discrepancy (EQD), while the method of <cit.> will be termed the Varty method. The following makes the difference between the EQD and Varty methods precise. Let x_u = (x_1, … , x_n_u) be the sample of excesses of candidate threshold u. To incorporate sampling variability into the threshold choice, the expected (average) deviation is calculated across bootstrapped samples of x_u, denoted x^b_u for the b^th bootstrap sample, b=1, … ,k, for an appropriate choice of k. Let T(x;σ,ξ) = F^-1_Exp{H(x;σ, ξ)}, where F^-1_Exp is the inverse distribution function of an Exponential(1) variable, and define T(x_u;σ̂_u,ξ̂_u) = {T(x_1;σ̂_u, ξ̂_u), …, T(x_n_u;σ̂_u, ξ̂_u)}. The sample quantile function Q(p,x_u) : [0,1] → ℝ^+ is defined by linear interpolation of the points {((j-1)/(n_u-1), x^(j)_u): j = 1,…,n_u}, where x^(j)_u is the j^th order statistic of x_u (increasing with j). For the two methods, the mean absolute deviation for x^b_u is calculated for the probabilities {p_j = j/(m+1): j = 1,…,m} by

d_b(u) = 1/m ∑_j=1^m |σ̂^b_u/ξ̂^b_u[(1-p_j)^-ξ̂^b_u - 1] - Q(p_j, x^b_u)|   (EQD),
d_b(u) = 1/m ∑_j=1^m |-log(1-p_j) - Q(p_j, T(x^b_u; σ̂^b_u, ξ̂^b_u))|   (Varty),

where (σ̂^b_u, ξ̂^b_u) are the estimated GPD parameters fitted to the bootstrapped sample x^b_u.
The overall measure of fit is

d̂_E(u) = 1/k ∑_b=1^k d_b(u).

The selected threshold minimises d̂_E(u) over the set of candidate thresholds. Unless stated otherwise, m=500 and k=100 throughout. In supplementary material <ref>, a small simulation study found that for iid data, the EQD outperforms the Varty method in threshold selection and subsequent quantile inference. The Varty method produced lower threshold choices, resulting in additional bias without sufficient variance reduction to offset this. Thus, in Section <ref>, we focus on comparison of the EQD against the <cit.> and <cit.> methods.

§ ACCOUNTING FOR PARAMETER AND THRESHOLD UNCERTAINTY

Even in cases where a true threshold is known, relying on point estimates for the parameters of the fitted model could result in misleading inference <cit.>. CIs can be obtained using the standard error or profile likelihood. However, both methods rely on asymptotic arguments and, since threshold exceedances tend to be sparse, bootstrap methods are preferable. Algorithm 1 details a standard parametric bootstrapping procedure to account for parameter uncertainty when the threshold is known. We first fit a GPD to the n_u excesses of the known threshold u from a total sample of size n (n ≥ n_u). Using the fitted parameters, we simulate m_1 GPD bootstrap samples of n_u excesses of u, and re-estimate the parameters for each sample. A summary statistic of interest s(σ_u, ξ, λ_u), e.g., a high quantile, may be computed for each parametric bootstrap to obtain the relevant bootstrap sampling distribution, where (σ_u, ξ) are the GPD parameters as defined in distribution (<ref>) and λ_u is the rate of exceedance of threshold u[In Algorithm 1, we focus on the uncertainty of the estimates of (σ_u, ξ). Uncertainty in estimates of λ_u could also be incorporated by simulating a number of excesses drawn from a Bin(n, λ_u) distribution for each bootstrap. This uncertainty is included in Algorithm 2.]. This enables the construction of CIs which represent sampling variability of the GPD parameter estimates. GPD inferences are sensitive to the choice of threshold <cit.> but uncertainty about this choice is not represented in Algorithm 1. This is particularly important when s(σ_u, ξ, λ_u) informs the design of hazard protection mechanisms, where omitting the uncertainty in the threshold choice could lead to overconfidence in the resulting estimates and have potentially dangerous consequences. Algorithm 2 provides a novel method to propagate both threshold and parameter uncertainty through to return level estimation, using a double-bootstrap procedure. To focus on the threshold uncertainty and to forgo the need for a parametric model below the threshold, we employ a non-parametric bootstrap procedure on the original dataset. We resample with replacement n values from the observed data m_2 times, estimate a threshold for each such bootstrap sample using the automated selection method of Section <ref>, and fit a GPD to the excesses of this threshold. For each one of the m_2 samples, we employ Algorithm 1 to account for the subsequent uncertainty in the GPD parameters. Calculating a summary statistic for each of the m_1 × m_2 samples leads to a distribution of bootstrapped estimates that accounts for both threshold and parameter uncertainty. Unless stated otherwise, m_1=m_2=200 throughout.
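To make the EQD selection rule and its bootstrap structure concrete, a minimal sketch is given below. The function name eqd_threshold is ours, scipy's genpareto is used for the GPD fits, and a nonparametric bootstrap of the excesses is assumed; these implementation details are not prescribed above, so this should be read as a sketch rather than the authors' implementation.

import numpy as np
from scipy.stats import genpareto

def eqd_threshold(data, candidate_thresholds, m=500, k=100, seed=0):
    # Select the candidate threshold minimising the expected quantile discrepancy.
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    p = np.arange(1, m + 1) / (m + 1)              # probabilities p_j = j/(m+1)
    d_hat = []
    for u in candidate_thresholds:
        excesses = data[data > u] - u
        d_b = []
        for _ in range(k):                          # bootstrap samples of the excesses
            boot = rng.choice(excesses, size=excesses.size, replace=True)
            xi, _, sigma = genpareto.fit(boot, floc=0)
            if abs(xi) < 1e-8:                      # exponential limit as xi -> 0
                model_q = -sigma * np.log(1 - p)
            else:
                model_q = sigma / xi * ((1 - p) ** (-xi) - 1)
            sample_q = np.quantile(boot, p)         # linear-interpolation sample quantiles
            d_b.append(np.mean(np.abs(model_q - sample_q)))
        d_hat.append(np.mean(d_b))
    return candidate_thresholds[int(np.argmin(d_hat))]

The defaults m=500 and k=100 mirror the values stated above; for an analysis like the River Nidd example later in the paper, the candidate thresholds would be sample percentiles of the data and k would be raised to 200.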
In Section <ref> we illustrate how using Algorithm 2 improves the coverage probability of CIs, and in Section <ref> how it widens CIs for return levels of the River Nidd by accounting for threshold uncertainty.

§ SIMULATION STUDY

§.§ Overview

We illustrate the performance of the EQD method against the <cit.> and <cit.> approaches, which we term the Wadsworth and Northrop methods respectively. The approaches of <cit.> and <cit.> performed considerably worse than all others in threshold selection and quantile estimation; results for these methods are given in supplementary material <ref>. To conduct this simulation study, we utilised the following R code. For the Wadsworth method, code is available in the supplementary materials of <cit.> at <https://www.tandfonline.com/doi/full/10.1080/00401706.2014.998345>. For the Northrop method, we used the R package threshr <cit.>. For the results given in supplementary material <ref>, we utilised the tea package <cit.> for the <cit.> method. Note that the package was not built by the authors of the paper. We constructed our own function for the <cit.> method as there did not seem to be code freely available. R code to implement the EQD method and the analyses throughout this paper can be found at <https://github.com/conor-murphy4/threshold_selection_paper> <cit.>. We assess the performance of each method in two scenarios: one where a true threshold exists and the other where it does not exist but the true quantiles are computable. The comparison is based on the methods' ability to estimate the true threshold (when this exists) and high marginal quantiles. We use the root-mean-square error (RMSE) as a measure of performance. The true quantiles are given in supplementary material <ref> and all bias-variance components of RMSE, discussed in this section, are in supplementary material <ref>. For data that are GPD above a true threshold u, excesses of all candidate thresholds above u will follow a GPD, but selecting these thresholds leads to less precise GPD parameter estimates. In contrast, thresholds which are selected below u lead to bias in the GPD fit as the distribution below u is not a GPD. So, we would anticipate some positive bias in threshold selection, with the level of this bias dependent on how different the distribution below the threshold is to the GPD above the threshold. This is particularly relevant for Scenario 1 where there is a true threshold.

§.§ Scenario 1: True GPD tail

Within Scenario 1, we consider four cases, combining different properties above and below the threshold, with the true threshold being u=1.0 in all cases. Table <ref> provides a model description for each case as well as the average sample size. Case 1 is the simplest due to the clearly defined true threshold, large sample size and positive shape parameter. Case 2 consists of samples generated from the same distribution but with a smaller sample size. Case 3 has double the sample size of Case 1 and a negative ξ, close to zero. The Wadsworth method failed to estimate a threshold in samples with ξ < -0.05 irrespective of sample size. Case 3 therefore considers ξ = -0.05 with 2000 exceedances of u, as required for that method to work reasonably. Further case studies, using samples where the Wadsworth method failed to estimate any thresholds, are provided in supplementary material <ref>. Cases 1-3 all have a distinct changepoint in the density at the true threshold. Case 4 provides a more difficult example.
The data are derived from a partially observed GPD, denoted GPDp, with data from a GPD above 0 and rejected if points lay below an independent random variate from a Beta distribution on (0,1). Thus, conditional on a GPDp variable being above 1, the excesses of 1 follow a GPD. The results are based on analysis of 500 replicated samples for each case. For each simulated dataset, we tested the set of candidate thresholds {u_i}_i=1^k, k=20, corresponding to sample quantile levels of 0%, 5%, …, 95% in each replicated dataset, with the true threshold corresponding to the 16.67% quantile for Cases 1-3 and the 52.5% quantile for Case 4. Threshold recovery: Table <ref> shows the RMSE of the chosen thresholds for each method. The EQD achieves RMSEs between 1.9 and 8.0 times smaller than the Wadsworth method and between 4.8 and 9.9 times smaller than the Northrop method. The EQD has the lowest bias and variance in all cases. For Case 2, the failure rate of the Wadsworth method is so large that RMSEs were also derived solely from the samples where the method estimated a threshold. The RMSEs calculated using only these samples showed an increased differential in favour of the EQD method relative to that reported in Table <ref>. Table <ref> shows the RMSEs of threshold choice calculated on 500 replicated samples from Case 1 with a larger sample size of n=20000. The results are shown for each method using two different candidate grids of thresholds. In contrast to the previous results, the Wadsworth method slightly outperforms the EQD, achieving the smallest RMSEs for these large samples. However, the sample size for this to be achieved significantly exceeds that for data in practice. This illustrates the potential benefits, but also serious limitations, of relying on asymptotic methods to guide threshold selection. Quantile recovery: Table <ref> presents the RMSEs for the (1-p_j)-quantiles where p_j = 1/(10^j n) for j=0,1,2, with n denoting the size of the simulated sample. We use exceedance probabilities of this form to ensure that, for all n, extrapolation is equally difficult. For j=0, extrapolation is not required and so the choice of threshold should not be too important; the similar RMSEs across methods reflect this, yet as j increases, all RMSEs increase and the differences between methods become clear. In each case and across all quantiles, the EQD method is uniformly best, followed by the Wadsworth and then the Northrop method. This pattern reflects the findings in Table <ref>, although here the differential performances depend on j. The EQD achieves the lowest bias in the majority of cases and quantiles and shows considerably less variance in quantile estimates in all cases, particularly as j increases. However, there are cases (Case 3 in particular) where the Wadsworth method lies close to the EQD method in terms of RMSE. Table <ref> shows the RMSEs of quantile estimation for n=20,000 in Case 1, where for threshold selection the Wadsworth method has a slight benefit over the EQD. Here, for quantile estimation, there are similar findings: the Wadsworth method achieves the lowest RMSEs, followed closely by the EQD and then the Northrop method, which obtains significantly higher RMSEs. True quantile coverage: To demonstrate the merit of including the uncertainty in threshold selection in our inference, we apply Algorithms 1 and 2 to data from Case 4. Table <ref> presents the coverage probabilities of the nominal 80% and 95% CIs of the estimated (1-p_j)-quantiles over the 500 samples.
Incorporating only parameter uncertainty (Alg 1) leads to underestimation of interval widths and inadequate coverage of the true quantiles, especially as we extrapolate further. The inclusion of the additional threshold uncertainty (Alg 2) leads to more accurate coverage of the true quantiles (particularly for the nominal 95% CIs) across all exceedance probabilities, with the coverage probabilities rising to slightly above the nominal level in each case.

§.§ Scenario 2: Gaussian data

In applications, there is no true or known value for the threshold, so here we explore the case where there is no threshold above which excesses follow a GPD. It is well known that Gaussian tails have very slow convergence towards an extreme value limit <cit.>, yet for the standard form of this distribution, the true (1-p)-quantiles are simply Φ^-1(1-p). Therefore, Gaussian data are likely to be particularly difficult for threshold selection, but methods can be easily assessed via the resultant high quantile estimation. We simulate 500 samples of n iid standard Gaussian random variables, separately for n=2000 and n=20000, and use candidate thresholds at sample quantile levels of 50%, 55%, …, 95% and 50%, 50.5%, …, 95% for the smaller and larger samples respectively. Lower candidate thresholds are unnecessary as the GPD density is monotonically decreasing, in contrast to the Gaussian below its mode. Quantile recovery: Table <ref> shows the RMSEs of the estimated (1-p_j)-quantiles where p_j=1/(10^j n), for j=0,1,2 and n=2000,20000. For n=2000, the EQD method achieves the smallest RMSE, with the Northrop method second, followed by the Wadsworth method. For n=20000, the Northrop method performs best, closely followed by the EQD and then the Wadsworth method. The median and 95% CI of the chosen thresholds for each method are given in supplementary material <ref>. The Northrop method tends to choose slightly higher thresholds than the EQD method in both cases, which leads to a small reduction in bias, but only for the smaller n is the additional variability relative to the EQD a disadvantage. The Wadsworth method incurs significantly more bias due to finding lower thresholds in general. True quantile coverage: We are also interested in assessing the coverage of true quantiles using Algorithms 1-2 for Gaussian data. Table <ref> presents the coverage probabilities of the nominal 80% and 95% CIs of the estimated (1-p_j)-quantiles with n=2000. Alg 1 leads to drastic underestimation of uncertainty, shown by the very low coverage probabilities at both nominal confidence levels. The added threshold uncertainty results in significant increases in coverage of the true quantiles across the 500 samples. While these fall short of the nominal confidence level, this is to be expected due to the known slow convergence of Gaussian data, and the large increase in coverage demonstrates the importance of including this additional uncertainty in inference.

§ APPLICATION TO RIVER FLOW DATA

The widely-studied River Nidd dataset consists of all 154 observed peak daily river flow rates that exceeded 65 m^3/s in the period 1934-1969. Each observation can therefore be deemed “extreme”, though not necessarily well-described by a GPD. <cit.> describes the preprocessing applied to the data so that observations can be treated as independent, and the difficulties this dataset presents for threshold selection and parameter uncertainty, which we reiterated in discussion of Figure <ref>.
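Before turning to the results for these data, the double-bootstrap structure of Algorithm 2, which produces the interval estimates reported in this paper, can be sketched as follows. Here select_threshold stands for any automated selection rule (for example the EQD sketch given earlier) and summary for the statistic s(σ_u, ξ, λ_u) of interest; the code is a schematic reading of the algorithm under these assumptions, not the authors' implementation.

import numpy as np
from scipy.stats import genpareto

def double_bootstrap(data, select_threshold, summary, m1=200, m2=200, seed=0):
    # Bootstrap replicates of summary(u, sigma_u, xi, lambda_u), accounting for
    # threshold uncertainty (outer loop) and GPD-parameter uncertainty (inner loop).
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n, reps = data.size, []
    for _ in range(m2):                               # outer: nonparametric bootstrap
        resample = rng.choice(data, size=n, replace=True)
        u = select_threshold(resample)
        exc = resample[resample > u] - u
        xi, _, sigma = genpareto.fit(exc, floc=0)
        lam = exc.size / n                            # exceedance probability of u
        for _ in range(m1):                           # inner: parametric bootstrap (Alg 1)
            boot = genpareto.rvs(c=xi, scale=sigma, size=exc.size, random_state=rng)
            xi_b, _, sigma_b = genpareto.fit(boot, floc=0)
            reps.append(summary(u, sigma_b, xi_b, lam))
    return np.asarray(reps)

Quantiles of the returned replicates, e.g. np.quantile(reps, [0.025, 0.975]), then give intervals that reflect both sources of uncertainty.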
For the EQD method, we use sample percentiles as candidate thresholds (0%, 1%, …, 99%) and k=200 bootstrap samples for each candidate threshold. The EQD provides a robust estimate û=67.10, lower than previous analyses, which allows the use of far more exceedances in the extreme value analysis, reducing the uncertainty in parameter estimates and subsequent quantile estimation. The Wadsworth and Northrop methods fail to estimate a threshold on this grid due to convergence issues. Table <ref> shows the selected thresholds of each of the methods on a range of different candidate grids[In marked cases, the Northrop method outputted a chosen threshold with some convergence warnings.]. The Wadsworth method fails to estimate a threshold for a dataset of this size unless the candidate grid is made very coarse. This is problematic in practice as using a coarse grid of candidate thresholds is likely to remove the most appropriate threshold from consideration. The Northrop method exhibits substantial variability in the estimated thresholds as the candidate grid is adjusted and requires the grid to be bounded at the validation threshold of the 90%-quantile (if that level is increased, the method either fails or provides convergence warnings). The EQD, however, shows only small variations in the threshold choice as the candidate grid changes, owing to bootstrap sampling variability; this variability can be controlled by increasing k. Figure <ref> shows a QQ-plot for the GPD model using the EQD threshold estimate of û=67.10. The tolerance bounds (shaded) show a reasonable agreement between model and data. Figure <ref> also shows the T-year return level estimates calculated for this threshold, with T ∈ {1, …, 1000}. The 95% CIs incorporate parameter uncertainty alone (dark-shaded) and both parameter and threshold uncertainty (light-shaded) using Algorithms 1 and 2 respectively, with a substantial increase in uncertainty from the latter for larger T; e.g., for the 100- and 1000-year return levels, the CI width increases by a factor of 1.38 and 1.52 respectively. This reiterates how vital it is to incorporate threshold uncertainty into inference.

§ CONCLUSIONS

We proposed a novel and simple approach (EQD) for automatic threshold selection and a technique to propagate uncertainty in the threshold choice through to inference. We compared the EQD method to existing threshold selection methods on the basis of threshold selection and high quantile estimation for iid, continuous data and illustrated its superiority across examples using a range of metrics. We have shown the greater robustness of our method, relative to existing approaches, to changes in the candidate threshold set and demonstrated that incorporating threshold uncertainty with parameter uncertainty improves coverage properties in return level inference. The EQD method avoids many of the shortcomings relating to reliance on asymptotic theory or sensitivity to subjective choices made prior to threshold selection. While this paper has demonstrated the effectiveness of the EQD method in the iid setting, its simple nature leaves it open for easy adjustment for more complex settings.

§ ACKNOWLEDGEMENTS

We are grateful to Ross Towe (Shell) and Peter Atkinson (Lancaster University) for their support and helpful comments during this work.
^1School of Modern Posts & Institute of Modern Posts, Nanjing University of Posts and Telecommunications, Nanjing, P.R. China ^2Key Laboratory of Symbolic Computation and Knowledge Engineering of Ministry of Education, Jilin University, Changchun, 130012, P.R. China [email protected] ^3National Data Center of Traditional Chinese Medicine, China Academy of Chinese Medical Sciences, Beijing, 100700, P.R. China [email protected] ^4College of Computer Science and Technology, Jilin University, Changchun, 130012, P.R. China [email protected] ^5School of Computer Science and Engineering, Southeast University, Nanjing, China ^6Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, Nanjing, [email protected] ^7School of Mathematics, Southeast University, Nanjing, P.R. China [email protected], [email protected] ^8Chien-Shiung Wu College, Southeast University, Nanjing, [email protected], [email protected]

Ontology revision aims to seamlessly incorporate a new ontology into an existing ontology and plays a crucial role in tasks such as ontology evolution, ontology maintenance, and ontology alignment. Similar to repairing single ontologies, resolving logical incoherence in the task of ontology revision is also important and meaningful, because incoherence is a main potential factor to cause inconsistency, and reasoning with an inconsistent ontology will obtain meaningless answers. To deal with this problem, various ontology revision approaches have been proposed to define revision operators and design ranking strategies for axioms in an ontology. However, they rarely consider axiom semantics, which provides important information to differentiate axioms. In addition, pre-trained models can be utilized to encode axiom semantics, and have been widely applied in many natural language processing tasks and ontology-related ones in recent years. Therefore, in this paper, we study how to apply pre-trained models to revise ontologies. We first define four scoring functions to rank axioms based on a pre-trained model by considering various information from an ontology. Based on the functions, an ontology revision algorithm is then proposed to deal with unsatisfiable concepts at once. To improve efficiency, an adapted revision algorithm is designed to deal with unsatisfiable concepts group by group. We conduct experiments over 19 ontology pairs and compare our algorithms and scoring functions with existing ones. According to the experiments, our algorithms could achieve promising performance. The adapted revision algorithm could improve efficiency substantially, and up to about 90% of the time could be saved for some ontology pairs. Some of our scoring functions could help a revision algorithm obtain better results in many cases, especially for challenging ontology pairs. We also provide discussion about the overall experimental results and guidelines for users to choose a suitable revision algorithm.

Keywords: Ontology revision; Inconsistency handling; Ontology matching; Pre-trained models; Knowledge reasoning

§ INTRODUCTION

Ontologies play a crucial role in the formal representation of knowledge. An ontology could define a set of entities including classes, properties or individuals, and it can also define axioms to describe the relationships among entities.
After the Web Ontology Language (OWL)[<https://www.w3.org/TR/owl-overview/>], which is based on Description Logics (DLs), became a recommended specification of the W3C, and especially with the development of knowledge graphs <cit.>, ontologies have played an increasingly important role. Currently, numerous OWL ontologies have been developed and applied in various fields like biological medicine <cit.>, public transportation <cit.> and the financial area <cit.>. Furthermore, ontologies provide schema restrictions for knowledge graphs <cit.> to facilitate their integration, querying, and maintenance. With the rigorous logical semantics provided by DLs, new logical consequences can be inferred from the axioms explicitly defined in an OWL ontology by applying a standard DL reasoner. Reasoning support is an essential characteristic of an OWL ontology. It requires that the ontology being reasoned over is consistent; otherwise, meaningless results will be obtained. A typical reason for inconsistency in DLs is unsatisfiable concepts, which are interpreted as empty sets. An ontology is incoherent if it contains at least one unsatisfiable concept. Incoherence often occurs when developing, maintaining and revising ontologies <cit.>. Resolving incoherence is critical to eliminate the potential inconsistency and make reasoning support work correctly. To resolve unsatisfiable concepts in single ontologies, researchers have proposed various approaches <cit.>. In this paper, we focus on resolving incoherence when revising ontologies. For the task of ontology revision, an original ontology should be revised consistently when a new ontology is received; namely, adding a new ontology to an original one should not lead to any inconsistency or incoherence. It is assumed that both the original ontology and the new one need to be consistent and coherent. Ontology revision has broad application scenarios. In the case of ontology alignment or ontology matching, two source ontologies are used to revise an alignment between them <cit.>, where an alignment consists of a set of mappings describing the relationships between entities from the two source ontologies. Each mapping can be translated to an OWL axiom <cit.>. Even for a single ontology, it can be divided into a static part and a rebuttal one, and then the static part can be used to revise the rebuttal one <cit.>. So far, researchers have proposed various approaches to revising ontologies by deleting some axioms. One critical task is to decide which axioms should be removed for regaining the consistency or coherence of an ontology. The work in <cit.> proposed a kernel revision operator and chose axioms by considering their weights or frequencies. The work in <cit.> also adopted the notions of kernel revision operators and incision functions to deal with inconsistency by utilizing trust information. To improve efficiency, the authors in <cit.> proposed a graph-based method to revise DL-Lite ontologies, and ranked axioms according to their logical closures. The work in <cit.> revised ontologies based on a partial order of axioms. The method proposed in <cit.> revised ontologies interactively and ranked axioms based on logical reasoning. In the case of revising ontology mappings, the weights of mappings can also be exploited to differentiate the axioms in an alignment <cit.>. Although existing ontology revision approaches provide various strategies or scoring functions to rank axioms, they rarely consider the semantics of axioms, which provides an efficient way to differentiate axioms.
To make use of the semantics of axioms, pre-trained models can be applied, because they can learn universal language representations from large corpora and represent words in context <cit.>. Pre-trained models have been widely applied in many natural language processing tasks and ontology-related tasks in recent years and have achieved promising performance <cit.>. Through a pre-trained model, ontology axioms can be encoded as vectors that preserve their semantics. In this paper, we study how to revise ontologies based on pre-trained models. Specifically, ontology axioms need to be translated into natural language sentences first, and then a dense vector could be computed for each sentence based on a pre-trained model. With the obtained vectors, the similarity between any two axioms could be calculated by using a traditional distance metric like Cosine Distance. Based on the similarities, four scoring functions are defined for the ontology revision task. Afterwards, two concrete algorithms are designed to resolve the incoherence encountered when revising an ontology according to a new one. One algorithm needs to compute all minimal incoherence-preserving sub-ontologies (MIPS) first and then associates a score to each axiom in an obtained MIPS according to a scoring function. After that, a revision solution can be obtained based on the subsets extracted from MIPS according to axiom scores. The other algorithm deals with unsatisfiable concepts group by group to cope with the high computation cost of all MIPS. To verify our algorithms and scoring functions, we compare our algorithms with existing algorithms over 19 pairs of ontologies. Each single ontology is coherent and consistent, but their combination is incoherent. Through the experiments, it is revealed that our ontology revision algorithms could achieve promising performance by utilizing the semantics of axioms. Finally, we discuss the overall experimental results and provide guidelines for users to choose a suitable revision algorithm. The main contributions of this paper are summarized as follows: * Four scoring functions are designed for the task of ontology revision and rank axioms based on a pre-trained model to encode the semantics of axioms. These functions consider various aspects of an ontology to be revised and a new ontology to be combined. * We propose two ontology revision algorithms. One needs to compute all MIPS for all unsatisfiable concepts, and the other deals with unsatisfiable concepts group by group. Both algorithms can be configured with various parameters such as similarity measures and scoring functions. * We implement and evaluate our algorithms over 19 ontology pairs by comparing them with existing ones. Experimental results reveal that our adapted algorithms could achieve much higher efficiency than the ones based on all MIPS, and up to about 90% of the time can be saved. Additionally, our adapted algorithm has excellent performance in many cases, especially for those challenging ontologies. Finally, some guidelines are provided to users for choosing a specific revision algorithm.
Related works are introduced in Section <ref>, followed by conclusions and future works in Section <ref>.

§ BACKGROUND KNOWLEDGE

This section introduces some preliminaries of Description Logic ontologies and the basic definitions related to ontology revision. It also provides some basic knowledge about pre-trained models and similarity metrics.

§.§ Description Logics ontologies

A Description Logic (DL) ontology is built from a set of concepts, roles, and individuals. Individuals represent individual instances in a domain, concepts represent collections of instances in the domain, and roles represent binary relations between instances. Entities can be atomic concepts, atomic roles, or individuals. They can also be complex concepts or roles constructed by connecting entities using various constructors, such as existential quantifiers (∃), universal quantifiers (∀), conjunction (⊓), or disjunction (⊔). A DL ontology can also define axioms to describe the relationships between entities. The axioms are typically divided into TBox axioms and ABox axioms. A TBox includes concept axioms and role axioms, which have formats like C ⊑ D and R ⊑ S, where C and D are concept descriptions, and R and S are role descriptions. An ABox may include concept assertions and role assertions, which have formats like C(a) and R(a,b), where a and b are specific instances. Take an ontology including axioms ϕ_0, ϕ_1, …, ϕ_9 as an example. Among these axioms, the axiom ϕ_2 represents that two concepts Judge and Student are disjoint. ϕ_6 indicates that one role is a sub-role of another. ϕ_7 means that a concept has a given individual as an instance. Similarly, ϕ_8 describes that an individual belongs to a concept. ϕ_9 asserts a role relation between two individuals. Each of the other axioms represents a subsumption relation between two atomic concepts. For this ontology, its TBox includes the axioms from ϕ_0 to ϕ_6, and its ABox contains the remaining axioms. Different DL languages may contain diverse constructors and axiom types. A basic DL language is 𝒜ℒ, which allows atomic negation, concept conjunction, universal quantifiers and limited existential quantifiers. A more expressive DL language can be obtained by adding various factors to 𝒜ℒ, such as 𝒩: number restrictions, ℐ: inverse roles, 𝒪: nominals, 𝒟: datatypes and ℋ: role hierarchy. In addition, 𝒮 is obtained by adding transitive roles to 𝒜ℒ𝒞. In DLs, concepts and roles correspond to classes and properties in OWL, respectively. DL-Lite is an important sub-language of OWL, and is specifically tailored to capture basic ontology languages while keeping all reasoning tasks tractable <cit.>. Since the complexity of reasoning with DL-Lite ontologies is polynomial time, many inconsistency handling approaches were designed for DL-Lite ontologies.

§.§ Ontology revision in DLs

Before introducing the definitions related to ontology revision, we first provide those related to a single ontology. (Inconsistent Ontology) An ontology K is inconsistent if and only if the model set of K is empty. For inconsistent ontologies, the conclusions derived by standard DL reasoners are likely to be completely meaningless. (Unsatisfiable Concept)<cit.> A named concept C in an ontology K is unsatisfiable if and only if for every model I of K, C^I=∅. Otherwise, C is satisfiable. Definition <ref> states that C in K is unsatisfiable if and only if it is interpreted as an empty set in every model of K.
(Incoherent Ontology)<cit.> An ontology K is incoherent if and only if there exists at least one unsatisfiable named concept in K. Since declaring an instance of an unsatisfiable concept would lead to inconsistency, incoherence is a potential factor to cause inconsistency. Usually, incoherence occurs in a TBox, while inconsistency is discussed across an entire ontology. In this paper, we only focus on incoherence and follow the definitions given in <cit.>. To explain why a concept is unsatisfiable, several or all minimal unsatisfiability-preserving subsets (MUPS) can be computed. (MUPS) Let C be an unsatisfiable concept in an ontology K. A subset K'⊆ K is considered as a MUPS of C in K if C is unsatisfiable in K', but satisfiable in any K”⊂ K'. A MUPS of C is actually a minimal sub-ontology of K that explains the unsatisfiability of C. In real-life incoherent ontologies, an unsatisfiable concept often has more than one MUPS. To explain the incoherence of an ontology, a set of minimal incoherence-preserving subsets (MIPS) can be computed. (MIPS) For an incoherent ontology K, its sub-ontology K'⊆K is a MIPS of K if K' is incoherent and every sub-ontology K”⊂K' is coherent. Obviously, a MIPS is a MUPS of some unsatisfiable concept, and a MUPS contains all axioms of some MIPS. MUPS or MIPS can be used to resolve the unsatisfiability of a concept or the incoherence of an ontology. The ontology given in Example <ref> is incoherent and inconsistent. There is one unsatisfiable concept, which has one MUPS {ϕ_2, ϕ_4, ϕ_5}. This MUPS means that the concept has two disjoint super-concepts. Since there is only one MUPS, it is also a MIPS. In the case of ontology revision, it is usually assumed that we have an ontology K to be revised and a new ontology K_0 to be combined without making any changes. Each of the two ontologies is coherent and consistent, while their combination is inconsistent or incoherent. In this paper, we focus on resolving incoherence since incoherence is a potential factor to cause inconsistency. Since K is changeable and K_0 is unchangeable, we call K a rebuttal ontology and K_0 a reliable ontology. To resolve incoherence, existing works designed algorithms to delete or modify some axioms in K. We focus on deleting axioms. The task of ontology revision in this work can be formally defined as follows. (Ontology Revision) Let K be a rebuttal ontology and K_0 be a reliable ontology. Ontology revision is to remove a set of axioms S from K such that removing the axioms in S from K makes the union of K_0 and the modified K coherent. Namely, (K∖ S)∪ K_0 ⊭ C⊑⊥ for any named concept C in K. From Definition <ref> we can see that the selection of axioms in S is the key to revising an ontology. Among existing ontology revision approaches, computing a set of axioms to be removed based on MIPS is a popular way <cit.>. In this work, we also follow this idea. Since the computation of MIPS often depends on all MUPS of all unsatisfiable concepts, we provide the formal definitions of MUPS and MIPS for ontology revision. For clarity, we use R-MUPS and R-MIPS to separately indicate MUPS and MIPS used in the task of ontology revision. (R-MUPS) Assume we have a reliable ontology K_0 and an ontology K to be revised. For an unsatisfiable concept C in K w.r.t. K_0, a R-MUPS K'⊆ K of C w.r.t. K_0 satisfies: (1) C is unsatisfiable in K' ∪ K_0; (2) For each K”⊂ K', C is satisfiable in K”∪ K_0. For convenience, the set of all R-MUPS of C in K w.r.t. K_0 is denoted by MUPS_K_0(K, C). (R-MIPS) Assume we have a reliable ontology K_0 and an ontology K to be revised.
A R-MIPS K' of K w.r.t. K_0 is a subset of K and satisfies the following conditions: (1) K' ∪ K_0 is incoherent; (2) For each K”⊂ K', K”∪ K_0 is coherent. The set of all R-MIPS of K w.r.t. K_0 is denoted by MIPS_K_0(K). According to the definitions of R-MUPS and R-MIPS, we can see that they become MUPS and MIPS, respectively, when K_0 is an empty set. Following Example <ref>, assume we have a reliable ontology K_0={ϕ_0, ϕ_1, ϕ_2} and a rebuttal ontology K={ϕ_3, ϕ_4, ϕ_5, ϕ_6}. The concept in K is unsatisfiable. It has a R-MUPS M={ϕ_4, ϕ_5}, because it is unsatisfiable in M∪ K_0 but satisfiable in any M'∪ K_0 where M'⊂ M. The R-MUPS M is also a R-MIPS.

§.§ Pre-trained models and similarity metrics

Pre-trained models primarily leverage a large amount of unlabeled data available on the web for training, avoiding the high cost of supervised annotation. They could represent words or sentences with high-dimensional vectors in a semantic way, and thus have been widely accepted in both academic and industrial fields <cit.>. They can be used to perform various tasks without a resource-consuming training process. One of the most popular pre-trained models is BERT (Bidirectional Encoder Representations from Transformers) <cit.>, which was proposed by Google and consists of a bidirectional encoder based on the Transformer architecture. BERT has achieved excellent results in various ontology or knowledge graph-related tasks <cit.>. In this paper, BERT is adopted to obtain vectors of sentences due to its semantic representation. Before applying a pre-trained model, axioms need to be transformed into natural language sentences first. Similar to our previous work in <cit.>, we use the tool NaturalOWL[<http://www.aueb.gr/users/ion/publications.html>], which is described in the work of <cit.>. After that, the sentences can be converted into vectors by applying a pre-trained model. With these vectors, the similarity between two sentences could be computed by exploiting a distance metric or similarity metric like Cosine Distance and Euclidean Distance. To be specific, the following definitions show how to compute a similarity value for two vectors based on a distance metric. (sim_cos) The similarity metric based on Cosine Distance (marked as sim_cos) is formally defined as follows:

sim_cos(v_1,v_2) = 1/2 (1 + (v_1 · v_2)/(||v_1|| × ||v_2||)) = 1/2 (1 + (∑_i=1^d v_1i × v_2i)/(√(∑_i=1^d (v_1i)^2) × √(∑_i=1^d (v_2i)^2)))

Here, v_1i and v_2i indicate the ith element in the vectors v_1 and v_2 respectively. ||v_1|| and ||v_2|| indicate the norms of v_1 and v_2 separately. d is the dimension of v_1 or v_2; both vectors have the same dimension. (sim_euc) The similarity metric based on Euclidean Distance (marked as sim_euc) is defined as follows:

sim_euc(v_1,v_2) = k/(k + √(∑_i=1^d (v_1i-v_2i)^2))

Here, k is a positive integer and d indicates the dimension of v_1 or v_2. Both vectors have the same dimension. According to this definition, we can observe that the greater k is, the greater the similarity values based on Euclidean Distance become. Both similarity functions range from 0 to 1 since they have been normalized already.

§ APPROACH

In this section, we present our approach to resolving the incoherence in the case of ontology revision. To resolve the incoherence in a rebuttal ontology K w.r.t. a reliable ontology K_0 by removing some axioms from K, a natural way is to remove at least one axiom from each R-MIPS in MIPS_K_0(K).
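Before introducing the revision machinery, the two similarity metrics defined in the previous subsection can be made concrete with a short sketch; only numpy is used, the input vectors are assumed to be BERT embeddings of the verbalised axiom sentences, and k = 15 is the value adopted later in the experiments.

import numpy as np

def sim_cos(v1, v2):
    # Cosine-distance-based similarity, rescaled to lie in [0, 1].
    v1, v2 = np.asarray(v1), np.asarray(v2)
    return 0.5 * (1.0 + np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def sim_euc(v1, v2, k=15):
    # Euclidean-distance-based similarity in (0, 1]; larger k yields larger values.
    v1, v2 = np.asarray(v1), np.asarray(v2)
    return k / (k + np.linalg.norm(v1 - v2))

# In practice v1 and v2 would come from encoding two axiom sentences with BERT.
rng = np.random.default_rng(0)
v_a, v_b = rng.normal(size=(2, 768))
print(sim_cos(v_a, v_b), sim_euc(v_a, v_b))

Both functions return values in [0, 1], so they can be plugged directly into the threshold-based similarity and the scoring functions defined below.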
In the following, we first define an incision function to choose axioms from R-MIPS.(Incision Function) Assume we have a reliable ontology K_0 and a rebuttal ontology K. An incision function σ for K w.r.t. K_0 is a function from 2^2^K to 2^K and is defined as follows: (i) σ(MIPS_K_0(K))⊆⋃_M∈MIPS_K_0(K) M; (ii) if M ∈MIPS_K_0(K), then M∩σ(MIPS_K_0(K))≠∅. In Definition <ref>, the first condition indicates the axioms selected from K by applying an incision function must belong to the union of all R-MIPS of K w.r.t. K_0. The second one means the set of selected axioms must have an intersection with each R-MIPS. Overall, an incision function provides a general standard about how to choose axioms for removing to regain coherence.As explained in our previous work in <cit.>, an incision function is desired to be minimal for catering to the characteristic of minimal change. A minimal incision function can be formally defined as follows.(Minimal Incision Function) Assume we have a reliable ontology K_0 anda rebuttal ontology K. An incision function σ for K w.r.t. K_0 is minimal if there is no other incision function σ' for K such that σ'(MIPS_K_0(K))⊂σ(MIPS_K_0(K)).A minimal incision function selects a minimal set of axioms from each R-MIPS. Among all of the minimal incision functions, some of them may select more axioms than others. For example, if two minimal incision functions select axioms {a_1, a_2, a_3} and {a_1, a_4} separately, the first incision function chooses one more axiom than the second one. In the following, an incision function is defined to choose axioms with the minimal cardinality. (Cardinality-Minimal Incision Function) Assume we have a reliable ontology K_0 anda rebuttal ontology K. An incision function σ for K w.r.t. K_0 is cardinality-minimal if there is no other incision function σ' for K such that |σ'(MIPS_K_0(K))|< |σ(MIPS_K_0(K))|. In Definition <ref>, for a set S, |S| indicates the number of elements in the set S, namely the cardinality of S. The definition shows that, for the given ontologies K and K_0, the number of axioms chosen by a cardinality-minimal incision function is minimal.Note that the resulting set of an incision function corresponds to a diagnosis, and that of a minimal (or cardinality-minimal) incision function corresponds to a minimal (or cardinality-minimal) diagnosis <cit.>. Removing all axioms in a diagnosis from an ontology will regain coherence of the ontology.With an incision function, a kernel revision operator can be formally defined as follows. (Kernel Revision Operator) Assume we have a reliable ontology K_0,a rebuttal ontology K, and an incision function σ. The kernel revision operator ∘_σ for K w.r.t. K_0 is defined below: K∘_σ K_0=(K∖σ(MIPS_K_0(K))) ∪K_0.Definition <ref> means that a unique ontology can be obtained by removing those axioms selected by an incision function. According to the definition of an incision function, we know that the resulting ontology of such an operator must be coherent.As we can see, it is a critical step to decide which axioms in a R-MIPS should be removed. Usually, the principle of a minimal change is desired when deleting information. It would be better to remove fewer axioms or remove some axioms with less information loss (e.g., weights or trusts) <cit.>.In this work, we consider the semantic information of axioms and provide the following scoring functions to rank each axiom in a R-MIPS. When ranking an axiom, we consider the semantic relationship between it and any other axiom in a given set. 
The similarity between two axioms can be measured by the similarity metrics sim_cos or sim_euc based on a pre-trained model (see Definitions <ref> and <ref>). A low similarity between two axioms usually means that they have a weak semantic relationship, or that they have no semantic relationship at all. To reduce the influence of low similarity values, a threshold is used and only those similarity values over the threshold are considered. (Similarity between an axiom set and an axiom) Assume we have a reliable ontology K_0 and a rebuttal ontology K. Given an axiom α in K, an axiom set S⊆ K∪ K_0 and a predefined threshold t, we define the similarity between S and α with respect to t as follows:

sim_K_0^K(S, α, t) = 1/(|S'|+1) ∑_β∈ S' sim(v_α,v_β), where S'={β∈ S | sim(v_α,v_β) ≥ t}.

Definition <ref> computes the average similarity between an axiom α and all axioms in an axiom set S with a threshold t. Firstly, it extracts a subset S' from S, so that the similarity between each axiom in S' and α is not less than t. Then, the average similarity between α and an axiom in S' is calculated. This definition is similar to the threshold-based degree given in our previous work <cit.>. Their main difference is that we add 1 to |S'| for smoothing our scoring function. In this way, we could avoid the case that the denominator equals zero when S' is an empty set. Based on Definition <ref>, a scoring function could be defined based on all R-MIPS. Since the axioms in a R-MIPS are often regarded as problematic information, an axiom in a R-MIPS would be more problematic if it had higher similarity with other axioms in R-MIPS. Definition <ref> realizes this idea. (Scoring function based on MIPS union) Assume we have a reliable ontology K_0 and a rebuttal ontology K. For an axiom α in K and a predefined threshold t, the score of α based on the R-MIPS union can be defined as:

score_mipsUnion(K,K_0,α, t) = sim_K_0^K(⋃_M ∈ MIPS_K_0(K) M, α, t).

This definition roughly calculates the average similarity between the axiom α and an axiom in the union of all R-MIPS. Alternatively, for an axiom to be ranked, we could consider the relationship between it and each R-MIPS. (Scoring function based on MIPS) Assume we have a reliable ontology K_0 and a rebuttal ontology K. For an axiom α in K and a predefined threshold t, the score of α based on R-MIPS can be defined as follows:

score_mips(K,K_0,α,t) = 1/|MIPS_K_0(K)| ∑_M ∈ MIPS_K_0(K) sim_K_0^K(M,α,t).

The definition first calculates the similarity between the axiom to be ranked and a R-MIPS, and then averages all of the obtained similarities. Similarly, since all axioms in a rebuttal ontology are unreliable, an axiom should have a high priority to be removed if it has a high similarity with the rebuttal ontology. Thus, we obtain another scoring function, score_rebuttalOnt. (Scoring function based on a rebuttal ontology) Assume we have a reliable ontology K_0 and a rebuttal ontology K. For an axiom α in K and a predefined threshold t, the score of α based on K can be defined as follows:

score_rebuttalOnt(K,K_0,α, t) = sim_K_0^K(K, α, t).

The definition ranks an axiom with the average similarity between the axiom and an axiom in a rebuttal ontology. Additionally, because all axioms in a reliable ontology are stable and reliable, and it is often desired to have a compact ontology containing axioms that are semantically connected, an axiom should have a high priority to be removed if it has a low similarity with the reliable ontology. To realize this idea, we obtain the following scoring function.
(Scoring function based on a reliable ontology) Assume we have a reliable ontology K_0 and an ontology K to be revised. For an axiom α in K and a predefined threshold t, we define its score:

score_reliableOnt(K,K_0,α, t) = sim_K_0^K(K_0,α,t).

The definition ranks an axiom with the average similarity between the axiom and an axiom in the reliable ontology K_0. Following the two ontologies given in Example <ref>, we provide the process of ranking axioms in a R-MIPS by taking the scoring function score_reliableOnt with the similarity metric sim_cos as an example. For the R-MIPS {ϕ_4, ϕ_5} in K w.r.t. K_0, we rank the axiom ϕ_4 first. After applying the pre-trained model BERT to compute vectors for the axioms in K and K_0, we obtain: sim_cos(ϕ_4, ϕ_0)≈ 0.81, sim_cos(ϕ_4, ϕ_1)≈ 0.78 and sim_cos(ϕ_4, ϕ_2)≈ 0.74. Assuming t=0.5, we obtain the score of ϕ_4: score_reliableOnt(K,K_0,ϕ_4, 0.5) = sim_K_0^K(K_0,ϕ_4,0.5) = 1/(3+1) (sim_cos(ϕ_4, ϕ_0)+sim_cos(ϕ_4, ϕ_1)+sim_cos(ϕ_4, ϕ_2)) ≈ 1/4 (0.81+0.78+0.74) ≈ 0.58. Similarly, we have sim_cos(ϕ_5, ϕ_0)≈ 0.61, sim_cos(ϕ_5, ϕ_1)≈ 0.54 and sim_cos(ϕ_5, ϕ_2)≈ 0.56. The score of ϕ_5 can be calculated: score_reliableOnt(K,K_0,ϕ_5, 0.5) = 1/(3+1) (sim_cos(ϕ_5, ϕ_0)+sim_cos(ϕ_5, ϕ_1)+sim_cos(ϕ_5, ϕ_2)) ≈ 1/4 (0.61+0.54+0.56) ≈ 0.43.

§ ALGORITHM

In this section, we design two specific algorithms for ontology revision based on the scoring functions defined in the previous section. Before introducing the algorithms, we first present the details of computing a diagnosis for a set of R-MIPS. Algorithm <ref> describes the steps to compute a diagnosis. It takes a reliable ontology K_0, a rebuttal ontology K and a set of R-MIPS as inputs, and outputs a diagnosis to resolve all inputted R-MIPS. When computing R-MIPS, an existing incoherence-detecting algorithm (see <cit.>) could be used. In the algorithm, a scoring function should first be selected to assign scores to the axioms in the union of all R-MIPS. In this algorithm, the scoring function score_mips is selected (see Line 4). From Line 6 to Line 8, the algorithm chooses a subset from each R-MIPS by considering those axioms with the highest score. Line 9 computes a diagnosis over the extracted subsets by applying an integer linear programming (abbreviated as ILP) solver, which is similar to our previous algorithm given in <cit.>. The ILP-based method (𝒞) first associates a binary variable to each axiom in the union of the elements in 𝒞, and then constructs an objective function over these variables. For each element in 𝒞, a constraint is constructed. An optimal assignment is finally generated such that all constraints are satisfied, and it can be translated to DL axioms easily. It is noted that the algorithm invokes the scoring function score_mips to rank axioms (see Line 3), which can be replaced by the scoring functions score_mipsUnion or score_rebuttalOnt. If score_reliableOnt is applied, Line 7 should be modified by changing the greater-than symbol to a less-than symbol. Namely, we choose those axioms with the lowest score from each R-MIPS. In the following, an example is provided to illustrate the step of subset extraction. Following Example <ref>, we illustrate how to extract a subset from a R-MIPS when score_reliableOnt is applied in Algorithm <ref>. For the R-MIPS {ϕ_4, ϕ_5}, the subset {ϕ_5} is extracted from it since score_reliableOnt(K,K_0,ϕ_5,0.5) < score_reliableOnt(K,K_0,ϕ_4,0.5). Based on Algorithm <ref>, we propose an algorithm to revise an ontology based on all R-MIPS (see Algorithm <ref>).
It takes a reliable ontology K_0 and a rebuttal ontology K as inputs, and outputs a diagnosis to resolve the incoherence in K w.r.t. K_0.In this algorithm, all R-MIPS are computed first and a diagnosis can be calculated by invoking Algorithm <ref> (see Lines 2-3). Although the obtained diagnosis is minimal with respect to the subsets obtained in Algorithm <ref>, it may not be minimal regarding the R-MIPS. For example, assume we have two R-MIPS {a_1: 0.6, a_2: 0.5} and {a_1: 0.6, a_3: 0.7}, and the corresponding subsets extracted by an extraction strategy are {a_2: 0.5} and {a_1: 0.6} separately, where a_i (i=1, 2, 3) indicates an axiom and a real number like 0.5 or 0.6 represents a weight. The final diagnosis is {a_1, a_2}. In the set, a_2 is actually redundant since removing a_1 has resolved the two R-MIPS already. Therefore, the algorithm checks all axioms in the diagnosis D' and removes those redundant axioms (see Lines 5-8 in Algorithm <ref>). Finally, a minimal diagnosis D is obtained.By removing all axioms in D from K, the union of modified K and K_0 becomes coherent. Since computing all R-MIPS is often time-consuming or memory-consuming <cit.>, it may be infeasible to obtain all of them within limited resources. To deal with this problem, our previous work in <cit.> proposed an adapted revision algorithm to resolve unsatisfiable concepts one by one. However, it may not be necessary to revise ontologies in this way since computing all R-MUPS for an unsatisfiable concept is much easier due to their small sizes. Therefore, we design a trade-off revision algorithm to deal with unsatisifable concepts group by group. Namely, a set of local R-MIPS can be obtained over all R-MUPS of all unsatisfiable concepts in a group, and then we focus on ranking the axioms in local R-MIPS. In this way, a local revision solution may be obtained.A local R-MIPS can be formally defined as follows: (Local R-MIPS) For a reliable ontology K_0 and a rebuttal ontology K, assume we have a set of unsatisfiable concepts S in K w.r.t. K_0. A sub-ontology K'⊆K is a local R-MIPS of K w.r.t. K_0 if the following conditions hold: (1) There exists a concept C in S such that C is unsatisfiable in K' w.r.t. K_0; (2) Each concept in S is satisfiable in every sub-ontology K”⊂K' w.r.t. K_0. Different with a global R-MIPS, a local R-MIPS is calculated based on a set of unsatisifalbe concepts in a rebuttal ontology with respect to a reliable one while not based on all unsatisifalbe concepts. Thus, a local R-MIPS is a R-MUPS, but it may not be a global one. Based on the definition of local R-MIPS, we design an adapted algorithm to resolve all unsatisfiable concepts group by group (see Algorithm <ref>). The inputs of the algorithm include an ontology K to be revised, a reliable ontology K_0 and a step length n to deal with a fixed number of unsatisfiable concepts for each iteration. Its output is a diagnosis to resolve all unsatisfiable concepts in K w.r.t. K_0. In Algorithm <ref>, all unsatisfiable concepts need to be calculated firstby using a standard DL reasoner such as Pellet <cit.>, and then iterates on all of these concepts (see lines 4-5). For each unsatisfiable concept, if it is still unsatisfiable in the modified ontology K w.r.t. K_0, all of its R-MUPS will be computed (see lines 6-9). We use a variable k to control the number of unsatisfiable concepts to be dealt with (see Line 10). If k reaches the predefined length n, all local R-MIPS w.r.t. 
the n unsatisfiable concepts will be computed based on all found R-MUPS (see lines 11-12). Once a set of local R-MIPS is obtained, a local diagnosis can be computed by invoking the algorithm(i.e., Algorithm <ref>), and the global diagnosis D should be updated by adding all elements in D' (see lines 13-14). Afterwards, K needs to be updated by removing all axioms in D', and the set of R-MUPS ℳ𝒰 and the counter k should be reset (see lines 15-17).When all unsatisfiable concepts in UC have been checked, the “for" loop will be terminated.Outside the loop, ℳ𝒰may not be empty as less than n unsatisfiable concepts may have not been resolved. in such a case, all local R-MIPS are computed based on ℳ𝒰, and a local diagnosis is calculated (see lines 18-20). The global diagnosis D should be updated again (see Line 21).Finally, after removing redundant axioms (see Lines 23-26), aminimal diagnosis G is obtained, and removing all axioms in G from K will regain coherence. § EXPERIMENTS In this section, we first introduce the data set and experimental settings, and then provide experimental results. It should be noted that all algorithms were implemented with OWL API[<http://owlcs.github.io/owlapi/>] in Java. The functionality of computing vectors for sentenceswas implemented in Python,and the pre-trained model BERT was applied to calculate vectors (see the introduction in Section <ref>). To perform standard reasoning tasks, the widely used DL reasoner Pellet <cit.> was selected.In addition, according to our experience, the parameter k in the definition of sim_euc (see Definition <ref>) was set to be 15 for our experiments, and the threshold t in the definition of similarity between an axiom set and an axiom (see Definition <ref>) was assigned to be 0.5.All implementations together with our data set and experimental results are available online[<https://github.com/QiuJi345/ontRevision>]. §.§ Data setThe data set consists of two groups of single ontologies (see Table <ref>). One group comes from the conference track on the platform of Ontology Alignment Evaluation Initiative (OAEI) [<http://oaei.ontologymatching.org/2021/conference/index.html>], which started in 2004 and is to evaluateontology matching systems from various dimensions. In this group, a rebuttal ontology (see ontologies from M0 to M9) is an alignment between two single ontologies, and its full name is named with the names of a matching system and two single ontologies. A reliable ontology (see ontologies from O0 to O6) combines two single ontologies. For example, the reliable ontologycombines single ontologiesand . The rebuttal ontologyindicates the alignment betweenandwhich is generated by the matching system<cit.>. Similarly, other rebuttal ontologies are produced by ontology matching systems<cit.>,Lily<cit.>,OTMapOnto <cit.> and TOM<cit.> separately. According to the matching results provided by OAEI 2021, 16 systems have participated the conference track, and the alignments produced by half of them contain unsatisfiable concepts. Among the 42 incoherent alignments, we chose 10 incoherent alignments and obtained 10 ontology pairs (see pairs from OM0 to OM9 in Table <ref>) with different numbers of unsatisfiable concepts and various sizes of axioms. The other group comes from the consistent but incoherent ontologythat was learned on a text corpus consisting of the abstracts from the “knowledge management" information space of the BT Digital Library <cit.>. 
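Coming back to the redundancy check shared by Algorithm <ref> and Algorithm <ref>, the pruning of a diagnosis can be sketched as follows. This is again an illustrative Python fragment under simplifying assumptions: the R-MIPS are given as plain sets of axioms, and each axiom of the diagnosis is re-tested to see whether it is still needed to hit all of them.

def remove_redundant(diagnosis, all_mips):
    # Drop every axiom whose removal still leaves at least one selected
    # axiom in each R-MIPS, as in the {a_1, a_2} example discussed above.
    pruned = set(diagnosis)
    for axiom in sorted(diagnosis):
        candidate = pruned - {axiom}
        if all(candidate & set(m) for m in all_mips):
            pruned = candidate
    return pruned

For the example with R-MIPS {a_1, a_2} and {a_1, a_3} and the diagnosis {a_1, a_2}, this check returns {a_1}, discarding the redundant axiom a_2.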
The original ontologycontains more than 10,000 axioms and 1,000 unsatisfiable concepts. It is very challenging to resolve all unsatisfiable concepts. In our experiments, 10 coherent sub-ontologies were extracted (see ontologies from km0 to km8 in Table <ref>), and 9 ontology pairs were constructed for revision (see pairs from KM0 to KM9 in Table <ref>). For each pair, the combination of its contained ontologies is incoherent.Comparing the two groups of single ontologies or ontology pairs, it can be observed that each rebuttal ontology in the first group mainly contains axioms representing equivalent classes (i.e., EquClass in Table <ref>), and no more than 50 axioms are included in such an ontology. The corresponding ontology pairs usually contain less than 1,000 axioms in total, but the number of contained unsatisfiable concepts varies from 5 to 84.Each ontology pair in the second group contains 2,000 axioms. Such ontologies only contain subsumptions (i.e.,in Table <ref>), axioms representing equivalent classes, domain and range. Their expressivity is 𝒜ℒ𝒞. They are much less expressive than all ontology pairs in the first group, and vary greatly in number of unsatisfiable concepts (ranging from 4 to 120).§.§ Experimental SettingsAll experiments were performed on a laptop with 1.99 GHz Intel^® Core^TM CPU and 16GB RAM, using a 64-bit Windows 11 operating system. A time limit of 1,000 seconds is set to compute R-MUPS for an unsatisfiable concept or compute a diagnosis. A black-box algorithm implemented in <cit.> was exploited to compute R-MUPS or R-MIPS.We evaluate our two revision algorithms by using different subset extraction strategies configured in the following ways. * : The two extraction strategies indicate ranking axioms by the scoring function score_mipsUnion (see Definition <ref>) with similarity measures sim_cos and sim_euc(see Definition <ref> and Definition <ref>) separately, and then selecting axioms with the highest score from each R-MIPS. * : The two extraction strategies represent ranking axioms by the scoring function score_mips (see Definition <ref>) with similarity measures sim_cos and sim_euc separately, and then selecting axioms with the highest score from each R-MIPS. * : The two extraction strategies indicate ranking axioms by the scoring function score_rebuttalOnt (see Definition <ref>) with similarity measures sim_cos and sim_euc separately,and then selecting axioms with the highest score from each R-MIPS. * : The two extraction strategies indicate ranking axioms by the scoring function score_reliableOnt (see Definition <ref>) with similarity measures sim_cos and sim_euc separately, and then selecting axioms with the lowest score from each R-MIPS. Additionally, four existing extraction strategies below were chosen to compare with ours because they were frequently used in the existing works <cit.>. * : This is a baseline strategy to compute a diagnosis based on all R-MIPS directly without extracting subsets <cit.>. Since the diagnosis found by this strategy is minimal regarding to all inputted R-MIPS, it is not necessary to check redundancy of axioms. * : The strategy ranks axioms with their frequency <cit.>. Namely, for an axiom, its frequency is the number of R-MIPS containing it. The strategy chooses the axioms with the highest frequency from each R-MIPS. * : This strategy assigns a penalty to an axiom α in a R-MIPS, where the penalty is inversely proportional to the size of a R-MIPS whereα is contained <cit.>. 
Thus, the strategy regards those axioms with the highest penalty score in a R-MIPS as candidates to remove. * : It is a signature-based strategy that originally ranks an axiom by summing the reference counts in other axioms for all entities appearing in the axiom <cit.>. An entity can be a class name, a property name or an individual name.In this paper, we compute the reference counts for an axiom based on all axioms in a reliable ontology. Since it is often desired to have a reliable ontology whose axioms are more relevant to some extent, this strategy selects those axioms with the lowest score from each R-MIPS.§.§ Experimental Results In this section, we first present the results about the preparation about computing vectors, R-MUPS and R-MIPS. We then describe the results about revision based on all R-MIPS and local R-MIPS. Finally, a brief discussion of all experimental results is provided. §.§.§ Results about preparationBefore revising a rebuttal ontology by applying one of our semantics-based scoring function, the vectors of all axioms in an ontology needs to be calculated first. This process can be done offline. Table <ref> provides the time to compute vectors for a single ontology. From the table, we can obviously observe that the efficiency of computing vectors mainly relies on the number of axioms. Eachontology contains 1000 axioms, and thus they spent similar time (i.e., about 6 seconds). For ontologies O2 and O4, they also contain around 1000 axioms and took similar time to finish the computation of vectors. As for the ontologies obtained by translating ontology alignments, they usually include no more than 50 axioms and each of them took no more than 0.5 seconds.§.§.§ Results about computing all R-MIPSIn this section, we present the results about all R-MIPSin a rebuttal ontology with respect to its corresponding reliable ontology together with R-MUPS information, since all R-MIPS are obtained based on all R-MUPS of all unsatisfiable concepts. Table <ref> gives the details about the results of R-MUPS and R-MIPS. The 2nd column displays the average number of R-MUPS per unsatisfiable concept. The 3rd and 4th columns present the maximal and minimal number of R-MUPS separately. The columns from 6 to 8 provide average, maximal and minimal sizes of a R-MIPS. The last column describes the consumption time to compute all R-MIPS for an ontology, which includes the time to compute all R-MUPS for the considered concepts.In the last column, the time in bold means the computation of all R-MIPS cannot be finished successfully within limited memory. In such a case, we execute the code again to find all R-MUPS for the remaining unsatisfiable concepts. In this way, all R-MIPS could be calculated based on all R-MUPS of all unsatisfiable concepts, and the revision algorithm based on all R-MIPS could be applied. This process may be performedmany times until all R-MUPS for all unsatisfiable concepts in a rebuttal ontology were found. Specifically, the code was performed twice for(or ) and the total time is more than 300 seconds. Forand , the process was repeated for six and seven times separately. It took more than 900 seconds to finish the computation for each of them.From Table <ref>, we first observe that:Although aontology pair often contains much more unsatisfiable concepts than anpair,each unsatisfiable concept in it has fewer R-MUPS on average, and its maximal number of R-MUPS is no more than 3. While for 60%pairs, the maximal number of R-MUPS is larger than 3. 
For instance, an unsatisfiable concept inhas 23 R-MUPS at most, and 15 for.That's one main reason causing that anpair spent much more time than apair to compute all R-MIPS. Take pairsandas examples. It tookabout 200 seconds while no more than 40 seconds for .Except for the maximal or average number of R-MUPS, the expressivity, total size of a pair and number of unsatisfiable concepts are alsomain reasons to influence the efficiency of computing R-MIPS. For example, it took more time for the pairs -than the pairs -, because they contain more axioms and unsatisfiable concepts. §.§.§ Results about ontology revision based on all R-MIPSIn this section, we evaluate our revision algorithms based on R-MIPS by comparing with existing ranking strategies or existing algorithms. Specifically, eight revision algorithms based on pre-trained models are obtained from Algorithm <ref> by applying different scoring functions defined in this paper and similarity measures. Another four revision algorithms are obtained by replacing the extraction strategy in Algorithm <ref> with , ,and .For simplicity, each revision algorithm is named by its extraction strategy in this section.It should be noted that, when applying the extraction strategyto Algorithm <ref> proposed in this paper, we ignore subset extraction and redundancy checking as a diagnosis is computed based on all R-MIPS directly and no redundant axioms exist. This algorithm is actually the same as the revision algorithm given in <cit.>.In addition, when applyingto Algorithm <ref>, the obtained algorithm can be seen as an enhanced version of the revision algorithm given in <cit.>. The main difference is that the original revision algorithm in <cit.> applies a hitting set tree algorithm <cit.> to compute a diagnosis while ours uses ILP due to its high efficiency <cit.>. All of these algorithms were evaluated with respect to the efficiency and number of removed axioms (see Figure <ref>). The time presented in Figure <ref> does not include the time to calculate all R-MIPS since these algorithms are all based on all R-MIPS and we only compare their difference.From Figure <ref> we can see the algorithms are all efficient to compute a diagnosis based on all R-MIPS, and usually spent no more than 14 seconds. Especially,is the most efficient one and often spent less than 0.2 seconds since it is not necessary to check redundancy of axioms. In fact, except , all algorithms spent most of their time checking redundancy of axioms.Among the four existing algorithms,is more time-consuming than others. It needs to compute reference counts for an axiom to be ranked based on all axioms in a reliable ontology, while others rank an axiom only based on all R-MIPS which usually involve fewer axioms. Among our eight algorithms, , ,andoften outperform others since fewer axioms need to be considered when ranking axioms. Comparing two similarity measures sim_cos and sim_euc, the algorithms with the same extraction strategy but different similarity measures behave similarly.Comparing our algorithms with existing ones, ours usually took a little more time, especially the ones considering a rebuttal or reliable ontology, since they need to spent more time to compute similarities. Takeas an example. Existing algorithms ,andtook about 600 milliseconds, our algorithms took around 1100 milliseconds.According to the number of removed axioms shown in Figure <ref>, it can be observed that different revision algorithms removed similar number of axioms, especially forontologies. 
It should be noted thatremoved slightly more axioms than other existing algorithms.For example,removed 10, 12 and 21 axioms for ontology pairs , and , while other existing algorithms removed 8, 8 and 12 axioms separately. We also present the number of redundant axioms found by each algorithm in Figure <ref>. In this figure, not all ontology pairs or extraction strategies are shown as no redundant axioms were found for such a case. For example, all algorithms did not find any redundant axioms for ontology pairs fromto , andanddid not produce redundancy for all tested ontology pairs.From this figure, we can observe that usually no more than 2 redundant axioms were found, and a revision algorithm is easily to produce redundancy for ontology pairwhich contains much more R-MIPS and is the most challenging one to be revised. For , , , ,andfound more than 6 redundant axioms.§.§.§ Ontology revision results based on local R-MIPS In this section, we discuss the experimental results of the adapted revision algorithms obtained by using various extraction strategies. Namely, an adapted algorithm is obtained by replacing the subset extraction strategy in Algorithm <ref> with one the strategies mentioned in Section <ref>. Similar to the revision algorithms based on all R-MIPS, an adapted algorithm is also named by the name of its extraction strategy for simplicity. We first conducted an experiment by setting the step length to 10. We selected those ontology pairs with more than 30 unsatisfiable concepts since computing all R-MIPS for such a pair may be more challenging. In addition, although ontology paircontains 84 unsatisfiable concepts, we did not test it in this experiment because all adapted algorithms failed to compute a diagnosis for it within limited memory. In this way, eight ontology pairs were selected (see Figure <ref>).The time presented in this figure includes both the time to compute local R-MIPS and the time to compute a diagnosis. From Figure <ref>, we obtain the following main observations: *still cannot perform well within the adapted revision framework. Take ontology pairas an example.took about 50 seconds and removed 12 axioms in total, while nearly all other algorithms removed no more than 9 axioms within 40 seconds. * The adapted algorithm with baseline extraction strategymay not always remove cardinality-minimal axioms. For instance, when revising ontology pair ,removed 6 axioms while our algorithmremoved 5 axioms. It is mainly caused by the fact that each final diagnosis consists of multiple local diagnoses, and different adapted algorithms may have distinct local diagnoses. Although each final diagnosis is minimal, the baseline adapted algorithmmay not find a cardinality-minimal diagnosis. * Comparing the number of explained unsatisfiable concepts in this figure with the total number of unsatisfiable concepts given in Table <ref>, the adapted revision algorithms only need to explain much fewer unsatisfiable concepts than the algorithms based on all R-MIPS. For all of these selected ontology pairs, no more than 30 unsatisfiable concepts for each pair need to be explained. * Comparing the number of removed axioms in this figure and that in Figure <ref>, there is no obvious difference. For instance, all revision algorithms removed 8 axioms for , and no more than 9 axioms were removed by the adapted ones. It shows that the adapted algorithms do not cause too much information loss. 
* Since computing R-MUPS and R-MIPS is the most resource-consuming step in a revision process, we compare the revision time in Figure <ref> with the explanation time of computing all R-MIPS in Table <ref>. It can be obviously seen that the adapted algorithms are much more efficient. For example, it took nearly 1000 seconds to revisebased on all R-MIPS while no more than 100 seconds for the adapted algorithms. Namely, about 90% time was saved. It needs to be mentioned that the explanation time may be influenced by many factors such as number of unsatisfiable concepts,number and size of R-MUPS per unsatisfiable concept and the expressivity (see the analysis in <cit.>). Furthermore, to see the performance of the adapted algorithms with different step lengths, we chose the most challenging ontology pairs ,andto test, and varied the step length from 5 to 10. The consumption time and number of removed axioms are presented in Figure <ref>. In this figure, we do not show the number of removed axioms for those ontology pairs that an algorithm failed to finish their revision processes within limited memory, and set their consumption time to be 300 seconds. Additionally, we did not test the algorithms with sim_euc as a similarity measure since there is no big difference between sim_euc and sim_cos.From Figure <ref>, we obtain the following observations: * For each ontology pair, the revision algorithms with the same configuration but different step lengths removed similar number of axioms. For example, when the steps range from 5 to 10,removed 25 or 26 axioms for , andremoved 15 axioms. This reflects that the changing step length leads to a slight effect on the number of axioms removed. * The greater the step length, the more difficult it is to revise an ontology pair. Take the most challenging ontology pairas an example. All revision algorithms successfully revised this pair when the step length n is less than 9. When n=9, , ,andfailed. When n=10, only ,andcan revise the ontology pair successfully. * The consumption time increases with the increase of the step length for the adapted algorithms with the same configuration but different step lengths. This is mainly caused by the computation of R-MUPS. Since computing R-MUPS is the most time-consuming step during a revision process, the consumption time varies when the number of all found R-MUPS varies (see the third figure in Figure <ref>). * Comparing our four algorithms,is the most efficient one and removes the least number of axioms. Take the ontology pairas an example. When the step length is 8,spent 55 seconds and removed 15 axioms, but other three algorithms spent more than 85 seconds and removed more than 17 axioms.This reflects that removing different axioms may affect the efficiency of a revision algorithm to a great extent. §.§.§ Discussion of Experimental Results Based on the analysis given in the sections from <ref> to <ref>, we provide a brief discussion here to help readers grasp the main conclusions of our experiments and summary guidelines to choose different revision algorithms.Firstly, the adapted algorithms are much more efficient than those considering all R-MIPS. Furthermore, although the adapted algorithms are based on local R-MIPS and may remove more axioms, the difference is minor. Of course, the adapted algorithms may fail to revise a rebuttal ontology with too many R-MUPS for some unsatisfiable concepts either. In such cases, it is not suitable to revise ontologies based on all R-MIPS or local R-MIPS. 
This problem will be studied in the future.Secondly, our adapted algorithms with different step lengths present promising results, especially the algorithm based on a reliable ontology . For the adaptedalgorithms,a suitable step length is critical, and it may be varied when the tested ontologies change. According to our observations, a step length could be set according to the expressivity, number of unsatisfiable concepts and size of an ontology pair. For example, if the considered ontologies are expressive or many unsatisfiable concepts are involved, a lower length is preferred. Thirdly, for the adapted revision algorithms with the same configuration but different extraction strategies, they may remove similar number of axioms while the removed axioms are different. This may cause that distinct unsatisfiable concepts need to be explained, and the efficiency of an revision algorithm will be influenced accordingly. Finally, the users could select an extraction strategy according toits efficiency, number of removed axioms, the characteristics of ontologies, or semantics. For example, if a user prefers to choose an efficient algorithm considering semantics, or keep those rebuttal axioms that are more relevant to the reliable ones,orcould be selected. If those axioms that are less relevant to the axioms in R-MIPS are preferred to be kept, , ,orare good choices. If removing less axioms is the most important thing,is recommended. § RELATED WORKS In this section, we discuss the related works on ontology revision and ontology mapping revision. §.§ Ontology revision approachesOntology revision approaches can be generally divided into automatic and interactive approaches. For those theoretical ones like <cit.>, revision works without considering logical conflicts such as <cit.>, and axiom-weakening approaches <cit.>, we recommend readers to read the relevant references.Automatic ontology revision approaches resolve incoherence or inconsistency without people's participation.The authors in <cit.> proposed a kernel revision operator to deal with incoherence, and assigned scores to axioms by considering their weights or frequencies in all R-MIPS.Similarly, the authors in <cit.> also defined a kernel revision operator and incision functions, but they dealt with inconsistency and exploited trust information to choose axioms.To improve the efficiency of revising ontologies, the authors in <cit.> focused on DL-Lite, and converted DL-Lite ontologies into graphs. They computed the MIPS from minimal incoherence-preserving path-pairs, and scored an axiom according to its logical closure. The work in <cit.> also followed the idea of defining a kernel revision operator, but its novelty lies in considering a partial order of axioms to stratify axioms and selecting axioms based on integer linear programming.In addition, there are some works to revise ontologies by defining new semantics. For instance, the authors in <cit.> focused on DL-Lite ontologies and defined type semantics instead of standard DL semantics.In this paper, we also exploit a kernel revision operator like existing works. One main difference is that we consider semantic similarity between axioms to define incision functions based on a pre-trained model. 
The other main difference is that we design various revision algorithms considering the semantic similarity, especially the adapted ones which are trade-off to balance the revision of all unsatisfiable concepts at one time or one by one.An interactive ontology revision approach needs the participation of users to make some decisions. According to our investigation, most of the existing ontology revision approaches focus on resolving incoherence or inconsistency automatically, and few of them proposed interactive algorithms. One typical interactive ontology revision approach was introduced in <cit.>.This work separated a DL knowledge base into two parts: one including the axioms that should be inferred (marked as M_1) and the other containing those that cannot be inferred(marked as M_2). Its goal is to find a complete and consistent revision state which makes any axiom in M_2 cannot be inferred by M_1. The revision process is interactive and displays one unlabeled axiom each time to a user for deciding whether to accept it or not.This process is repeated until all unlabeled axioms are labeled.The scoring function in this work was defined as the number of axioms in an ontology that can be inferred by a specific axiom set.It relies on logical reasoning which is always resource-consuming. Our scoring functions are based on pre-trained models and independent of any logical reasoner. The most resource-consuming part of them is to calculate vectors by using a pre-trained model which can be done offline.§.§ Ontology mapping revision approachesOntology revision is closely related to ontology mapping revision. Most approaches of ontology revision can be applied to revise ontology mappings. However, ontology mappings have their own characteristics so that various approaches to revising ontology mappings have been designed. Such approaches can also be divided into automatic and interactive ones. We mainly focus on those works where a scoring function is defined.Among the works about interactive ontology mapping revision, the authors in <cit.> employed a reasoning-based approach to locating the incoherence of mappings and defined a bridge rule function to rank the impacts of mappings, which can reduce the number of decisions made by an expert.The work in <cit.> followed the approach proposed in the work of interactive ontology revision in <cit.>. One main difference is that the work in <cit.> focused on DL-Lite ontologies and transferred them into a graph for improving efficiency. Another main difference is that the scoring function defined in <cit.> considered both the number of mapping arcs in a specific set and the weights of mappings.As for the approaches of automatic ontology mapping revision, an early work was proposed in <cit.> which defined a conflict-based revision operator and designed two specific algorithms to instantiate this operator. One algorithm stratified the axioms in an alignment according to their weights, and the other utilized a signature-relevance selection function to distinguish axioms.The ontology matching system LogMap <cit.> exploited horn propositional logic to model unsatisfiable concepts and the incoherence of mappings, and repaired mappings by removing one axiom with the lowest weight from each MIPS. The matching system AMLR <cit.> also removed axioms according to their weights and other heuristics.Some other ontology matching systems like ELog <cit.> and PDLMV <cit.> employed probabilistic reasoning techniques based on the weights of mappings. 
Obviously, the ranking strategies used in these existing works mainly depend on weights, logical reasoning, trust information or rankings obtained according to ontology syntax. Few of them consider the semantics of axioms. This problem has been addressed in our previous work <cit.>, but it only considers repairing a single ontology. This work deals with the task of ontology revision so that different scoring functions and algorithms were proposed.§ CONCLUSION AND FUTURE WORKSIn this paper, we first defined four scoring functions to rank an axiom in R-MIPS by considering its semantic relationship with axioms in R-MIPS, all rebuttal axioms or all reliable axioms. The semantic relationship between two axioms is measured by the similarity between their corresponding vectors.We then proposed a pre-trained model-based ontology revision algorithm by considering all R-MIPS, and then an adapted algorithm was designed to deal with those challenging ontology pairs that it is hard to compute all of their R-MIPS within limited resources. The adapted algorithm relies on local R-MIPS computed based on all R-MUPS of some unsatisfiable concepts. We implemented our algorithms and evaluated them with 19 ontology pairs coming from real-life ontologies. The experimental results reflect that the adapted algorithms are very efficient and could save at most about 90% of the time for some tested ontology pairs. Among our algorithms based on pre-trained models,outperforms others in many cases.We also provided a brief discussion about the overall experimental results to conclude main observations, and provided several guidelines for users to choose different algorithms.In the future, we first plan to study how to deal with the cases that an unsatisfiable concept in a rebuttal ontology contains too many R-MUPS. In such a case, it is hard to compute all R-MUPS within limited time and memory, and both algorithms presented in this paper may not be applied. Instead of computing all R-MUPS, we will compute some R-MUPS using a depth-first search strategy or a predefined number of R-MUPS. Secondly, we will evaluate the reasoning ability of some Large Language Models like ChatGPT which is an advanced AI language model developed by OpenAI for natural language processing and conversation <cit.>. Its key advantage lies in its proficiency in understanding and generating human-like text, making it a valuable foundation for a wide range of natural language processing tasks and applications. Thus, we will explore the reasoning capability of ChatGPT and employ it to select axioms for removing. § ACKNOWLEDGEMENTSThis work was partially supported by the CACMS Innovation Fund(CI2021A00512), the Fundamental Research Funds for the Central Universities, JLU, the Natural Science Foundation of China grants (U21A20488, U19A2061, 42050103 and 62076108) and the Fundamental Research Funds for the Central Public Welfare Research Institutes undergrant(ZZ140319-W). plain 107 Zhu23 Xixi Zhu, Bin Liu, Li Yao, Zhaoyun Ding, Cheng Zhu. TGR: Neural-symbolic ontological reasoner for domain-specific knowledge graphs. Appl. Intell. 53(20): 23946-23965, 2023 Revello23 Jorge Rodriguez-Revello, Cristobal Barba-Gonzalez, Maciej Rybinski, and Ismael Navas-Delgado. KNIT: Ontology reusability through knowledge graph exploration. Expert Syst. Appl. 228: 120239, 2023 NiuLPZ22 Ke Niu, You Lu, Xueping Peng, and Jingni Zeng. Fusion of sequential visits and medical ontology for mortality prediction. J. Biomed. 
Informatics, 127:104012, 2022.RuckhausASC23 Edna Ruckhaus, Adolfo Anton-Bravo, Mario Scrocca, and Óscar Corcho. Applying the LOT Methodology to a Public Bus Transport Ontology aligned with Transmodel: Challenges and Results. Semantic Web, 14(4):639-657, 2023.BunnellOY21 Lawrence Bunnel, Kweku-Muata Osei-Bryson, and Victoria Y. Yoon. Development of a consumer financial goals ontology for use with FinTech applications for improving financial capability. Expert Syst. Appl., 165:113843, 2021.JiPCMY22 Shaoxiong Ji, Shirui Pan, Erik Cambria, Pekka Marttinen, and Philip S. Yu. A Survey on Knowledge Graphs: Representation, Acquisition, and Applications. IEEE Trans. Neural Networks Learn. Syst., 33(2):494-514, 2022.JohannaESWC10 Johanna Völker, and Mathias Niepert. Statistical Schema Induction. In Proceedings of the 8th Extended Semantic Web Conference, pages 124-138. Springer, 2011. LemboRSST17 Domenico Lembo, Riccardo Rosati, Valerio Santarelli, Domenico Fabio Savo, and Evgenij Thorstensen. Mapping Repair in Ontology-based Data Access Evolving Systems. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI), pages 1160-1166, 2017.JiSeu20 Qiu Ji, Guilin Qi, and Khaoula Boutouhami. Revision of Stratified OWL Ontologies based on Linear Integer Programming. Journal of Southeast University (English Version), 36(1):1-7, 2020. YuKbs21 Yu Zhang, Ruxian Yao, Dantong Ouyang, Jinfeng Gao, Fang Liu. Debugging incoherent ontology by extracting a clash module and identifying root unsatisfiable concepts. Knowl. Based Syst. 223: 107043, 2021. JiAs23 Qiu Ji, Guilin Qi, Yinkai Yang, Weizhuo Li, Siying Huang, and Yang Sheng. An Embedding-Based Approach to Repairing OWL Ontologies. Journal of Applied Sciences, 12:12655, 2022. LiEswc23 Ying Li, Patrick Lambrix. Repairing EL Ontologies Using Weakening and Completing. Proceedings of the 20th Extended Semantic Web Conference (ESWC), pages 298-315, 2023. li2023graph Weizhuo Li, Qiu Ji, Songmao Zhang, Xuefeng Fu, and Guilin Qi. A graph-based method for interactive mapping revision in DL-Lite. Expert Syst. Appl., 211:118598, 2023. JiLZQL22 Qiu Ji, Weizhuo Li, Shiqi Zhou, Guilin Qi, and Yuan-Fang Li. Benchmark construction and experimental evaluations for incoherent ontologies. Knowl. Based Syst., 239:108090, 2022.fu2016graph Xuefeng Fu, Guilin Qi , Yong Zhang, and Zhangquan Zhou. Graph-based approaches to debugging and revision of terminologies in DL-Lite. Knowl. Based Syst., 100:1-12, 2016. qi2008kernel Guilin Qi, Peter Haase, Zhisheng Huang, Qiu Ji, Jeff Z. Pan, and Johanna Völker. A Kernel Revision Operator for Terminologies - Algorithms and Evaluation. In Proceedings of the 7th International Semantic Web Conference (ISWC), pages 419-434. Springer, 2008.golbeck2009trust Jennifer Golbeck, and Christian Halaschek-Wiener. Trust-based Revision for Expressive Web Syndication. J. Log. Comput., 19(5):771-790, 2009.nikitina2012interactive Nadeschda Nikitina, Sebastian Rudolph and Birte Glimm. Interactive ontology revision. Journal of web semantics, 12: 118-130, 2012. logmap Ernesto Jiménez-Ruiz, and Bernardo Cuenca Grau. LogMap: Logic-Based and Scalable Ontology Matching. In Proceedings of the 10th international semantic web conference (ISWC), pages 273-288. Springer, 2011.amlr Emanuel Santos, Daniel Faria, Catia Pesquita, and Francisco M. Couto. Ontology alignment repair through modularization and confidence-based heuristics. CoRR, abs/1307.5322, 2013.elog Jan Noessner, and Mathias Niepert. ELOG: A Probabilistic Reasoner for OWL EL. 
In Proceedings of the 5th international conference on web reasoning and rule systems (RR), pages 281-286. Springer, 2011.pdlmv Weizhuo Li, and Songmao Zhang. Repairing mappings across biomedical ontologies by probabilistic reasoning and belief revision. Knowl. Based Syst., 209:106436, 2020.QiuSun20 Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, Xuanjing Huang. Pre-trained Models for Natural Language Processing: A Survey. CoRR abs/2003.08271,2020. ptmSurvey2020 Xipeng Qiu, Tianxiang Sun, Yige Xu, Yunfan Shao, Ning Dai, and Xuanjing Huang. Pre-trained Models for Natural Language Processing: A Survey. CoRR, abs/2003.08271, 2020.He22aaai Yuan He, Jiaoyan Chen, Denvar Antonyrajah, and Ian Horrocks. BERTMap: A BERT-Based Ontology Alignment System. In Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI), pages 5684-5691. AAAI Press, 2022.ma23bert Zuyang Ma, Kaihong Yan, and Hongwei Wang. BERT-based Question Answering using Knowledge Graph Embeddings in Nuclear Power Domain. In Proceedings of the 26th International Conference on Computer Supported Cooperative Work in Design (CSCWD), pages 267-272. IEEE, 2023.dl-lite Diego Calvanese, Giuseppe De Giacomo , Domenico Lembo, Maurizio Lenzerini, and Riccardo Rosati. Tractable Reasoning and Efficient Query Answering in Description Logics: The DL-Lite Family. J. Autom. Reason., 39(3):385-429, 2007.schlobach2003non Stefan Schlobach, and Ronald Cornet. Non-Standard Reasoning Services for the Debugging of Description Logic Terminologies. In Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI), pages 355-362. Morgan Kaufmann, 2003.HousseinMA21 Essam H. Houssein, Rehab E. Mohamed, and Abdelmgeid A. Ali. Machine Learning Techniques for Biomedical Natural Language Processing: A Comprehensive Review. IEEE Access, 9:140628-140653, 2021. Bhargava022 Prajjwal Bhargava, and Vincent Ng. Commonsense Knowledge Reasoning and Generation with Pre-trained Language Models: A Survey. In Proceedings of the 36th AAAI Conference on Artificial Intelligence (AAAI), pages 12317-12325. AAAI Press, 2022.devlin2018bert Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. CoRR, abs/1810.04805, 2018.AndroutsopoulosLG13 Ion Androutsopoulos, Gerasimos Lampouras, and Dimitrios Galanis. Generating Natural Language Descriptions from OWL Ontologies: the NaturalOWL System. J. Artif. Intell. Res., 48:671-715, 2013.Ji19access Qiu Ji, Khaoula Boutouhami, and Guilin Qi. Resolving Logical Contradictions in Description Logic Ontologies Based on Integer Linear Programming. IEEE Access, 7:71500-71510, 2019.SirinPGKK07 Evren Sirin, Bijan Parsia, Bernardo Cuenca Grau, Aditya Kalyanpur, and Yarden Katz. Pellet: A practical OWL-DL reasoner. J. Web Semant., 5(2):51-53, 2007. ALOD2Vec Jan Portisch, and Heiko Paulheim. ALOD2Vec matcher results for OAEI 2021. In Proceedings of the 16th International Workshop on Ontology Matching co-located with the 20th International Semantic Web Conference (ISWC), pages 117-123. CEUR-WS.org, 2021.GMap Weizhuo Li, Shiqi Zhou, Qiu Ji, and Bingjie Lu. GMap results for OAEI 2021. In Proceedings of the 16th International Workshop on Ontology Matching co-located with the 20th International Semantic Web Conference (ISWC), pages 152-159. CEUR-WS.org, 2021.Lily Shiyi Zuo, Jiajun Liu, Zherui Yang, Yunyan Hu, and Peng Wang. Lily results for OAEI 2021. 
In Proceedings of the 16th International Workshop on Ontology Matching co-located with the 20th International Semantic Web Conference (ISWC), pages 167-174. CEUR-WS.org, 2021.OTMapOnto Yuan An, Alexander Kalinowski, and Jane Greenberg. OTMapOnto: optimal transport-based ontology matching. In Proceedings of the 16th International Workshop on Ontology Matching co-located with the 20th International Semantic Web Conference (ISWC), pages 185-192. CEUR-WS.org, 2021.TOM Daniel Kossack, Niklas Borg, Leon Knorr, and Jan Portisch. TOM matcher results for OAEI 2021. In Proceedings of the 16th International Workshop on Ontology Matching co-located with the 20th International Semantic Web Conference (ISWC), pages 193-198. CEUR-WS.org, 2021.text2onto Philipp Cimiano, and Johanna Völker. Text2Onto. In Proceedings of 10th International Conference on Applications of Natural Language to Information Systems (NLDB), pages 227-238. Springer, 2005.TeymourlouieZNT18 Mehdi Teymourlouie, Ahmad Zaeri, Mohammadali Nematbakhsh, Matthias Thimm, and Steffen Staab. Detecting hidden errors in an ontology using contextual knowledge. Expert Syst. Appl., 95:312-323, 2018.KalyanpurPSG06 Aditya Kalyanpur, Bijan Parsia, Evren Sirin, and Bernardo Cuenca Grau. Repairing Unsatisfiable Concepts in OWL Ontologies. In Proceedings of the 3rd European Semantic Web Conference (ESWC), pages 170-184. Springer, 2006. Ji14kbs Qiu Ji, Zhiqiang Gao, Zhisheng Huang, and Man Zhu. Measuring effectiveness of ontology debugging systems. Knowl. Based Syst., 71:169-186, 2014. reiter87 Raymond Reiter, A theory of diagnosis from first principles, Artif. Intell., 32(1): 57?95, 1987.ribeiro2021revising Jandson S. Ribeiro, Ricardo Guimarães, and Ana Ozaki. Revising Ontologies via Models: The ALC-formula Case. CoRR, abs/2108.12331, 2021.cardoso2018supporting Silvio Domingos Cardoso, Cédric Pruski, and Marcos Da Silveira. Supporting biomedical ontology evolution by identifying outdated concepts and the required type of change. J. Biomed. Informatics, 87:1-11, 2018.PesquitaC12 Catia Pesquita, and Francisco M. Couto. Predicting the Extension of Biomedical Ontologies. PLoS Comput. Biol., 8(9), 2012.MicalizioP18 Roberto Micalizio, and Gian Luca Pozzato. Revision of Ontologies to Accommodate Exceptions: a Typicality-based Approach. Fundam. Informaticae, 161(1-2):163-189, 2018. ZhuangWWQ16 Zhiqiang Zhuang, Zhe Wang, Kewen Wang, and Guilin Qi. DL-Lite Contraction and Revision. J. Artif. Intell. Res., 56:329-378, 2016.wang2015instance Zhe Wang, Kewen Wang, Zhiqiang Zhuang, and Guilin Qi. Instance-Driven Ontology Evolution in DL-Lite. In Proceedings of the 29th AAAI Conference on Artificial Intelligence, pages 1656-1662. AAAI Press, 2015.Meilicke08mapping Christian Meilicke, Heiner Stuckenschmidt, and Andrei Tamilin. Supporting Manual Mapping Revision using Logical Reasoning. In Proceedings of the 23rd AAAI Conference on Artificial Intelligence (AAAI), pages 1213-1218. AAAI Press, 2008.qi2009conflict Guilin Qi, Qiu Ji, and Peter Haase. A Conflict-Based Operator for Mapping Revision. In Proceedings of the 8th International Semantic Web Conference (ISWC), pages 521-536. Springer, 2009.chatgpt Timm Teubner, Christoph M. Flath, Christof Weinhardt, Wil M. P. van der Aalst, and Oliver Hinz. Welcome to the Era of ChatGPT et al. Bus. Inf. Syst. Eng, 65(2): 95-101, 2023.
Using a sample of (10087±44)× 10^6 J/ψ events, which is about fifty times larger than that was previously analyzed, a further investigation on the →γ 3() decay is performed.A significant distortion at 1.84 GeV/c^2 in the line-shape of the 3() invariant mass spectrum is observed for the first time, which is analogous to the behavior of X(1835) and could be resolved by two overlapping resonant structures, X(1840) and X(1880). The new state X(1880) is observed with a statistical significance of 14.7σ. The mass and width of X(1880) are determined to be 1882.1±1.7±0.7 MeV/c^2 and30.7±5.5 ±2.4 MeV, respectively, which indicates the existence of a pp̅ bound state.Observation of the Anomalous Shape of X(1840) in →γ 3() M. Ablikim^1, M. N. Achasov^5,b, P. Adlarson^75, X. C. Ai^81, R. Aliberti^36, A. Amoroso^74A,74C, M. R. An^40, Q. An^71,58, Y. Bai^57, O. Bakina^37, I. Balossino^30A, Y. Ban^47,g, V. Batozskaya^1,45, K. Begzsuren^33, N. Berger^36, M. Berlowski^45, M. Bertani^29A, D. Bettoni^30A, F. Bianchi^74A,74C, E. Bianco^74A,74C, A. Bortone^74A,74C, I. Boyko^37, R. A. Briere^6, A. Brueggemann^68, H. Cai^76, X. Cai^1,58, A. Calcaterra^29A, G. F. Cao^1,63, N. Cao^1,63, S. A. Cetin^62A, J. F. Chang^1,58, T. T. Chang^77, W. L. Chang^1,63, G. R. Che^44, G. Chelkov^37,a, C. Chen^44, Chao Chen^55, G. Chen^1, H. S. Chen^1,63, M. L. Chen^1,58,63, S. J. Chen^43, S. L. Chen^46, S. M. Chen^61, T. Chen^1,63, X. R. Chen^32,63, X. T. Chen^1,63, Y. B. Chen^1,58, Y. Q. Chen^35, Z. J. Chen^26,h, W. S. Cheng^74C, S. K. Choi^11A, X. Chu^44, G. Cibinetto^30A, S. C. Coen^4, F. Cossio^74C, J. J. Cui^50, H. L. Dai^1,58, J. P. Dai^79, A. Dbeyssi^19, R.  E. de Boer^4, D. Dedovich^37, Z. Y. Deng^1, A. Denig^36, I. Denysenko^37, M. Destefanis^74A,74C, F. De Mori^74A,74C, B. Ding^66,1, X. X. Ding^47,g, Y. Ding^35, Y. Ding^41, J. Dong^1,58, L. Y. Dong^1,63, M. Y. Dong^1,58,63, X. Dong^76, M. C. Du^1, S. X. Du^81, Z. H. Duan^43, P. Egorov^37,a, Y. H. Fan^46, Y. L. Fan^76, J. Fang^1,58, S. S. Fang^1,63, W. X. Fang^1, Y. Fang^1, R. Farinelli^30A, L. Fava^74B,74C, F. Feldbauer^4, G. Felici^29A, C. Q. Feng^71,58, J. H. Feng^59, K Fischer^69, M. Fritsch^4, C. Fritzsch^68, C. D. Fu^1, J. L. Fu^63, Y. W. Fu^1, H. Gao^63, Y. N. Gao^47,g, Yang Gao^71,58, S. Garbolino^74C, I. Garzia^30A,30B, P. T. Ge^76, Z. W. Ge^43, C. Geng^59, E. M. Gersabeck^67, A Gilman^69, K. Goetzen^14, L. Gong^41, W. X. Gong^1,58, W. Gradl^36, S. Gramigna^30A,30B, M. Greco^74A,74C, M. H. Gu^1,58, Y. T. Gu^16, C. Y Guan^1,63, Z. L. Guan^23, A. Q. Guo^32,63, L. B. Guo^42, M. J. Guo^50, R. P. Guo^49, Y. P. Guo^13,f, A. Guskov^37,a, T. T. Han^50, W. Y. Han^40, X. Q. Hao^20, F. A. Harris^65, K. K. He^55, K. L. He^1,63, F. H H.. Heinsius^4, C. H. Heinz^36, Y. K. Heng^1,58,63, C. Herold^60, T. Holtmann^4, P. C. Hong^13,f, G. Y. Hou^1,63, X. T. Hou^1,63, Y. R. Hou^63, Z. L. Hou^1, H. M. Hu^1,63, J. F. Hu^56,i, T. Hu^1,58,63, Y. Hu^1, G. S. Huang^71,58, K. X. Huang^59, L. Q. Huang^32,63, X. T. Huang^50, Y. P. Huang^1, T. Hussain^73, N Hüsken^28,36, W. Imoehl^28, N. in der Wiesche^68, J. Jackson^28, S. Jaeger^4, S. Janchiv^33, J. H. Jeong^11A, Q. Ji^1, Q. P. Ji^20, X. B. Ji^1,63, X. L. Ji^1,58, Y. Y. Ji^50, X. Q. Jia^50, Z. K. Jia^71,58, H. J. Jiang^76, P. C. Jiang^47,g, S. S. Jiang^40, T. J. Jiang^17, X. S. Jiang^1,58,63, Y. Jiang^63, J. B. Jiao^50, Z. Jiao^24, S. Jin^43, Y. Jin^66, M. Q. Jing^1,63, T. Johansson^75, X. K.^1, S. Kabana^34, N. Kalantar-Nayestanaki^64, X. L. Kang^10, X. S. Kang^41, M. Kavatsyuk^64, B. C. Ke^81, A. Khoukaz^68, R. Kiuchi^1, R. Kliemt^14, O. 
B. Kolcu^62A, B. Kopf^4, M. Kuessner^4, A. Kupsc^45,75, W. Kühn^38, J. J. Lane^67, P.  Larin^19, A. Lavania^27, L. Lavezzi^74A,74C, T. T. Lei^71,58, Z. H. Lei^71,58, H. Leithoff^36, M. Lellmann^36, T. Lenz^36, C. Li^48, C. Li^44, C. H. Li^40, Cheng Li^71,58, D. M. Li^81, F. Li^1,58, G. Li^1, H. Li^71,58, H. B. Li^1,63, H. J. Li^20, H. N. Li^56,i, Hui Li^44, J. R. Li^61, J. S. Li^59, J. W. Li^50, K. L. Li^20, Ke Li^1, L. J Li^1,63, L. K. Li^1, Lei Li^3, M. H. Li^44, P. R. Li^39,j,k, Q. X. Li^50, S. X. Li^13, T.  Li^50, W. D. Li^1,63, W. G. Li^1, X. H. Li^71,58, X. L. Li^50, Xiaoyu Li^1,63, Y. G. Li^47,g, Z. J. Li^59, Z. X. Li^16, C. Liang^43, H. Liang^71,58, H. Liang^1,63, H. Liang^35, Y. F. Liang^54, Y. T. Liang^32,63, G. R. Liao^15, L. Z. Liao^50, Y. P. Liao^1,63, J. Libby^27, A.  Limphirat^60, D. X. Lin^32,63, T. Lin^1, B. J. Liu^1, B. X. Liu^76, C. Liu^35, C. X. Liu^1, F. H. Liu^53, Fang Liu^1, Feng Liu^7, G. M. Liu^56,i, H. Liu^39,j,k, H. B. Liu^16, H. M. Liu^1,63, Huanhuan Liu^1, Huihui Liu^22, J. B. Liu^71,58, J. L. Liu^72, J. Y. Liu^1,63, K. Liu^1, K. Y. Liu^41, Ke Liu^23, L. Liu^71,58, L. C. Liu^44, Lu Liu^44, M. H. Liu^13,f, P. L. Liu^1, Q. Liu^63, S. B. Liu^71,58, T. Liu^13,f, W. K. Liu^44, W. M. Liu^71,58, X. Liu^39,j,k, Y. Liu^81, Y. Liu^39,j,k, Y. B. Liu^44, Z. A. Liu^1,58,63, Z. Q. Liu^50, X. C. Lou^1,58,63, F. X. Lu^59, H. J. Lu^24, J. G. Lu^1,58, X. L. Lu^1, Y. Lu^8, Y. P. Lu^1,58, Z. H. Lu^1,63, C. L. Luo^42, M. X. Luo^80, T. Luo^13,f, X. L. Luo^1,58, X. R. Lyu^63, Y. F. Lyu^44, F. C. Ma^41, H. L. Ma^1, J. L. Ma^1,63, L. L. Ma^50, M. M. Ma^1,63, Q. M. Ma^1, R. Q. Ma^1,63, R. T. Ma^63, X. Y. Ma^1,58, Y. Ma^47,g, Y. M. Ma^32, F. E. Maas^19, M. Maggiora^74A,74C, S. Malde^69, Q. A. Malik^73, A. Mangoni^29B, Y. J. Mao^47,g, Z. P. Mao^1, S. Marcello^74A,74C, Z. X. Meng^66, J. G. Messchendorp^14,64, G. Mezzadri^30A, H. Miao^1,63, T. J. Min^43, R. E. Mitchell^28, X. H. Mo^1,58,63, N. Yu. Muchnoi^5,b, J. Muskalla^36, Y. Nefedov^37, F. Nerling^19,d, I. B. Nikolaev^5,b, Z. Ning^1,58, S. Nisar^12,l, W. D. Niu^55, Y. Niu ^50, S. L. Olsen^63, Q. Ouyang^1,58,63, S. Pacetti^29B,29C, X. Pan^55, Y. Pan^57, A.  Pathak^35, P. Patteri^29A, Y. P. Pei^71,58, M. Pelizaeus^4, H. P. Peng^71,58, K. Peters^14,d, J. L. Ping^42, R. G. Ping^1,63, S. Plura^36, S. Pogodin^37, V. Prasad^34, F. Z. Qi^1, H. Qi^71,58, H. R. Qi^61, M. Qi^43, T. Y. Qi^13,f, S. Qian^1,58, W. B. Qian^63, C. F. Qiao^63, J. J. Qin^72, L. Q. Qin^15, X. P. Qin^13,f, X. S. Qin^50, Z. H. Qin^1,58, J. F. Qiu^1, S. Q. Qu^61, C. F. Redmer^36, K. J. Ren^40, A. Rivetti^74C, M. Rolo^74C, G. Rong^1,63, Ch. Rosner^19, S. N. Ruan^44, N. Salone^45, A. Sarantsev^37,c, Y. Schelhaas^36, K. Schoenning^75, M. Scodeggio^30A,30B, K. Y. Shan^13,f, W. Shan^25, X. Y. Shan^71,58, J. F. Shangguan^55, L. G. Shao^1,63, M. Shao^71,58, C. P. Shen^13,f, H. F. Shen^1,63, W. H. Shen^63, X. Y. Shen^1,63, B. A. Shi^63, H. C. Shi^71,58, J. L. Shi^13, J. Y. Shi^1, Q. Q. Shi^55, R. S. Shi^1,63, X. Shi^1,58, J. J. Song^20, T. Z. Song^59, W. M. Song^35,1, Y.  J. Song^13, Y. X. Song^47,g, S. Sosio^74A,74C, S. Spataro^74A,74C, F. Stieler^36, Y. J. Su^63, G. B. Sun^76, G. X. Sun^1, H. Sun^63, H. K. Sun^1, J. F. Sun^20, K. Sun^61, L. Sun^76, S. S. Sun^1,63, T. Sun^1,63, W. Y. Sun^35, Y. Sun^10, Y. J. Sun^71,58, Y. Z. Sun^1, Z. T. Sun^50, Y. X. Tan^71,58, C. J. Tang^54, G. Y. Tang^1, J. Tang^59, Y. A. Tang^76, L. Y Tao^72, Q. T. Tao^26,h, M. Tat^69, J. X. Teng^71,58, V. Thoren^75, W. H. Tian^59, W. H. Tian^52, Y. Tian^32,63, Z. F. Tian^76, I. Uman^62B,S. J. Wang ^50, B. 
Wang^1, B. L. Wang^63, Bo Wang^71,58, C. W. Wang^43, D. Y. Wang^47,g, F. Wang^72, H. J. Wang^39,j,k, H. P. Wang^1,63, J. P. Wang ^50, K. Wang^1,58, L. L. Wang^1, M. Wang^50, Meng Wang^1,63, S. Wang^39,j,k, S. Wang^13,f, T.  Wang^13,f, T. J. Wang^44, W.  Wang^72, W. Wang^59, W. P. Wang^71,58, X. Wang^47,g, X. F. Wang^39,j,k, X. J. Wang^40, X. L. Wang^13,f, Y. Wang^61, Y. D. Wang^46, Y. F. Wang^1,58,63, Y. H. Wang^48, Y. N. Wang^46, Y. Q. Wang^1, Yaqian Wang^18,1, Yi Wang^61, Z. Wang^1,58, Z. L.  Wang^72, Z. Y. Wang^1,63, Ziyi Wang^63, D. Wei^70, D. H. Wei^15, F. Weidner^68, S. P. Wen^1, C. W. Wenzel^4, U. Wiedner^4, G. Wilkinson^69, M. Wolke^75, L. Wollenberg^4, C. Wu^40, J. F. Wu^1,63, L. H. Wu^1, L. J. Wu^1,63, X. Wu^13,f, X. H. Wu^35, Y. Wu^71, Y. H. Wu^55, Y. J. Wu^32, Z. Wu^1,58, L. Xia^71,58, X. M. Xian^40, T. Xiang^47,g, D. Xiao^39,j,k, G. Y. Xiao^43, S. Y. Xiao^1, Y.  L. Xiao^13,f, Z. J. Xiao^42, C. Xie^43, X. H. Xie^47,g, Y. Xie^50, Y. G. Xie^1,58, Y. H. Xie^7, Z. P. Xie^71,58, T. Y. Xing^1,63, C. F. Xu^1,63, C. J. Xu^59, G. F. Xu^1, H. Y. Xu^66, Q. J. Xu^17, Q. N. Xu^31, W. Xu^1,63, W. L. Xu^66, X. P. Xu^55, Y. C. Xu^78, Z. P. Xu^43, Z. S. Xu^63, F. Yan^13,f, L. Yan^13,f, W. B. Yan^71,58, W. C. Yan^81, X. Q. Yan^1, H. J. Yang^51,e, H. L. Yang^35, H. X. Yang^1, Tao Yang^1, Y. Yang^13,f, Y. F. Yang^44, Y. X. Yang^1,63, Yifan Yang^1,63, Z. W. Yang^39,j,k, Z. P. Yao^50, M. Ye^1,58, M. H. Ye^9, J. H. Yin^1, Z. Y. You^59, B. X. Yu^1,58,63, C. X. Yu^44, G. Yu^1,63, J. S. Yu^26,h, T. Yu^72, X. D. Yu^47,g, C. Z. Yuan^1,63, L. Yuan^2, S. C. Yuan^1, X. Q. Yuan^1, Y. Yuan^1,63, Z. Y. Yuan^59, C. X. Yue^40, A. A. Zafar^73, F. R. Zeng^50, X. Zeng^13,f, Y. Zeng^26,h, Y. J. Zeng^1,63, X. Y. Zhai^35, Y. C. Zhai^50, Y. H. Zhan^59, A. Q. Zhang^1,63, B. L. Zhang^1,63, B. X. Zhang^1, D. H. Zhang^44, G. Y. Zhang^20, H. Zhang^71, H. H. Zhang^59, H. H. Zhang^35, H. Q. Zhang^1,58,63, H. Y. Zhang^1,58, J. Zhang^81, J. J. Zhang^52, J. L. Zhang^21, J. Q. Zhang^42, J. W. Zhang^1,58,63, J. X. Zhang^39,j,k, J. Y. Zhang^1, J. Z. Zhang^1,63, Jianyu Zhang^63, Jiawei Zhang^1,63, L. M. Zhang^61, L. Q. Zhang^59, Lei Zhang^43, P. Zhang^1,63, Q. Y.  Zhang^40,81, Shuihan Zhang^1,63, Shulei Zhang^26,h, X. D. Zhang^46, X. M. Zhang^1, X. Y. Zhang^50, Xuyan Zhang^55, Y.  Zhang^72, Y. Zhang^69, Y.  T. Zhang^81, Y. H. Zhang^1,58, Yan Zhang^71,58, Yao Zhang^1, Z. H. Zhang^1, Z. L. Zhang^35, Z. Y. Zhang^76, Z. Y. Zhang^44, G. Zhao^1, J. Zhao^40, J. Y. Zhao^1,63, J. Z. Zhao^1,58, Lei Zhao^71,58, Ling Zhao^1, M. G. Zhao^44, S. J. Zhao^81, Y. B. Zhao^1,58, Y. X. Zhao^32,63, Z. G. Zhao^71,58, A. Zhemchugov^37,a, B. Zheng^72, J. P. Zheng^1,58, W. J. Zheng^1,63, Y. H. Zheng^63, B. Zhong^42, X. Zhong^59, H.  Zhou^50, L. P. Zhou^1,63, X. Zhou^76, X. K. Zhou^7, X. R. Zhou^71,58, X. Y. Zhou^40, Y. Z. Zhou^13,f, J. Zhu^44, K. Zhu^1, K. J. Zhu^1,58,63, L. Zhu^35, L. X. Zhu^63, S. H. Zhu^70, S. Q. Zhu^43, T. J. Zhu^13,f, W. J. Zhu^13,f, Y. C. Zhu^71,58, Z. A. Zhu^1,63, J. H. Zou^1, J. 
Zu^71,58 (BESIII Collaboration)^1 Institute of High Energy Physics, Beijing 100049, People's Republic of China^2 Beihang University, Beijing 100191, People's Republic of China^3 Beijing Institute of Petrochemical Technology, Beijing 102617, People's Republic of China^4 BochumRuhr-University, D-44780 Bochum, Germany^5 Budker Institute of Nuclear Physics SB RAS (BINP), Novosibirsk 630090, Russia^6 Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, USA^7 Central China Normal University, Wuhan 430079, People's Republic of China^8 Central South University, Changsha 410083, People's Republic of China^9 China Center of Advanced Science and Technology, Beijing 100190, People's Republic of China^10 China University of Geosciences, Wuhan 430074, People's Republic of China^11 Chung-Ang University, Seoul, 06974, Republic of Korea^12 COMSATS University Islamabad, Lahore Campus, Defence Road, Off Raiwind Road, 54000 Lahore, Pakistan^13 Fudan University, Shanghai 200433, People's Republic of China^14 GSI Helmholtzcentre for Heavy Ion Research GmbH, D-64291 Darmstadt, Germany^15 Guangxi Normal University, Guilin 541004, People's Republic of China^16 Guangxi University, Nanning 530004, People's Republic of China^17 Hangzhou Normal University, Hangzhou 310036, People's Republic of China^18 Hebei University, Baoding 071002, People's Republic of China^19 Helmholtz Institute Mainz, Staudinger Weg 18, D-55099 Mainz, Germany^20 Henan Normal University, Xinxiang 453007, People's Republic of China^21 Henan University, Kaifeng 475004, People's Republic of China^22 Henan University of Science and Technology, Luoyang 471003, People's Republic of China^23 Henan University of Technology, Zhengzhou 450001, People's Republic of China^24 Huangshan College, Huangshan245000, People's Republic of China^25 Hunan Normal University, Changsha 410081, People's Republic of China^26 Hunan University, Changsha 410082, People's Republic of China^27 Indian Institute of Technology Madras, Chennai 600036, India^28 Indiana University, Bloomington, Indiana 47405, USA^29 INFN Laboratori Nazionali di Frascati , (A)INFN Laboratori Nazionali di Frascati, I-00044, Frascati, Italy; (B)INFN Sezione diPerugia, I-06100, Perugia, Italy; (C)University of Perugia, I-06100, Perugia, Italy^30 INFN Sezione di Ferrara, (A)INFN Sezione di Ferrara, I-44122, Ferrara, Italy; (B)University of Ferrara,I-44122, Ferrara, Italy^31 Inner Mongolia University, Hohhot 010021, People's Republic of China^32 Institute of Modern Physics, Lanzhou 730000, People's Republic of China^33 Institute of Physics and Technology, Peace Avenue 54B, Ulaanbaatar 13330, Mongolia^34 Instituto de Alta Investigación, Universidad de Tarapacá, Casilla 7D, Arica 1000000, Chile^35 Jilin University, Changchun 130012, People's Republic of China^36 Johannes Gutenberg University of Mainz, Johann-Joachim-Becher-Weg 45, D-55099 Mainz, Germany^37 Joint Institute for Nuclear Research, 141980 Dubna, Moscow region, Russia^38 Justus-Liebig-Universitaet Giessen, II. 
Physikalisches Institut, Heinrich-Buff-Ring 16, D-35392 Giessen, Germany^39 Lanzhou University, Lanzhou 730000, People's Republic of China^40 Liaoning Normal University, Dalian 116029, People's Republic of China^41 Liaoning University, Shenyang 110036, People's Republic of China^42 Nanjing Normal University, Nanjing 210023, People's Republic of China^43 Nanjing University, Nanjing 210093, People's Republic of China^44 Nankai University, Tianjin 300071, People's Republic of China^45 National Centre for Nuclear Research, Warsaw 02-093, Poland^46 North China Electric Power University, Beijing 102206, People's Republic of China^47 Peking University, Beijing 100871, People's Republic of China^48 Qufu Normal University, Qufu 273165, People's Republic of China^49 Shandong Normal University, Jinan 250014, People's Republic of China^50 Shandong University, Jinan 250100, People's Republic of China^51 Shanghai Jiao Tong University, Shanghai 200240,People's Republic of China^52 Shanxi Normal University, Linfen 041004, People's Republic of China^53 Shanxi University, Taiyuan 030006, People's Republic of China^54 Sichuan University, Chengdu 610064, People's Republic of China^55 Soochow University, Suzhou 215006, People's Republic of China^56 South China Normal University, Guangzhou 510006, People's Republic of China^57 Southeast University, Nanjing 211100, People's Republic of China^58 State Key Laboratory of Particle Detection and Electronics, Beijing 100049, Hefei 230026, People's Republic of China^59 Sun Yat-Sen University, Guangzhou 510275, People's Republic of China^60 Suranaree University of Technology, University Avenue 111, Nakhon Ratchasima 30000, Thailand^61 Tsinghua University, Beijing 100084, People's Republic of China^62 Turkish Accelerator Center Particle Factory Group, (A)Istinye University, 34010, Istanbul, Turkey; (B)Near East University, Nicosia, North Cyprus, 99138, Mersin 10, Turkey^63 University of Chinese Academy of Sciences, Beijing 100049, People's Republic of China^64 University of Groningen, NL-9747 AA Groningen, The Netherlands^65 University of Hawaii, Honolulu, Hawaii 96822, USA^66 University of Jinan, Jinan 250022, People's Republic of China^67 University of Manchester, Oxford Road, Manchester, M13 9PL, United Kingdom^68 University of Muenster, Wilhelm-Klemm-Strasse 9, 48149 Muenster, Germany^69 University of Oxford, Keble Road, Oxford OX13RH, United Kingdom^70 University of Science and Technology Liaoning, Anshan 114051, People's Republic of China^71 University of Science and Technology of China, Hefei 230026, People's Republic of China^72 University of South China, Hengyang 421001, People's Republic of China^73 University of the Punjab, Lahore-54590, Pakistan^74 University of Turin and INFN, (A)University of Turin, I-10125, Turin, Italy; (B)University of Eastern Piedmont, I-15121, Alessandria, Italy; (C)INFN, I-10125, Turin, Italy^75 Uppsala University, Box 516, SE-75120 Uppsala, Sweden^76 Wuhan University, Wuhan 430072, People's Republic of China^77 Xinyang Normal University, Xinyang 464000, People's Republic of China^78 Yantai University, Yantai 264005, People's Republic of China^79 Yunnan University, Kunming 650500, People's Republic of China^80 Zhejiang University, Hangzhou 310027, People's Republic of China^81 Zhengzhou University, Zhengzhou 450001, People's Republic of China ^a Also at the Moscow Institute of Physics and Technology, Moscow 141700, Russia^b Also at the Novosibirsk State University, Novosibirsk, 630090, Russia^c Also at the NRC "Kurchatov Institute", 
PNPI, 188300, Gatchina, Russia^d Also at Goethe University Frankfurt, 60323 Frankfurt am Main, Germany^e Also at Key Laboratory for Particle Physics, Astrophysics and Cosmology, Ministry of Education; Shanghai Key Laboratory for Particle Physics and Cosmology; Institute of Nuclear and Particle Physics, Shanghai 200240, People's Republic of China^f Also at Key Laboratory of Nuclear Physics and Ion-beam Application (MOE) and Institute of Modern Physics, Fudan University, Shanghai 200443, People's Republic of China^g Also at State Key Laboratory of Nuclear Physics and Technology, Peking University, Beijing 100871, People's Republic of China^h Also at School of Physics and Electronics, Hunan University, Changsha 410082, China^i Also at Guangdong Provincial Key Laboratory of Nuclear Science, Institute of Quantum Matter, South China Normal University, Guangzhou 510006, China^j Also at Frontiers Science Center for Rare Isotopes, Lanzhou University, Lanzhou 730000, People's Republic of China^k Also at Lanzhou Center for Theoretical Physics, Lanzhou University, Lanzhou 730000, People's Republic of China^l Also at the Department of Mathematical Sciences, IBA, Karachi 75270, Pakistan
A distinct resonance, X(1835) <cit.>, in the π^+π^-η^' invariant mass spectrum and a dramatic pp̅ mass threshold enhancement <cit.> in J/ψ→γ pp̅ were first observed by BESII, which stimulated both theoretical and experimental interest in their nature. Several theoretical models have been proposed to interpret their internal structures, e.g. a pp̅ bound state <cit.>, a pseudoscalar glueball <cit.>, or a radial excitation of the η^' meson <cit.>. Subsequently these resonances were confirmed by the BESIII <cit.> and CLEO <cit.> experiments and found to have the same J^PC of 0^-+ <cit.>. Meanwhile, a prominent structure, X(1840), was observed in the 3(π^+π^-) invariant mass (M(6π)) spectrum in J/ψ→γ 3(π^+π^-) with a mass of 1842.2±4.2_-2.6^+7.1 MeV/c^2 and a width of 83±14±11 MeV <cit.>. It was interpreted as a new decay mode of X(1835), although its width is substantially narrower than that of X(1835) <cit.>. Of interest is that an updated analysis of J/ψ→γπ^+π^-η^' observed a significant abrupt change in slope of the X(1835)→π^+π^-η^' line-shape at the pp̅ mass threshold, which could originate from the opening of an additional pp̅ decay channel (threshold effect) or from the interference between two different resonance contributions <cit.>. To understand whether a similar phenomenon exists around the pp̅ mass threshold in the M(6π) spectrum, a more detailed investigation of the X(1840) line-shape in J/ψ→γ 3(π^+π^-) with higher precision is worthwhile. In this letter we report an anomalous line-shape of the X(1840) in the M(6π) spectrum in J/ψ→γ 3(π^+π^-), using a sample of (10087±44) × 10^6 J/ψ events <cit.> collected with the BESIII detector. The size of this sample is about fifty times greater than that used in Ref. <cit.>. The BESIII detector, described in detail in <cit.>, records symmetric e^+e^- collisions provided by the BEPCII storage ring <cit.> in the center-of-mass energy range from 2.0 to 4.95 GeV. Simulated data samples produced with a geant4-based <cit.> Monte Carlo (MC) package, which includes the geometric description of the BESIII detector <cit.> and the detector response, are used to determine detection efficiencies and to estimate backgrounds. The simulation models the beam energy spread and initial state radiation (ISR) in the e^+e^- annihilation with the generator kkmc <cit.>. All particle decays are modelled with evtgen <cit.> using branching fractions either taken from the Particle Data Group <cit.>, when available, or otherwise estimated with lundcharm <cit.>.
Final state radiation (FSR) from charged final state particles is incorporated using the photos package <cit.>. Charged tracks detected in the main drift chamber (MDC) are required to be within a polar angle (θ) range of |cosθ|<0.93, where θ is defined with respect to the z-axis, the symmetry axis of the MDC. The distance of closest approach to the interaction point must be less than 10 cm along the z-axis, and less than 1 cm in the transverse plane. Photon candidates are reconstructed using clusters of energy deposited in the electromagnetic calorimeter (EMC), where a minimum energy of 25 MeV for the barrel region (|cosθ|<0.8) and 50 MeV for the endcap region (0.86<|cosθ|<0.92) is required. To suppress electronic noise and showers unrelated to the event, the difference between the EMC time and the event start time is required to be within [0, 700] ns. Candidate events are required to have six charged tracks with zero net charge and at least one photon. All charged tracks are assumed to be pions. A four-momentum-constraint (4C) kinematic fit is performed under the γ3(π^+π^-) hypothesis, and the χ^2_4C of this kinematic fit is required to be less than 30. For events with more than one photon candidate, the γ3(π^+π^-) combination with the minimum χ^2_4C is retained. To suppress backgrounds with a γγ3(π^+π^-) final state, the χ^2_4C is required to be less than that for the kinematically similar γγ3(π^+π^-) hypothesis. Furthermore, for candidate events containing at least two photons, the γγ invariant mass is required to be outside the π^0 mass window of | M_γγ-m_π^0|<0.01 GeV/c^2 to veto backgrounds with a π^0 in their final states. Background processes containing K_S^0 mesons, with the subsequent decay of the K_S^0 to π^+π^-, have the same final state as the signal decay. To suppress this background, K_S^0 candidates are reconstructed from a secondary vertex fit (SVF) to all π^+π^- pairs. The K_S^0 candidates are tagged by passing the SVF successfully and by requiring the π^+π^- invariant mass to be in the range | M_π^+π^--m_K_S^0|<0.005 GeV/c^2, where m_K_S^0 is the known K_S^0 mass. Events with fewer than two K_S^0 candidates are retained for further analysis. After applying the above requirements, the M(6π) spectrum is shown in Figure <ref>, where, in addition to the well established η_c peak and the peak around 3.07 GeV/c^2 from the J/ψ→ 3(π^+π^-) background channel, a distinct structure around 1.84 GeV/c^2 is apparent, and an anomalous line-shape near the pp̅ mass threshold is clearly observed, as shown in the inset plot. Applying exactly the same selection to simulated inclusive J/ψ events as to the data, no peaking background contribution around 1.84 GeV/c^2 is found. The remaining background is mainly from π^0 3(π^+π^-), for which we use a one-dimensional data-driven method to determine its contribution. We first select π^0 3(π^+π^-) events from data and then apply the signal selection criteria to these events. The M(6π) spectrum extracted from these surviving events is further reweighted by the ratio of MC-determined efficiencies for 3(π^+π^-) to π^0 3(π^+π^-) events. To ensure that the anomalous line-shape in data is not caused by a distortion of the detection efficiency due to event selection bias, we studied phase space MC events of J/ψ→γ3(π^+π^-).
As a result, neither the 1.84 GeV/c^2 peak nor the abrupt change in the line-shape near the pp̅ mass threshold is caused by the background processes or by a distortion of the event selection efficiency. We perform an unbinned maximum likelihood fit to the M(6π) spectrum between 1.55 and 2.07 GeV/c^2, with the X(1840) peak represented by an efficiency-corrected Breit-Wigner (BW) function convolved with a Gaussian function to account for the mass resolution, which is determined to be 4 MeV/c^2 from the MC simulation. The dominant background to the X(1840) peak is the non-resonant contribution of J/ψ→γ3(π^+π^-), whose shape is obtained from MC simulation and whose fraction is free in the fit. The J/ψ→ 3(π^+π^-)π^0 background contribution is estimated with the data-driven approach described above. The remaining background is described by a free second-order polynomial function. Unless explicitly mentioned otherwise, all components are treated as incoherent contributions. The fit quality is poor, which implies that a single resonant structure fails to describe the M(6π) spectrum. To resolve the discrepancy with data, two different models for the line shape of the structure around 1.84 GeV/c^2 are applied to investigate the resonances in the M(6π) spectrum. Under the assumption that the 3(π^+π^-) line-shape above the pp̅ mass threshold is affected by the opening of the X(1840)→ pp̅ decay (model I), we try to describe the anomalous shape with a Flatté formula <cit.>, A=|1/(M^2-s-i∑_j g_j^2ρ_j)|^2 , where M is a parameter with the dimension of mass, s is the squared mass of the 3(π^+π^-) combination, ρ_j is the phase space for decay mode j, and g^2_j is the corresponding coupling strength. The ∑_j g_j^2ρ_j term describes how the decay width varies with s. Approximately, ∑_j g_j^2ρ_j≈ g^2_0(ρ_0+g^2_pp̅/g^2_0 ρ_pp̅), where g^2_0 is the sum of the g^2 of all decay modes other than X(1840)→ pp̅, ρ_0 is the maximum two-body decay phase space volume <cit.>, and g^2_pp̅/g^2_0 is the ratio between the coupling strength to the pp̅ channel and the sum over all other channels. This fit, as illustrated in Fig. <ref>, yields M = 1.818 ± 0.009 GeV/c^2, g^2_0 = 18.0 ± 2.8 GeV^2/c^4, and g^2_pp̅ = 51.4 ± 14.8 GeV^2/c^4. This model improves the log ℒ over the simple Breit-Wigner fit by 42.8. The significance of g^2_pp̅/g^2_0 being non-zero is 9.2σ. The goodness of fit is studied using a χ^2 test, and the χ^2 value per number of degrees of freedom (ndof) is found to be χ^2/ndof=317.9/44, which is not acceptable as a good description of the data. A comparison between the fit result of model I and the data reveals a tension around the pp̅ mass threshold. To obtain a better description of the data, a second model (model II) allows for interference between two resonant components, whose coherent sum is defined as A=|1/(M^2_1-s-iM_1Γ_1) + β/(M^2_2-s-iM_2Γ_2)|^2, where M_1, Γ_1, M_2 and Γ_2 represent the masses and widths of the two resonant structures, denoted as X(1840) and X(1880), respectively, and β is a complex parameter accounting for the contribution of the X(1880) relative to the X(1840) as well as the phase between them. The fit with model II improves the fit quality significantly (χ^2/ndof=155.6/41), in particular in the region around the pp̅ mass threshold, and its projection is illustrated in Fig. <ref>.
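To convey the qualitative behaviour of the two fit models, the following minimal numerical sketch (in Python) evaluates the Flatté form (model I) and the coherent two-Breit-Wigner form (model II) as functions of the 3(π^+π^-) mass. The parameter values are taken from the fit results quoted above purely for illustration, ρ_0 is crudely approximated by a constant, the complex parameter β is a made-up placeholder, and the efficiency correction, resolution convolution, and background components of the actual fit are all omitted; this is not the analysis code.

import numpy as np

MP = 0.938272  # proton mass in GeV/c^2; 2*MP is the p pbar threshold

def rho_two_body(s, m1, m2):
    """Two-body phase space factor 2p/sqrt(s); zero below threshold."""
    if s <= (m1 + m2) ** 2:
        return 0.0
    p2 = (s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2) / (4.0 * s)
    return 2.0 * np.sqrt(p2) / np.sqrt(s)

def model_I(m, M=1.818, g0_sq=18.0, gpp_sq=51.4):
    """Flatte form |1/(M^2 - s - i*sum_j g_j^2 rho_j)|^2 (model I).
    rho_0 is approximated here by 1 (its maximal value); illustrative only."""
    s = m * m
    width_term = g0_sq * 1.0 + gpp_sq * rho_two_body(s, MP, MP)
    return abs(1.0 / (M * M - s - 1j * width_term)) ** 2

def model_II(m, M1=1.8325, G1=0.0807, M2=1.8821, G2=0.0307,
             beta=0.3 * np.exp(1j * 2.0)):
    """Coherent sum of two Breit-Wigners (model II); beta is a complex
    relative amplitude whose value here is a placeholder."""
    s = m * m
    bw1 = 1.0 / (M1 * M1 - s - 1j * M1 * G1)
    bw2 = 1.0 / (M2 * M2 - s - 1j * M2 * G2)
    return abs(bw1 + beta * bw2) ** 2

masses = np.linspace(1.55, 2.07, 200)
print(masses[np.argmax([model_I(m) for m in masses])])   # peak position, model I
print(masses[np.argmax([model_II(m) for m in masses])])  # peak position, model II

The distinguishing feature of both forms is their behaviour at s = (2m_p)^2: model I changes slope there because the pp̅ phase space opens, while model II can produce a sharper local distortion through the relative phase of the two amplitudes.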
The masses, widths and signal yields of these two resonant components, as summarized in Table <ref>, are determined to be M_X(1840)=1832.5±3.1 MeV/c^2, Γ_X(1840)=80.7±5.2 MeV, N_X(1840)=20980±5341, M_X(1880)=1882.1±1.7 MeV/c^2, Γ_X(1880)=30.7±5.5 MeV, and N_X(1880)=5460±3757, where the uncertainties are statistical only. The log ℒ of this fit is improved by 116.4 over that of the fit with one simple Breit-Wigner. The statistical significance of the X(1880) is found to be 14.7σ, determined from the change of the log-likelihood value and of the number of degrees of freedom in the fit with and without the X(1880) signal. As discussed in Ref. <cit.>, a fit using a coherent sum of two BW functions may result in two nontrivial solutions with the same resonant parameters. We make an extensive investigation of the fit and find a second solution with the same fit quality, which yields N_X(1840)=36506±8740 and N_X(1880)=22097±5794, corresponding to the destructive interference described in Ref. <cit.>. Since the X(1835) is known to be a pseudoscalar meson, the X(1840) is assumed to be a pseudoscalar particle as well, given its similar behaviour to that of the X(1835). For a radiative J/ψ decay to a pseudoscalar meson, the polar angle of the photon in the J/ψ rest frame, denoted as θ, is expected to follow a 1+cos^2θ distribution. The |cosθ| range [0,0.9] is divided into nine bins to investigate the angular distribution. The number of signal events corresponding to the constructive interference solution in each bin is obtained with the same fit procedure as described above. The result is shown in Fig. <ref>. The angular distributions of the X(1840) and the X(1880) both agree with 1+cos^2θ and support the pseudoscalar-meson interpretation. Under the pseudoscalar-meson hypothesis, the detection efficiencies for J/ψ→γ X(1840) and J/ψ→γ X(1880), 17.4% and 18.4%, are obtained from the MC simulation. The product branching fractions corresponding to the two solutions are summarized in Table <ref>. Sources of systematic uncertainties and their corresponding contributions to the measurement of the branching fractions are summarized in Table <ref>. The uncertainties come from data-MC differences (tracking, photon detection, 4C kinematic fit, etc.), the total number of J/ψ events, and background uncertainties from the change of fit range, the MC model, and MC statistics. For the MC model uncertainty due to the unknown spin-parity of the structures, we use the difference between the phase-space and pseudoscalar-meson hypotheses. In accordance with the previous publication <cit.>, we keep the efficiency with the track helix correction as the nominal value in this work, and take the difference between the efficiencies with and without this correction as the systematic uncertainty from the 4C kinematic fit. The main contribution to the systematic uncertainty comes from the uncertainty in the background estimation, which is assessed by changing the fit range. The uncertainty caused by the contribution above the fit range of the M(6π) spectrum has a considerable effect on the parameterization of the remaining background, which results in a large uncertainty in the branching fractions. The total systematic uncertainty is obtained by adding all of the above contributions in quadrature under the assumption that they are independent.
The total systematic uncertainties on the mass and width are estimated from the background uncertainty due to the fit range and the background description, and are found to be ± 2.5 MeV/c^2 and ± 7.7 MeV for the X(1840), and ± 0.7 MeV/c^2 and ± 2.4 MeV for the X(1880), respectively. Since the mass resolution of 4 MeV/c^2 is much smaller than the widths of these structures, the uncertainty from the detector resolution is found to be negligible. Considering all systematic uncertainties, the final results are shown in Table <ref>. In summary, a study of the radiative decay J/ψ→γ 3(π^+π^-) is performed with a sample of (10087 ± 44) × 10^6 J/ψ events accumulated at the BESIII detector. A significant distortion of the M(6π) distribution near the pp̅ mass threshold is observed for the first time, which is analogous to the distortion observed in the π^+π^-η^' invariant mass spectrum in J/ψ→γπ^+π^-η^' <cit.>. To understand this anomalous line-shape, a few interpretations, including a single structure described by a simple BW or by a line-shape including a threshold effect, and a coherent sum of two structures, are tested. We find that neither a simple BW nor a Flatté function provides a reasonable description of the data. The scheme of a coherent sum of two structures gives a much better description of the anomalous line-shape in the M(6π) spectrum. According to the fit results, the narrow structure, X(1880), has a mass of M=1882.1±1.7±0.7 MeV/c^2 and a width of Γ=30.7±5.5±2.4 MeV. The significance of the X(1880) is 14.7σ compared to the fit result with a single BW. The mass and width of the X(1840) are measured to be M=1832.5 ± 3.1 ±2.5 MeV/c^2 and Γ=80.7±5.2 ±7.7 MeV, which are in agreement with the previous work <cit.>. Two solutions with the same fit quality and identical resonant parameters but different branching fractions, due to constructive or destructive interference, are summarized in Table <ref>. Compared with the two structures observed in the M(π^+π^-η^') spectrum <cit.>, the X(1840) has a mass consistent with that of the X(1835) but a much narrower width. The mass and width of the X(1880) obtained in this work are in reasonable agreement with those reported in Ref. <cit.>, which are 1870.2±2.2^+2.3_-0.7 MeV/c^2 and 13.0±6.1^+2.1_-3.8 MeV, respectively. This further supports the existence of a pp̅ bound state just below the pp̅ mass threshold. At present, more sophisticated parameterizations, such as a mixture of the above two models, cannot be ruled out. The observed anomalous line-shapes in the M(6π) spectrum in J/ψ→γ 3(π^+π^-) and in the π^+π^-η^' invariant mass spectrum in J/ψ→γπ^+π^-η^' reveal complex resonant structures near the pp̅ mass threshold. To establish the relationship between the different resonances in the mass region of [1.8,1.9] GeV/c^2 and to determine the nature of the underlying resonant structures, more data along with additional measurements, including the determination of the spin-parity quantum numbers and a coupled-channel amplitude analysis, are highly desirable. The BESIII Collaboration thanks the staff of BEPCII and the IHEP computing center for their strong support. This work is supported in part by the National Key R&D Program of China under Contracts Nos. 2020YFA0406300, 2020YFA0406400; National Natural Science Foundation of China (NSFC) under Contracts Nos.
11635010, 11735014, 11835012, 11935015, 11935016, 11935018, 11961141012, 12022510, 12025502, 12035009, 12035013, 12061131003, 12192260, 12192261, 12192262, 12192263, 12192264, 12192265, 12221005, 12225509, 12235017; the Chinese Academy of Sciences (CAS) Large-Scale Scientific Facility Program; the CAS Center for Excellence in Particle Physics (CCEPP); Joint Large-Scale Scientific Facility Funds of the NSFC and CAS under Contract No. U1832207; CAS Key Research Program of Frontier Sciences under Contracts Nos. QYZDJ-SSW-SLH003, QYZDJ-SSW-SLH040; 100 Talents Program of CAS; The Institute of Nuclear and Particle Physics (INPAC) and Shanghai Key Laboratory for Particle Physics and Cosmology; ERC under Contract No. 758462; European Union's Horizon 2020 research and innovation programme under Marie Sklodowska-Curie grant agreement under Contract No. 894790; German Research Foundation DFG under Contracts Nos. 455635585, Collaborative Research Center CRC 1044, FOR5327, GRK 2149; Istituto Nazionale di Fisica Nucleare, Italy; Ministry of Development of Turkey under Contract No. DPT2006K-120470; National Research Foundation of Korea under Contract No. NRF-2022R1A2C1092335; National Science and Technology fund of Mongolia; National Science Research and Innovation Fund (NSRF) via the Program Management Unit for Human Resources & Institutional Development, Research and Innovation of Thailand under Contract No. B16F640076; Polish National Science Centre under Contract No. 2019/35/O/ST2/02907; The Swedish Research Council; U. S. Department of Energy under Contract No. DE-FG02-05ER41374.
{ "authors": [ "BESIII Collaboration", "M. Ablikim", "M. N. Achasov", "P. Adlarson", "X. C. Ai", "R. Aliberti", "A. Amoroso", "M. R. An", "Q. An", "Y. Bai", "O. Bakina", "I. Balossino", "Y. Ban", "V. Batozskaya", "K. Begzsuren", "N. Berger", "M. Berlowski", "M. Bertani", "D. Bettoni", "F. Bianchi", "E. Bianco", "A. Bortone", "I. Boyko", "R. A. Briere", "A. Brueggemann", "H. Cai", "X. Cai", "A. Calcaterra", "G. F. Cao", "N. Cao", "S. A. Cetin", "J. F. Chang", "T. T. Chang", "W. L. Chang", "G. R. Che", "G. Chelkov", "C. Chen", "Chao Chen", "G. Chen", "H. S. Chen", "M. L. Chen", "S. J. Chen", "S. L. Chen", "S. M. Chen", "T. Chen", "X. R. Chen", "X. T. Chen", "Y. B. Chen", "Y. Q. Chen", "Z. J. Chen", "W. S. Cheng", "S. K. Choi", "X. Chu", "G. Cibinetto", "S. C. Coen", "F. Cossio", "J. J. Cui", "H. L. Dai", "J. P. Dai", "A. Dbeyssi", "R. E. de Boer", "D. Dedovich", "Z. Y. Deng", "A. Denig", "I. Denysenko", "M. Destefanis", "F. De Mori", "B. Ding", "X. X. Ding", "Y. Ding", "Y. Ding", "J. Dong", "L. Y. Dong", "M. Y. Dong", "X. Dong", "M. C. Du", "S. X. Du", "Z. H. Duan", "P. Egorov", "Y. H. Fan", "Y. L. Fan", "J. Fang", "S. S. Fang", "W. X. Fang", "Y. Fang", "R. Farinelli", "L. Fava", "F. Feldbauer", "G. Felici", "C. Q. Feng", "J. H. Feng", "K Fischer", "M. Fritsch", "C. Fritzsch", "C. D. Fu", "J. L. Fu", "Y. W. Fu", "H. Gao", "Y. N. Gao", "Yang Gao", "S. Garbolino", "I. Garzia", "P. T. Ge", "Z. W. Ge", "C. Geng", "E. M. Gersabeck", "A Gilman", "K. Goetzen", "L. Gong", "W. X. Gong", "W. Gradl", "S. Gramigna", "M. Greco", "M. H. Gu", "Y. T. Gu", "C. Y Guan", "Z. L. Guan", "A. Q. Guo", "L. B. Guo", "M. J. Guo", "R. P. Guo", "Y. P. Guo", "A. Guskov", "T. T. Han", "W. Y. Han", "X. Q. Hao", "F. A. Harris", "K. K. He", "K. L. He", "F. H. H. Heinsius", "C. H. Heinz", "Y. K. Heng", "C. Herold", "T. Holtmann", "P. C. Hong", "G. Y. Hou", "X. T. Hou", "Y. R. Hou", "Z. L. Hou", "H. M. Hu", "J. F. Hu", "T. Hu", "Y. Hu", "G. S. Huang", "K. X. Huang", "L. Q. Huang", "X. T. Huang", "Y. P. Huang", "T. Hussain", "N Hüsken", "W. Imoehl", "N. in der Wiesche", "J. Jackson", "S. Jaeger", "S. Janchiv", "J. H. Jeong", "Q. Ji", "Q. P. Ji", "X. B. Ji", "X. L. Ji", "Y. Y. Ji", "X. Q. Jia", "Z. K. Jia", "H. J. Jiang", "P. C. Jiang", "S. S. Jiang", "T. J. Jiang", "X. S. Jiang", "Y. Jiang", "J. B. Jiao", "Z. Jiao", "S. Jin", "Y. Jin", "M. Q. Jing", "T. Johansson", "X. K.", "S. Kabana", "N. Kalantar-Nayestanaki", "X. L. Kang", "X. S. Kang", "M. Kavatsyuk", "B. C. Ke", "A. Khoukaz", "R. Kiuchi", "R. Kliemt", "O. B. Kolcu", "B. Kopf", "M. Kuessner", "A. Kupsc", "W. Kühn", "J. J. Lane", "P. Larin", "A. Lavania", "L. Lavezzi", "T. T. Lei", "Z. H. Lei", "H. Leithoff", "M. Lellmann", "T. Lenz", "C. Li", "C. Li", "C. H. Li", "Cheng Li", "D. M. Li", "F. Li", "G. Li", "H. Li", "H. B. Li", "H. J. Li", "H. N. Li", "Hui Li", "J. R. Li", "J. S. Li", "J. W. Li", "K. L. Li", "Ke Li", "L. J Li", "L. K. Li", "Lei Li", "M. H. Li", "P. R. Li", "Q. X. Li", "S. X. Li", "T. Li", "W. D. Li", "W. G. Li", "X. H. Li", "X. L. Li", "Xiaoyu Li", "Y. G. Li", "Z. J. Li", "Z. X. Li", "C. Liang", "H. Liang", "H. Liang", "H. Liang", "Y. F. Liang", "Y. T. Liang", "G. R. Liao", "L. Z. Liao", "Y. P. Liao", "J. Libby", "A. Limphirat", "D. X. Lin", "T. Lin", "B. J. Liu", "B. X. Liu", "C. Liu", "C. X. Liu", "F. H. Liu", "Fang Liu", "Feng Liu", "G. M. Liu", "H. Liu", "H. B. Liu", "H. M. Liu", "Huanhuan Liu", "Huihui Liu", "J. B. Liu", "J. L. Liu", "J. Y. Liu", "K. Liu", "K. Y. Liu", "Ke Liu", "L. Liu", "L. C. Liu", "Lu Liu", "M. H. Liu", "P. L. Liu", "Q. Liu", "S. B. 
Liu", "T. Liu", "W. K. Liu", "W. M. Liu", "X. Liu", "Y. Liu", "Y. Liu", "Y. B. Liu", "Z. A. Liu", "Z. Q. Liu", "X. C. Lou", "F. X. Lu", "H. J. Lu", "J. G. Lu", "X. L. Lu", "Y. Lu", "Y. P. Lu", "Z. H. Lu", "C. L. Luo", "M. X. Luo", "T. Luo", "X. L. Luo", "X. R. Lyu", "Y. F. Lyu", "F. C. Ma", "H. L. Ma", "J. L. Ma", "L. L. Ma", "M. M. Ma", "Q. M. Ma", "R. Q. Ma", "R. T. Ma", "X. Y. Ma", "Y. Ma", "Y. M. Ma", "F. E. Maas", "M. Maggiora", "S. Malde", "Q. A. Malik", "A. Mangoni", "Y. J. Mao", "Z. P. Mao", "S. Marcello", "Z. X. Meng", "J. G. Messchendorp", "G. Mezzadri", "H. Miao", "T. J. Min", "R. E. Mitchell", "X. H. Mo", "N. Yu. Muchnoi", "J. Muskalla", "Y. Nefedov", "F. Nerling", "I. B. Nikolaev", "Z. Ning", "S. Nisar", "W. D. Niu", "Y. Niu", "S. L. Olsen", "Q. Ouyang", "S. Pacetti", "X. Pan", "Y. Pan", "A. Pathak", "P. Patteri", "Y. P. Pei", "M. Pelizaeus", "H. P. Peng", "K. Peters", "J. L. Ping", "R. G. Ping", "S. Plura", "S. Pogodin", "V. Prasad", "F. Z. Qi", "H. Qi", "H. R. Qi", "M. Qi", "T. Y. Qi", "S. Qian", "W. B. Qian", "C. F. Qiao", "J. J. Qin", "L. Q. Qin", "X. P. Qin", "X. S. Qin", "Z. H. Qin", "J. F. Qiu", "S. Q. Qu", "C. F. Redmer", "K. J. Ren", "A. Rivetti", "M. Rolo", "G. Rong", "Ch. Rosner", "S. N. Ruan", "N. Salone", "A. Sarantsev", "Y. Schelhaas", "K. Schoenning", "M. Scodeggio", "K. Y. Shan", "W. Shan", "X. Y. Shan", "J. F. Shangguan", "L. G. Shao", "M. Shao", "C. P. Shen", "H. F. Shen", "W. H. Shen", "X. Y. Shen", "B. A. Shi", "H. C. Shi", "J. L. Shi", "J. Y. Shi", "Q. Q. Shi", "R. S. Shi", "X. Shi", "J. J. Song", "T. Z. Song", "W. M. Song", "Y. J. Song", "Y. X. Song", "S. Sosio", "S. Spataro", "F. Stieler", "Y. J. Su", "G. B. Sun", "G. X. Sun", "H. Sun", "H. K. Sun", "J. F. Sun", "K. Sun", "L. Sun", "S. S. Sun", "T. Sun", "W. Y. Sun", "Y. Sun", "Y. J. Sun", "Y. Z. Sun", "Z. T. Sun", "Y. X. Tan", "C. J. Tang", "G. Y. Tang", "J. Tang", "Y. A. Tang", "L. Y Tao", "Q. T. Tao", "M. Tat", "J. X. Teng", "V. Thoren", "W. H. Tian", "W. H. Tian", "Y. Tian", "Z. F. Tian", "I. Uman", "S. J. Wang", "B. Wang", "B. L. Wang", "Bo Wang", "C. W. Wang", "D. Y. Wang", "F. Wang", "H. J. Wang", "H. P. Wang", "J. P. Wang", "K. Wang", "L. L. Wang", "M. Wang", "Meng Wang", "S. Wang", "S. Wang", "T. Wang", "T. J. Wang", "W. Wang", "W. Wang", "W. P. Wang", "X. Wang", "X. F. Wang", "X. J. Wang", "X. L. Wang", "Y. Wang", "Y. D. Wang", "Y. F. Wang", "Y. H. Wang", "Y. N. Wang", "Y. Q. Wang", "Yaqian Wang", "Yi Wang", "Z. Wang", "Z. L. Wang", "Z. Y. Wang", "Ziyi Wang", "D. Wei", "D. H. Wei", "F. Weidner", "S. P. Wen", "C. W. Wenzel", "U. Wiedner", "G. Wilkinson", "M. Wolke", "L. Wollenberg", "C. Wu", "J. F. Wu", "L. H. Wu", "L. J. Wu", "X. Wu", "X. H. Wu", "Y. Wu", "Y. H. Wu", "Y. J. Wu", "Z. Wu", "L. Xia", "X. M. Xian", "T. Xiang", "D. Xiao", "G. Y. Xiao", "S. Y. Xiao", "Y. L. Xiao", "Z. J. Xiao", "C. Xie", "X. H. Xie", "Y. Xie", "Y. G. Xie", "Y. H. Xie", "Z. P. Xie", "T. Y. Xing", "C. F. Xu", "C. J. Xu", "G. F. Xu", "H. Y. Xu", "Q. J. Xu", "Q. N. Xu", "W. Xu", "W. L. Xu", "X. P. Xu", "Y. C. Xu", "Z. P. Xu", "Z. S. Xu", "F. Yan", "L. Yan", "W. B. Yan", "W. C. Yan", "X. Q. Yan", "H. J. Yang", "H. L. Yang", "H. X. Yang", "Tao Yang", "Y. Yang", "Y. F. Yang", "Y. X. Yang", "Yifan Yang", "Z. W. Yang", "Z. P. Yao", "M. Ye", "M. H. Ye", "J. H. Yin", "Z. Y. You", "B. X. Yu", "C. X. Yu", "G. Yu", "J. S. Yu", "T. Yu", "X. D. Yu", "C. Z. Yuan", "L. Yuan", "S. C. Yuan", "X. Q. Yuan", "Y. Yuan", "Z. Y. Yuan", "C. X. Yue", "A. A. Zafar", "F. R. Zeng", "X. Zeng", "Y. Zeng", "Y. J. Zeng", "X. Y. Zhai", "Y. C. 
Zhai", "Y. H. Zhan", "A. Q. Zhang", "B. L. Zhang", "B. X. Zhang", "D. H. Zhang", "G. Y. Zhang", "H. Zhang", "H. H. Zhang", "H. H. Zhang", "H. Q. Zhang", "H. Y. Zhang", "J. Zhang", "J. J. Zhang", "J. L. Zhang", "J. Q. Zhang", "J. W. Zhang", "J. X. Zhang", "J. Y. Zhang", "J. Z. Zhang", "Jianyu Zhang", "Jiawei Zhang", "L. M. Zhang", "L. Q. Zhang", "Lei Zhang", "P. Zhang", "Q. Y. Zhang", "Shuihan Zhang", "Shulei Zhang", "X. D. Zhang", "X. M. Zhang", "X. Y. Zhang", "Xuyan Zhang", "Y. Zhang", "Y. Zhang", "Y. T. Zhang", "Y. H. Zhang", "Yan Zhang", "Yao Zhang", "Z. H. Zhang", "Z. L. Zhang", "Z. Y. Zhang", "Z. Y. Zhang", "G. Zhao", "J. Zhao", "J. Y. Zhao", "J. Z. Zhao", "Lei Zhao", "Ling Zhao", "M. G. Zhao", "S. J. Zhao", "Y. B. Zhao", "Y. X. Zhao", "Z. G. Zhao", "A. Zhemchugov", "B. Zheng", "J. P. Zheng", "W. J. Zheng", "Y. H. Zheng", "B. Zhong", "X. Zhong", "H. Zhou", "L. P. Zhou", "X. Zhou", "X. K. Zhou", "X. R. Zhou", "X. Y. Zhou", "Y. Z. Zhou", "J. Zhu", "K. Zhu", "K. J. Zhu", "L. Zhu", "L. X. Zhu", "S. H. Zhu", "S. Q. Zhu", "T. J. Zhu", "W. J. Zhu", "Y. C. Zhu", "Z. A. Zhu", "J. H. Zou", "J. Zu" ], "categories": [ "hep-ex" ], "primary_category": "hep-ex", "published": "20231027072416", "title": "Observation of the Anomalous Shape of $X(1840)$ in $J/ψ\\rightarrow γ3(π^+ π^-)$" }
Accepted: 8 August 2023

As more and more decisions that have a significant ethical dimension are being outsourced to AI systems, it is important to have a definition of moral responsibility that can be applied to AI systems. Moral responsibility for an outcome of an agent who performs some action is commonly taken to involve both a causal condition and an epistemic condition: the action should cause the outcome, and the agent should have been aware, in some form or other, of the possible moral consequences of their action. This paper presents a formal definition of both conditions within the framework of causal models. I compare my approach to the existing approaches of Braham and van Hees (BvH) and of Halpern and Kleiman-Weiner (HK). I then generalize my definition into a degree of responsibility.

§ INTRODUCTION

As more and more decisions that have a significant ethical dimension are being outsourced to AI systems, it is important to have a definition of responsibility that can be applied to the decisions of AI systems, and that can be used by AI systems in the process of their decision-making <cit.>. To meet the first condition, such a definition should require only a minimal notion of agency and instead focus on those aspects of responsibility that are readily applicable to (current) AI systems. To meet the second condition, such a definition should be formulated in a language that can be implemented into an AI system, so that it can integrate judgments of responsibility into its decision-making. This paper sets out to propose such a definition using the well-established framework of causal models <cit.>. There exist different notions of moral responsibility that one might be interested in, and here we restrict attention to just one of them, namely responsibility for consequences, meaning the responsibility one has for a particular outcome that is the result of performing a particular action. This can be expressed more clearly by saying that the action caused the outcome, and therefore the first condition of concern here is the causal condition on responsibility <cit.>. The past two decades have seen immense progress on formal definitions of actual causation using causal models, and the definition developed here takes full advantage of this progress: I compare some recent proposals and adopt the one that correctly handles several complicated cases to be considered <cit.>. Our actions can cause all kinds of outcomes for which we are clearly not morally responsible: if a train crashes into a car that illegally crosses the railroad tracks, then the train conductor is not responsible for the car driver's death; if you turn on a light switch in a hotel room, then you are not responsible if a short-circuit follows; etc. The standard intuition that we have in such cases is that the agent “could not have known” that their action would cause the outcome.
This is why definitions of responsibility also invoke an epistemic condition, stating roughly that the agent should have been able to foresee that they are performing an action which could result in them being responsible for the outcome <cit.>. In addition to the causal and the epistemic conditions, it is standard to demand that responsibility also requires the fulfilment of a control condition (sometimes also called a freedom condition), which expresses the fact that the agent had the right sort of control whilst performing their action <cit.>. Due to its close connection to issues of free will and determinism, this condition is heavily debated within philosophy. Within the context of (current) AI systems, however, the control condition can take on a more mundane form: any action that was a result of the correct operation of its program can be viewed as being under the AI's control. Therefore I simply take there to be a specific action variable that ranges over a set of possible actions, and assume that whenever the AI system is running successfully it has control over the value that this variable takes. My approach proceeds along the same lines as that of Braham and van Hees (BvH) <cit.>. They offer the most influential formalization of moral responsibility that incorporates both the causal and the epistemic conditions, and therefore their work forms an appropriate point of comparison. Although I agree with the spirit of their approach, I disagree with its formulation. First, their causal condition defines causation as being a Necessary Element of a Sufficient Set (NESS). However, their use of game theory instead of causal models results in an overly simplistic view of NESS-causation that cannot handle indirect causation. Therefore I first formulate their definition using causal models, and then show how to modify it so that it can overcome this limitation. Second, I disagree with the particulars of both their causal and their epistemic conditions. I argue for replacing the NESS definition of causation with my recently developed Counterfactual NESS (CNESS) definition <cit.>. Their epistemic condition states that the agent should minimize the probability of causation. I argue for giving that condition a secondary role: minimizing the probability of causing the outcome is subservient to minimizing the probability of the outcome simpliciter. I analyze several examples to illustrate the superiority of my conditions. More recently, Halpern & Kleiman-Weiner (HK) <cit.> used causal models to propose definitions of several concepts that are closely related to moral responsibility. Although they do not explicitly define moral responsibility, they do suggest using the modified Halpern & Pearl (HP) definition of causation for the causal condition <cit.>. The HP definition correctly handles most of the counterexamples to the NESS definition presented here, but I discuss two types of example for which it fails (whereas the CNESS definition does not). HK also offer a definition of “degree of blameworthiness” that for all intents and purposes is very similar to an epistemic condition: it measures the extent to which the agent minimized the probability of the outcome. I present a case in which the epistemic conditions of BvH and HK conflict in order to argue that a more elaborate epistemic condition is required. My epistemic condition combines that of HK with that of BvH by demanding that an agent minimizes the probability of the outcome, but if possible also minimizes the probability of causation.
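To make the contrast between these two epistemic conditions concrete, the following sketch (in Python) compares three candidate actions by the agent's probability that the bad outcome occurs and the agent's probability that the action causes it. The numbers, the action names, and the simple decision rules are purely illustrative stand-ins for the conditions just described; they are not the formal definitions developed later in the paper.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    p_outcome: float  # agent's probability that the bad outcome occurs
    p_cause: float    # agent's probability that this action causes the outcome

# Hypothetical numbers: action "b" makes the outcome slightly less likely than
# "a" but is more likely to be a cause of it if it does occur.
actions = [
    Action("a", p_outcome=0.30, p_cause=0.05),
    Action("b", p_outcome=0.25, p_cause=0.20),
    Action("c", p_outcome=0.25, p_cause=0.10),
]

def hk_choice(acts):
    """HK-style rule: minimize the probability of the outcome simpliciter."""
    return min(acts, key=lambda a: a.p_outcome)

def bvh_choice(acts):
    """BvH-style rule: minimize the probability of causing the outcome."""
    return min(acts, key=lambda a: a.p_cause)

def combined_choice(acts):
    """Combined rule sketched above: first minimize the probability of the
    outcome, then break ties by minimizing the probability of causation."""
    return min(acts, key=lambda a: (a.p_outcome, a.p_cause))

print(hk_choice(actions).name)        # "b" (first of the tied minimizers)
print(bvh_choice(actions).name)       # "a"
print(combined_choice(actions).name)  # "c"

On these made-up numbers the BvH-style rule prefers the action that makes the outcome most likely, which is the kind of conflict alluded to above, whereas the combined rule agrees with HK on the outcome probability and only then takes causation into account.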
[Note that the epistemic conditions of HK and BvH are not necessarily inconsistent: if one simply defines causation as an increase in the probability of the outcome occurring, they become equivalent. Except for the fact that he uses objective probabilities rather than those of the agent, this is roughly the proposal defended in <cit.>. As exemplified by the examples to be discussed (and by browsing the recent literature on causation), such a naive probabilistic approach to causation is unable to deliver sensible verdicts.] An important conceptual restriction throughout this paper is that I assume there to be only one relevant outcome variable O, where relevant is understood as encompassing both moral and amoral preferences.[In terms of BvH, this amounts to assuming that all actions are “eligible”; in terms of HK, it is equivalent to assuming that the agent's utility is exclusively a function of O.] Concretely, I am restricting attention to scenarios in which an agent's only concern is with the various possible outcomes O=o, so that all other possible consequences of their action (as well as the action itself) are deemed entirely irrelevant. I make this restriction in order to avoid complications due to the distinction between an outcome that is intended and an outcome that is a mere side-effect, which is at stake in trolley cases and similar scenarios. When only one variable is in play, this distinction disappears, and thus we need not worry about situations in which responsibility for some outcome is undermined by the responsibility for some other outcome. (Such situations are the subject matter of the so-called doctrine of double effect <cit.>.) Moreover, it allows us to ignore complications considered by HK of taking into account the “cost” of performing an action. Therefore in our setting the notion of responsibility is related to the notions of blame- and praiseworthiness as follows: if one is responsible for an outcome that has negative moral valence (i.e., it is “bad”), then one is blameworthy, and likewise for an outcome with positive valence (i.e., it is “good”) and being praiseworthy. Since understanding judgments of blame is a far more urgent matter in the context of AI, and arguably in the context of responsibility judgments more generally, for the most part the focus will be on bad outcomes. I realize the above conceptual restriction is unrealistic, since in most cases there are multiple, logically independent outcomes that an agent is concerned with. Four observations are called for, though. First, the restriction is relevant only for the epistemic condition and not for the causal condition, because whether some event causes another is (obviously) entirely independent of the agent's concerns. Second, this is an informal restriction: nowhere in my definitions do I require there to be only a single outcome variable. The point is merely that one should be cautious when applying my definition in case there are multiple variables that the agent considers relevant. Third, even in such cases, my definition can still be interpreted as capturing the type of responsibility that is associated with foreseeing an outcome when performing an action (as opposed to the stronger type of responsibility that is associated with intending an outcome when performing an action).
This distinction is an important one in the legal context, where it shows up as the distinction between oblique – or indirect – intent and direct intent: the latter suffices for criminal liability, whereas the former is usually seen to only warrant less severe judgments of culpability <cit.>.Fourth, even multiple, distinct, outcomes can in many cases be suitably represented by a single outcome variable. For example, say – through no fault of his own – a driver is suddenly confronted with a binary choice to run into either one of two pedestrians, call them Right and Left. Obviously by choosing to turn right rather than left, the driver knows that they will cause Right to be injured rather than remain uninjured. Yet it sounds strange (or inappropriate, at least) to claim that the driver is to be blamed for the outcome that Right is injured, despite this knowledge. We can avoidthis claim by taking our single (binary) outcome variable O to represent whether someone is injured: in that case my definition reaches the verdict that the driver is not to be blamed. Although this strategy can be generalized to deal with cases in which different events are not causally related to each other (but instead have the agent's action as a possible common cause), it is appropriate only when we consider the different realizations of the outcome to be sufficiently similar. For example, if the injury to Left clearly would be mild whereas that to Right is severe, then it seems inappropriate to represent both events using a single value of a single variable. What makes for different events to be sufficiently similar in this regard is going to be highly context-dependent, and it is a problem that faces anyone representing the world using variables. A detailed discussion of this issue is beyond the scope of this paper.The upshot is that although the definition here presented needs to be extended with further conditions for it to properly handle all cases involving multiple outcomes, the restriction to a single outcome variable is less severe than it might initially seem.With those caveats out of the way, we can formulate the general schema that encompasses all the definitions of responsibility that I aim to consider.Here is the general schema that encompasses all definitions of responsibility that I aim to consider. An agent who performs A=a is responsible for outcome O=o if: * (Control Condition) The agent had control over A=a. * (Causal Condition) A=a causes O=o. * (Epistemic Condition) The agent believes that they could have avoided being responsible for O=o by performing some alternative action A=a'.As mentioned, I simply assume that the Control Condition is always met (as do BvH, who call it the Agency Condition). For sake of brevity, I leave it implicit from now on.Formalizing the Causal Condition comes down to settling the discussion on how to formalize actual causation, which has received considerable attention over the past two decades <cit.>. A full discussion of causation would be too ambitious for the present purposes. Instead, I evaluate the suitability of several definitions of causation within the context of responsibility by presenting examples that bring across how they differ. On the basis of this evaluation I suggest adopting the CNESS definition and refer the reader to <cit.> for a more general motivation.The Epistemic Condition requires settling the question: what does it take for the agent to believe that performing A=a' allows them to avoid responsibility? 
Since this condition uses the notion of responsibility, our Schema is circular. There exist different ways of filling it in so that it no longer is circular, and it is this flexibility that makes filling in the condition interesting. One possible suggestion is to demand that the agent believed A=a' would not result in the outcome O=o, another weaker suggestion is to demand that the agent believed A=a' would not cause O=o, etc..A note of clarification is in order before we proceed. The current work does not aim to offer a complete theory of moral responsibility for AI systems, but rather zooms in on the above conditions whilst ignoring certain others. Concretely, here are some important issues that I set aside in this paper. §.§ Some Limitations and Related Work There exist forms of moral responsibility that do not (always) involve causation, such as those that follow from certain societal norms and expectations. For example, a captain is responsible for everything that happens on their ship, a parent is responsible for the behavior of their child, etc.. More generally, assigning responsibility to AI systems should itself be seen as just one part of the wider discussion on accountability that arises from the introduction of such systems into our society <cit.>. Relatedly, responsibility is often associated with the morally stronger notions of blame and praise. I take responsibility to be a weaker notion that necessarily precedes judgments of blame and praise: one cannot be blameworthy for an outcome unless one is responsible for it, and similarly for praise. To develop definitions of blame and praise requires bringing into view both the absolute moral valence of the outcome O=o (was it good or bad?) and its relative valence (was it better/worse than an alternative which it prevented?), as well as the costs incurred by the agent when performing an action. As the vast literature on trolley cases and other moral dilemmas illustrates, these issues make matters significantly more complex <cit.>. One condition in particular that seems highly relevant to assigning blame (resp. praise) is to consider whether the outcome caused by the agent is harmful (resp. beneficial) or not. Indeed, one natural way of implementing a formal definition of responsibility within AI systems is to demand that it tries to avoid becoming responsible for harmful outcomes. This is confirmed by the recent European AI Act, which categorizes the risk that an AI system poses based on how likely it is to cause harm <cit.>. Beckers et. al. recently proposed a causal analysis of harm that is also formalized using causal models, and thus it could easily be integrated into the present proposal <cit.>.[I should note that they use the HP-definition of causation, which I criticize below. However, they state explicitly that their approach applies just as well to other definitions of causation.] In the present paper, however, I choose to focus exclusively on defining responsibility, thereby paving the way for future definitions of blame and praise.Duijf presents a formalization of moral responsibility for outcomes that is likewise inspired by, but not an endorsement of, BvH <cit.>. Rather than defending an alternative definition of responsibility as I do, he presents a broad lanscape of completely formal conditions for responsibility that one might consider and analyzes their logical relations. 
As with BvH, his definition of NESS causation is formulated using game-theory, and thus it is likewise restricted to applications of direct causation. The next section introduces the formalism of causal models that will be used to express all candidate definitions and related notions. Section <ref> presents the BvH and HK definitions of responsibility and their respective definitions of causation. Section <ref> discusses the Causal Condition by introducing two more definitions of causation and offers some examples to argue in favor of adopting the CNESS definition. (Some further examples are offered in the appendix.) We move on to a discussion of the Epistemic Condition in Section <ref>, which leads the way to my definition of moral responsibility. Since responsibility is often taken to come in degrees, in Section <ref> I define the degree of responsibility and sketch how it helps interpret recent empirical work in psychology on responsibility judgments. § CAUSAL MODELS This section reviews the definition of causal models as understood in the structural modeling tradition started by Pearl <cit.>, where I use the notation from Halpern <cit.>. A signature 𝒮 is a tuple (𝒰,𝒱,ℛ), where 𝒰 is a set of exogenous variables, 𝒱 is a set of endogenous variables, and ℛ is a function that associates with every variable Y ∈ 𝒰 ∪ 𝒱 a nonempty set ℛ(Y) of possible values for Y (i.e., the set of values over which Y ranges). If X⃗ = (X_1, …, X_n), ℛ(X⃗) denotes the crossproduct ℛ(X_1) × ⋯ × ℛ(X_n). Exogenous variables represent unobserved factors whose causal origins are outside the scope of the causal model, such as background conditions and noise. The values of the endogenous variables, on the other hand, are causally determined by other variables within the model. A causal model M is a pair (𝒮,ℱ), where 𝒮 is a signature and ℱ defines a function that associates with each endogenous variable Y a structural equation F_Y giving the value of Y in terms of the values of other endogenous and exogenous variables. Formally, the equation F_Y maps ℛ(𝒰 ∪ 𝒱 - {Y}) to ℛ(Y), so F_Y determines the value of Y, given the values of all the other variables in 𝒰 ∪ 𝒱. We usually write the equation for an endogenous variable as Y = f(X⃗), where the variables X⃗ are called the parents of Y (and Y is called a child of each variable in X⃗), and the function f is such that it only depends on the values of X⃗. The ancestor relation is the transitive closure of the parent relation. In this paper we restrict attention to acyclic models, that is, models where no variable is an ancestor of itself. A (directed) path is a sequence of variables in which each element is a child of the previous element. In this manner an acyclic causal model induces a unique DAG, i.e., a Directed Acyclic Graph, which is simply a graphical representation of all the ancestral relations. An intervention has the form X⃗ ← x⃗, where X⃗ is a set of endogenous variables. Intuitively, this means that the values of the variables in X⃗ are set to the values x⃗. The equations define what happens in the presence of interventions. The intervention X⃗ ← x⃗ in a causal model M = (𝒮,ℱ) results in a new causal model, denoted M_X⃗ ← x⃗, which is identical to M, except that ℱ is replaced by ℱ^X⃗ ← x⃗: for each variable Y ∉ X⃗, F^X⃗ ← x⃗_Y = F_Y (i.e., the equation for Y is unchanged), while for each X' in X⃗, the equation F_X' for X' is replaced by X' = x' (where x' is the value in x⃗ corresponding to X'). Given a signature 𝒮 = (𝒰,𝒱,ℛ), an atomic formula is a formula of the form X = x, for X ∈ 𝒱 and x ∈ ℛ(X).
A causal formula (over 𝒮) is one of the form [Y_1 ← y_1, …, Y_k ← y_k] ϕ, where * ϕ is a Boolean combination of atomic formulas, * Y_1, …, Y_k are distinct variables in 𝒱, and * y_i ∈ ℛ(Y_i) for each 1 ≤ i ≤ k. Such a formula is abbreviated as [Y⃗ ← y⃗]ϕ. The special case where k=0 is abbreviated as ϕ. Intuitively, [Y_1 ← y_1, …, Y_k ← y_k] ϕ says that ϕ would hold if Y_i were set to y_i, for i = 1,…,k. We call a setting u⃗ ∈ ℛ(𝒰) of values of exogenous variables a context. A causal formula ψ is true or false in a causal setting, which is a causal model given a context. As usual, we write (M,u⃗) ⊨ ψ if the causal formula ψ is true in the causal setting (M,u⃗). The ⊨ relation is defined inductively. (M,u⃗) ⊨ X = x if the variable X has value x in the unique (since we are dealing with recursive models) solution to the equations in M in context u⃗ (i.e., the unique vector of values that simultaneously satisfies all equations in M with the variables in 𝒰 set to u⃗). The truth of conjunctions and negations is defined in the standard way. Finally, (M,u⃗) ⊨ [Y⃗ ← y⃗]ϕ if (M_Y⃗ ← y⃗,u⃗) ⊨ ϕ. In addition to the causal setting (M, u⃗) that describes both the objective causal relations and their actual realization, we also need to represent the agent's beliefs regarding what could possibly happen in order to fill in the Epistemic Condition. I do so in the same manner as proposed by HK: we take Pr to be a probability distribution over a set of causal settings 𝒦, so that Pr expresses the agent's subjective probabilities before the agent performs their action. As do HK, I assume for simplicity that all the causal models appearing in 𝒦 have the same signature (i.e., the same exogenous and endogenous variables). We define an epistemic state of an agent to consist of a pair ℰ = (Pr, 𝒦), and define a responsibility setting (M,u⃗,ℰ) as the combination of a causal setting and an epistemic state. § THE BVH AND HK DEFINITIONS BvH <cit.> work within a game-theoretic framework and do not use causal models, so in order to compare their approach to mine we need to first translate it into the language of causal models. I do not delve into the details but rather offer a rough sketch of such a translation. Similar to causal models, BvH represent the agents' influence on the outcome O by way of a function. Yet instead of letting some endogenous variables A⃗ represent the actions of agents directly, they use variables to represent the strategies that each agent can adopt to guide their actions. Aside from that, the main difference between the two formalisms is that theirs is unable to represent indirect causal relations.[As I said, this is a rough sketch. Technically, one should distinguish between games in normal form, which is the form considered by BvH, and games in extensive form, from which the normal form games have been derived. Games in extensive form do allow for indirect relations, and thus there might be a way of representing indirect causal relations in game theory after all.] In general, the equations of a causal model allow for an unlimited number of intermediate ancestors between variables A⃗ and an outcome variable O, so that causal influence from an agent's action can be passed on along intermediate variables to the outcome variable. BvH's outcome function on the other hand abstracts away from any mediated form of causal influence, so that the strategies causally determine the outcome directly.
As a result, their games are to be interpreted as a single-equation causal model of the form O=f_O(A⃗). (Since the variables A⃗ are determined directly by the context u⃗, I adopt the standard practice of leaving their equations implicit.) BvH use the famous NESS definition of causation that was proposed by Wright <cit.> – and also formed the inspiration for the Halpern & Pearl (HP) definitions <cit.> – which states that causes are Necessary Elements of a Sufficient Set for the effect. Taking into account the previous remarks, it is more accurate to speak of the direct NESS definition. I here present my recent formalization of both the direct and the indirect NESS definitions using causal models <cit.>. First we need to define causal sufficiency. As do BvH, I take it to mean that a set guarantees the effect regardless of the values of the variables outside of the set. We say that X⃗=x⃗ is sufficient for Y=y w.r.t. (M,u⃗) if Y ∉ X⃗ and for all values z⃗ ∈ ℛ(Z⃗), where Z⃗ = 𝒱 - (X⃗ ∪ {Y}), it holds that (M,u⃗) ⊨ [X⃗ ← x⃗, Z⃗ ← z⃗] Y=y. Direct NESS-causation is then defined by stating that: * the candidate cause and the effect actually occurred; * the candidate cause is a member of a sufficient set; * and it is necessary for the set to be sufficient. X=x directly NESS-causes Y=y w.r.t. (M,u⃗) if there exists a W⃗=w⃗ so that the following conditions hold: DN1. (M,u⃗) ⊨ X = x ∧ W⃗ = w⃗ ∧ Y=y. DN2. X=x ∧ W⃗ = w⃗ is sufficient for Y=y w.r.t. (M,u⃗). DN3. W⃗ = w⃗ is not sufficient for Y=y w.r.t. (M,u⃗). We can now formulate the counterpart of the BvH definition using causal models by filling in their conditions into our Responsibility Schema. An agent who performs A=a is responsible for outcome O=o w.r.t. a responsibility setting (M, u⃗, ℰ) if: * (Causal Condition) A=a directly NESS-causes O=o w.r.t. (M, u⃗). * (Epistemic Condition) There exists a' ∈ ℛ(A) so that Pr(A=a directly NESS-causes O=o) > Pr(A=a' directly NESS-causes O=o).[Of course these probabilities have to be read as being conditioned on the corresponding action, i.e., as “the agent's probability that the action would cause the outcome if it were performed”.] Informally, the BvH definition of responsibility requires that an agent's action directly NESS-caused the outcome, and that the agent believes they failed to minimize the probability of their action causing the outcome. The following example (taken from BvH) illustrates their definition. Two assassins (Assassin_1 and Assassin_2), in place as snipers, shoot and kill Victim, with each of the bullets fatally piercing Victim's heart at exactly the same moment. Although neither of them could have prevented the outcome, each of them is clearly responsible for Victim's death. Let V stand for Victim's death (V=1) or survival (V=0), and let A_1, A_2 stand for the actions of the two assassins, where A_i=1 if and only if Assassin_i shoots. We can then capture this example with the single equation V = A_1 ∨ A_2, and a context u⃗ such that A_1=1 and A_2=1. Does the BvH definition (Definition <ref>) succeed in establishing that each of the assassins is responsible for Victim's death? To find out, we first need to evaluate whether A_1=1 (resp. A_2=1) directly NESS-causes V=1 (Def. <ref>). We can choose W⃗=∅ to get the desired result, as follows. DN1 is fulfilled because A_1=1 and V=1 actually happened. DN2 is established by verifying that the following two claims hold: (M,u⃗) ⊨ [A_1 ← 1, A_2 ← 1]V=1 and (M,u⃗) ⊨ [A_1 ← 1, A_2 ← 0]V=1.
Since W⃗=∅, verifying DN3 is easy: we need to find a single intervention on the variables other than V such that they result in V=0. The intervention [A_1 ← 0, A_2 ← 0] does the job. To evaluate the Epistemic Condition requires making some assumptions about the assassins' probability attributions. It sounds reasonable to assume that, without evidence to the contrary, each assassin attributed a higher probability to them shooting causing the outcome than them not shooting causing the outcome. Therefore the Epistemic Condition is also fulfilled for each assassin, and thus the BvH definition arrives at the right verdict for this example. We continue with the approach pursued by Halpern & Kleiman-Weiner (HK) <cit.>, which uses the modified Halpern & Pearl definition of causation <cit.>: X⃗=x⃗ HP-causes Y=y w.r.t. (M,u⃗) if there exists a W⃗=w⃗ so that the following conditions hold: AC1. (M,u⃗) ⊨ X⃗ = x⃗ ∧ W⃗ = w⃗ ∧ Y=y. AC2. There is a setting x⃗' such that (M,u⃗) ⊨ [X⃗ ← x⃗', W⃗ ← w⃗] Y ≠ y. AC3. X⃗ is minimal; there is no strict subset X⃗' of X⃗ such that X⃗' = x⃗” satisfies AC2, where x⃗” is the restriction of x⃗ to the variables in X⃗'. Note that, contrary to the direct NESS definition, the HP definition allows for conjunctive causes X⃗=x⃗, instead of merely atomic causes X=x. The minimality condition (AC3) is there to prevent irrelevant events from being added to such conjuncts. We can retrieve a definition of causation for atomic events by simply considering any conjunct X=x that appears in an HP-cause X⃗=x⃗ to be a cause as well, which is indeed what Halpern suggests himself repeatedly <cit.>. The heart of the HP definition is AC2: it states that the outcome Y=y counterfactually depends on the cause X⃗=x⃗ given that we intervene to hold fixed a suitably chosen set of variables W⃗ at their actual values w⃗. To see how this definition works, let us apply it to Example <ref>. First we try substituting X⃗=x⃗ with A_1=1. Alas, this will not allow us to get A_1=1 as a cause of V=1. We start with choosing W⃗=∅, and we get that (M,u⃗) ⊨ [A_1 ← 0] V=1. The reason is that u⃗ encodes the actual context, in which A_2=1, and thus also V=1. Yet what is required for AC2 would be (M,u⃗) ⊨ [A_1 ← 0] V=0. The only other choice for W⃗=w⃗ would be A_2=1, and that does not work either: (M,u⃗) ⊨ [A_1 ← 0, A_2 ← 1] V=1. Second we try A_1=1 ∧ A_2=1. If this works, then AC3 is satisfied due to the fact that neither of the conjuncts themselves satisfied AC2. W⃗ has to be ∅, since there are no other variables. Thus what remains is to find counterfactual values for A_1 and A_2. As they are binary, the only option is to consider A_1 = 0 ∧ A_2 = 0. Clearly, for this choice AC2 is satisfied, as (M,u⃗) ⊨ [A_1 ← 0, A_2 ← 0] V=0. Therefore A_1=1 is an HP-cause of V=1. We can now formulate a definition of responsibility that is closely inspired by HK. An agent who performs A=a is responsible for outcome O=o w.r.t. a responsibility setting (M, u⃗, ℰ) if: * (Causal Condition) A=a HP-causes O=o w.r.t. (M, u⃗). * (Epistemic Condition) There exists a' ∈ ℛ(A) so that Pr(O=o | [A ← a]) > Pr(O=o | [A ← a']). In addition to disagreeing about the definition of causation, the HK definition also disagrees with the BvH definition about the epistemic condition: rather than requiring that the agent failed to minimize the probability of causing the outcome, the HK definition focuses on the agent failing to minimize the probability of the outcome simpliciter.
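To make the mechanics of these checks concrete, the following Python sketch (my own illustration, not part of the formal development) encodes the two-assassins model and performs a brute-force search for a witness set W⃗ satisfying AC2. It only tests the counterfactual setting in which every candidate cause variable is flipped, which suffices for this binary example, and it does not check AC1 or AC3.

```python
from itertools import chain, combinations

# The two-assassins model: the context fixes A1 and A2, and the single
# structural equation is V = A1 OR A2 (Victim dies if either assassin shoots).
ENDOGENOUS = ["A1", "A2", "V"]

def solve(context, interventions):
    """Values of all endogenous variables in the given context, after applying
    the interventions (a dict such as {'A1': 0})."""
    vals = dict(context)
    vals.update(interventions)
    if "V" not in interventions:        # only V has a nontrivial equation here
        vals["V"] = int(vals["A1"] or vals["A2"])
    return vals

def powerset(items):
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def ac2_holds(context, cause, effect_var, effect_val):
    """Brute-force AC2 check: is there a witness set W, held at its actual
    values, such that flipping every cause variable changes the effect?
    (Flipping all cause variables at once is enough for binary variables here.)"""
    actual = solve(context, {})
    others = [v for v in ENDOGENOUS if v not in cause and v != effect_var]
    for W in powerset(others):
        witness = {w: actual[w] for w in W}
        flipped = {v: 1 - val for v, val in cause.items()}
        if solve(context, {**flipped, **witness})[effect_var] != effect_val:
            return True
    return False

context = {"A1": 1, "A2": 1}
print(ac2_holds(context, {"A1": 1}, "V", 1))           # False: A1=1 alone fails AC2
print(ac2_holds(context, {"A1": 1, "A2": 1}, "V", 1))  # True: the conjunction satisfies AC2
```

Running the sketch reproduces the analysis above: A_1=1 on its own fails AC2 for every choice of witness, whereas the conjunction A_1=1 ∧ A_2=1 satisfies it.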
Note that both HK and BvH's epistemic condition satisfy our Responsibility Schema: an agent who believes that they failed to minimize a probability that they could have minimized, thereby also believes that they could have avoided satisfying the respective epistemic condition. Given that the epistemic condition is a necessary condition for being responsible, they also believe that they could have avoided being responsible for the actual outcome.Let us apply the HK definition to Example <ref>. We already established that each A_i=1 is an HP-cause of V=1, so the Causal Condition is met. Further, as long as each assassinattributes a strictly positive probability that the other assassin may fail to shoot, we get that (V=1 | [A_i1]) > (V=1 | [A_i0]), so that the Epistemic Condition is satisfied as well. (What if the assassinsare certain the other assassin will shoot? We come back to this in Section <ref>.) Therefore the HK definition also arrives at the correct verdict for this example.§ THE CAUSAL CONDITION Before discussing the problems with NESS- and HP-causation, I present CNESS-causation <cit.>. As a first step, we define NESS-causation as the transitive closure of direct NESS-causation, which is how it was conceived by Wright <cit.>. In addition, we pay explicit attention to the path along which the causal influence is transmitted. X=x NESS-causes Y=y along a path p w.r.t. (M,u⃗) if the values of the variables in p form a path of direct-NESS causes from X=x to Y=y. The Counterfactual NESS definition (CNESS) takes the NESS definition and adds a subtle counterfactual difference-making condition: there should be a counterfactual value so that it would not NESS-cause the outcome along the same path as the actual value, nor along any subpath. X=x CNESS-causes Y=y w.r.t. (M,u⃗) if X=x NESS-causes Y=y along some path p w.r.t. (M,u⃗) and there exists a x' ∈(X) such that X=x' does not NESS-cause Y=y along any subpath p' ⊆ p w.r.t. (M_Xx',u⃗).With all the definitions of causation at hand, I now motivate my choice for the CNESS definition by going over some well-chosen examples. We start with a case of Late Preemption.[Late Preemption] We return to our two assassins, but this time Assassin_1 is slightly faster, so that their bullet kills Victim, who collapses and thereby dodges Assassin_2's bullet.In this case Assassin_2 obviously did not cause Victim's death, and is thus not responsible for the outcome (despite the fact that their act itself is of course still blameworthy). BvH only allow variables for strategies and are thus unable to capture this result, since the asymmetry between both assassins is not a matter of strategy. As illustrated at length by Halpern <cit.>, using causal models this poses no problem. The equation V = BH_1BH_2 expresses the fact that either bullet hitting Victim would be fatal; BH_1= A_1 and BH_2=A_2 ¬ BH_1 captures the asymmetry between both assassins: Assassin_2's bullet only hits Victim if Assassin_1's bullet does not.In the context at hand, we have that A_1=A_2=BH_1=V=1, and BH_2=0. We now go through the various definitions to verify that A_1=1 NESS-causes, CNESS-causes, and HP-causes V=1, whereas A_1=1 does not directly NESS-cause V=1, thereby showing that the direct NESS definition is too simplistic.We start by verifying that A_1=1 does not directly NESS-cause V=1. By itself A_1=1 does not form a sufficient set for V=1, for setting both of the BH variables to 0 guarantees that the Victim survives: (M,u⃗)[A_11, BH_10, BH_20]V=0. 
In fact, in this context, any sufficient set for V=1 has to contain BH_1=1, yet BH_1 is sufficient for V=1 all by itself. Thus A_1=1 is not a necessary member of any sufficient set for V=1. Still, A_1=1 NESS-causes V=1 along p={A_1,BH_1,V}, because A_1=1 directly NESS-causes BH_1=1 and BH_1=1 directly NESS-causes V=1. To establish CNESS-causation requires having a look at the counterfactual setting (M_A_10,u⃗). In this setting we get that A_1=0, A_2=1, BH_1=0, and thus BH_2=1 (as well as V=1). (Informally: if Assassin_1 had not shot, thenAssassin_2's bullet would have hit and killed Victim.) Here A_1=0 directly NESS-causes BH_1=0, BH_1=0 directly NESS-causes BH_2=1 (since it forms a sufficient set together with A_2=1 and A_2=1 does not suffice on its own), and BH_2=1 directly NESS-causes V=1. Therefore A_1=0 NESS-causes V_1=1 along p^*={A_1, BH_1, BH_2, V}. (Take note of this surprising finding. We come back to it in Example <ref>.) Since p^* ⊈p, we get that A_1=1 CNESS-causes V=1 (whereas A_1=0 does not CNESS-cause V=1 in the counterfactual setting, since p ⊆ p^*). To see that A_1=1 HP-causes V=1, it suffices to note that (M,u⃗)BH_2=0 and (M,u⃗)[A_10, BH_20]V=0. Lastly, I leave it to the reader to verify that A_2=1 is not an HP-cause of V=1, and nor is it a direct NESS-cause of anything. Because of the latter, A_2=1 is not a NESS-cause or a CNESS-cause of anything either.Modifying the BvH definition so that it uses NESS-causation instead of direct NESS-causation is not a solution, for the NESS definition itself is problematic, as the following example shows. (In the appendix I discuss one more example, a so-called “Frankfurt-case”, to show that BvH's reliance on strategies as opposed to events forms another source of problems.)We revisit the counterfactual setting of Example <ref> in which Assassin_1 does not shoot, so that Victim is killed by Assassin_2's shot. We already established for this scenario that A_1=0 NESS-causes V=1. Thus if we use the NESS definition, we get the absurd result that Assassin_1 failing to shoot causes Victim to die. If we then supplement the example so that also BvH's Epistemic Condition is fulfilled, we get that Assassin_1 comes out as being responsible for Victim's death. (Imagine, for instance, that they mistakenly believe to be holding a flare gun that could sound a warning shot so that Victim ducks for cover to avoid Assassin_2's bullet.) We already established that A_1=0 does not CNESS-cause V=1, the reader may verify that the same holds for the HP-definition.This leaves CNESS-causation and HP-causation as candidates for the Causal Condition. I use Halpern & Pearl's own example to argue against HP-causation <cit.>.[Loader] “Suppose that a prisoner dies either if A loads B's gun and B shoots, or if C loads and shoots his gun. Taking D to represent the prisoner's death and making the obvious assumptions about the meaning of the variables, we have that D=1 iff (A=1B=1)C=1. Suppose that in the actual context u⃗, A loads B's gun, B does not shoot, but C does load and shoot his gun, so that the prisoner dies. Clearly C = 1 is a cause of D = 1. We would not want to say that A =1 is a cause of D =1, given that B did not shoot (i.e., given that B = 0).” [emphasis added] I agree with Halpern and Pearl. A fortiori, A is not responsible for the prisoner's death, even if A only loaded the gun because he was convinced that B would shoot.Now consider the following variant. In the original example, C's shot is determined directly by the context. 
Imagine we add a little twist, so that C would only fire his gun if B did not, i.e., the equation for C is C=¬ B. The above reasoning regarding A still applies, and therefore I believe it is a mistake to all of a sudden consider A=1 a cause of D=1. Yet A=1 now is an HP-cause of D=1 (as it appears in the HP-cause A=1B=0), and thus A would be considered responsible for the prisoner's death. The CNESS definition avoids this result (as does the NESS definition): the only candidate sufficient set for D=1 of which A=1 could be a necessary part, is { A=1,B=1}. So the mere fact that B=0 in both versions of the example implies that A=1 is not a NESS cause of D=1 in either. I leave a second counterexample to the HP definition for the appendix and refer the reader to <cit.> for a detailed critical examination of the HP definition. The alternative definition I there presented is in fact very similar to my CNESS definition, although the precise relation is the subject of further investigation.[I tentatively conjecture that the CNESS definition implies my other definition, and not vice versa.]This leads me to suggest adopting the CNESS definition for the Causal Condition. § THE EPISTEMIC CONDITION Recall that the difference between HK and BvH's Epistemic Conditions lies in whether an action minimizes the probability of the outcome occurring (HK) or of it causing the outcome (BvH).Given that one cannot cause an outcome unless the outcome actually occurs, and that vice versa, in many cases the best way to make sure that an outcome occurs is by causing it, both of these conditions often go hand in hand. However, as the following example illustrates, they do not always do so, and when they do not the appeal of HK's condition is stronger.BombingA bomb (B) is connected to three detonators (D_1, D_2, and D_3) by two switches (S_1 and S_2). D_1 is functional if only S_1 is on, D_2 is functional if only S_2 is on, and D_3 is functional whenever S_1 is on. The equations are thus as follows: B=D_1D_2D_3, D_1=S_1 ¬ S_2, D_2=S_2 ¬ S_1, and D_3=S_1. Assassin_2 (reasonably) assigns a probability of 0.6 to Assassin_1 turning on S_1. He decides to turn on S_2, thereby guaranteeing that the bomb will explode. Assassin_1 decides not to turn on S_1, so that the bomb explodes due to the functioning of D_2. Here we certainly would want to say that Assassin_2 is responsible for the explosion, and the reason for this seems to be precisely that he knowingly increased the probability of the bomb going off (from 0.6 if S_2=0 to 1 now that S_2=1). There is also no doubt that Assassin_2's action caused the explosion: if he had turned S_2 off, the bomb would not have exploded. However, Assassin_2 did act so as to minimize the probability that his act would cause the explosion, regardless of whether one chooses NESS-, HP-, or CNESS-causation. Concretely, for all three definitions of causation, Assassin_2's probability that S_2=1 would cause B=1 is 0.4, whereas his probability that S_2=0 would cause B=1 is 0.6. (The details are worked out in the appendix.) Note that in case S_1=1, then S_2=0 would result in the outcome being overdetermined, and thus although the latter action would also be a cause of the outcome, it does nothing to contribute to the probability of the outcome occurring. 
This is what explains why the two conditions can come apart, and why I take the general moral of this story to be that increasing the probability of the outcome trumps increasing the probability of causing the outcome.However, it does not follow that the probability of causation is irrelevant, but only that it should fulfill a secondary role. Consider again Example <ref>, and assume that Assassin_1 believes that Assassin_2 will shoot, and thus believes that Victim is facing certain death. (If that sounds too unrealistic, imagine Assassin_1 is one of ten members of a highly trained firing squad that is executing Victim.)Thus the action of Assassin_1 had no effect on the probability of the outcome, and would thus not be responsible for Victim's death according to HK's definition. If Assassin_2 has a similar belief, then we end up with nobody being responsible. I take this to be an unacceptable result. (Fischer & Ravizza reach the same conclusion when likewise discussing a case (Missile 2) in which an agent knows that the outcome will ensue no matter what they do, and yet the agent is still responsible for the outcome by choosing to cause it <cit.>.)The lesson I draw from this is that if one knowingly has the opportunity to reduce the probability of causation without thereby increasing the probability of the outcome, then an agent is responsible if she fails to do so. Therefore I propose the following definition of moral responsibility. An agent who performs A=a is responsible for O=o w.r.t. a responsibility setting (M, u⃗, ) if: * (Causal Condition) A=a CNESS-causes O=o w.r.t. (M, u⃗). * (Epistemic Condition)There exists a' ∈(A) so that one of the following holds: * (O=o | [Aa]) > (O=o | [Aa']) * (O=o | [Aa]) = (O=o | [Aa']) and (A=aCNESS-causesO=o) > (A=a'CNESS-causesO=o). § DEGREE OF RESPONSIBILITY My binary definition of responsibility can be complemented with a definition of the degree of responsibility in order to capture the widely shared sense that responsibility (as well as blame and praise) is a graded notion. Both BvH's and HK's Epistemic Conditions naturally suggest such a definition, and so does my combined condition. The obvious graded counterpart of HK's condition is to simply look at the causal effect <cit.>, which in the context of causal strength is referred to as the Eells measure of causal strength of A=a relative to A=a': CS_e(o,a,a')=(O=o | [Aa]) - (O=o | [Aa']) <cit.>. Sprenger <cit.> argues for accepting the Eells measure as a general measure of causal strength, which is in line with the priority that my Epistemic Condition attributes to it. Moreover, when restricted to positive values, this is in fact HK's definition of the degree of blameworthiness. Likewise, the obvious counterpart of BvH's condition is to look at the increase of probability in causing the outcome. Thus I also define the actual causation measure of causal strength as[Surprisingly, to my knowledge this rather obvious measure of causal strength has been overlooked so far in the literature. (For any definition of causation of course, not just CNESS.)] CS_ac(o,a,a')=(A=aCNESS-causesO=o | [Aa]) - (A=a'CNESS-causesO=o | [Aa']). Taking into account that our Epistemic Condition is a mixture of those of BvH and HK, I suggest the following definition, wherethe value of α expresses the relative importance of both measures. The degree of responsibility d for O=o of an agent who performs A=aw.r.t. 
a responsibility setting (M, u⃗, ) is 0 in case the agent is not responsible, otherwise let S=_a^* ∈(A)(O=o | [Aa^*]), and let a”=_a' ∈ S(A=a'CNESS-causesO=o | [Aa']),then d=CS_e(o,a,a”) + α· max(0,CS_ac(o,a,a”)). Informally, this measure works as follows. Among all actions that minimize the probability of the outcome, we take one that minimizes the probability of causing the outcome, and then take a weighted sum of both causal strength measures for that action (where the second measure is ignored if it is negative). This captures the idea that in order to avoid responsibility, the agent should choose an action that makes the outcome as unlikely as possible, and then further select their action so that it makes causing the outcome as unlikely as possible. The following example illustrates this definition. Imagine again our scenario from Example <ref>, but with the following change: Assassin_1 is known to be a reliable assassin, whereas Assassin_2 is known to have second doubts and almost never shoots. In other words, it is reasonable for Assassin_2 to expect that Assassin_1 will shoot, and it is reasonable for Assassin_1 to expect that Assassin_2 will not shoot. On this particular occasion, both assassins shoot and kill victim.Although both assassins are responsible according to my definition, it is easy to see that Assassin_1 is responsible to a higher degree: the measures of actual causation are identical for both and so are their respective probabilities of the outcome occurring given that they shoot (namely 1), but Assassin_1's probability of the outcome occurring given that they do not shoot is far lower, and thus[The superscripts Ass_i indicate that we are using each agent's subjective probabilities to assess their degree of responsibility.] CS^Ass_1_e(V=1,A_1=1,A_1=0) > CS_e^Ass_2(V=1,A_2=1,A_2=0).Interestingly, recent studies offer empirical confirmation that the agent's epistemic state does indeed impact people's judgments in precisely this way: in a disjunctive scenario (like ours), an agent who performs an action that is typical (for them) is considered to be more responsible than an agent who acts atypically <cit.>. The authors contrast this disjunctive scenario, which they have trouble explaining, with a conjunctive one in which both agents' actions are necessary for the outcome to occur, which their account explains quite well. In a conjunctive scenario (in other words, if the equation were V=A_1A_2), an agent who performs an action that is atypical is considered to be more responsible than an agent who acts typically, flipping the judgments compared to the disjunctive scenario. That is also the verdict of my degree of blameworthiness: in this case, the atypical agent can reasonably expect the outcome to depend on them performing the action whereas the typical agent can reasonably expect that their action has little impact, which translates into a larger measure of causal strength (both CS_e and CS_ac) for the former. So in contrast to the account of Kirfel and Lagnado <cit.>, my proposal applies equally to both scenarios and can thus be seen as a formal extension of their work. § CONCLUSION AND FUTURE WORK Based on a comparison with the work of BvH and HK, I have offered a novel formal definition of moral responsibility that is particularly suited for AI systems by filling in the causal and the epistemic conditions. 
I used contrasting examples to argue in favor of the Counterfactual NESS definition of causation over the NESS and the HP definition, and in favor of a nuanced epistemic condition that combines the two conditions of BvH and HK. I connected this work to measures of causal strength to define a degree of responsibility. This quantified approach can be further enhanced by also taking into account the robustness of causation, which recent research suggests plays a role in responsibility judgments that is somewhat independent of causal strength <cit.>, as well as by considering the collective responsibility of groups of agents <cit.>. Lastly, as discussed, a formal definition of responsibility is a necessary prerequisite for definitions of blame and praise. To develop definitions of the latter requires incorporating harm and benefit <cit.>, and possibly also intention. Therefore the current definition can be extended in several ways, which I aim to do in future work. Many thanks to Hein Duijf for helpful comments on an earlier version of this paper, as well as to the Neurips reviewers for their constructive criticism of the original submission. This research was made possible by funding from the Alexander von Humboldt Foundation.spbasic§ APPENDIX§.§ Frankfurt-Case The following is an example of a so-called “Frankfurt-case”, taken from HK. An enormous literature in philosophy is devoted to dealing with these kinds of examples, attempting to reconcile intuitions about responsibility with the counterfactual and causal features that these examples contain. Surprisingly, almost none of it uses causal models, and yet doing so reveals the causal structure to be entirely unproblematic. [Frankfurt]Imagine Jones poisons Smith, who dies. Unbeknownst to Jones, Black was observing his behavior: if Jones had not poisoned Smith, Black would have given Jones a gun and manipulated him in some way or other so that Jones would shoot Smith. Black is both a perfect observer and manipulator of Jones's behavior, and is thus guaranteed to succeed in his plans. Intuitively it is clear that Jones is responsible for Smith's death, despite the fact that he could not have prevented it. (Typical Frankfurt cases focus on responsibility for an action, as opposed to responsibility for the consequence of an action, and therefore scenarios are normally formulated such that Black manipulates Jones to perform the same action. Except for the shift from the action to the consequence though, those cases are structurally isomorphic.)[This analysis can just as easily be applied to these more typical Frankfurt cases. Still, for those who are sceptical that proponents of Frankfurt cases are equally comfortable as I am with moving from actions to consequences, I point out that Fischer & Ravizza apply this shift in exactly the same manner as I do when discussing responsibility for consequences <cit.>.]The Epistemic Condition of both BvH and HK is obviously fulfilled, for Jones believes that Smith's death is completely dependent on his poisoning. We consider the following equations to assess the causal condition: SD=JPJS to capture the fact that Smith dies (SD) if either Jones shoots (JS) or poisons (JP) him, JS=BM to capture that Jones shoots only when Black hands him a gun and manipulates him (BM), and finally BM=¬ JP to capture that Black's action depends on Jones's failure to poison. 
Regardless of whether we apply the NESS definition, the CNESS definition, or the HP definition, JP=1 comes out as a cause of SD=1, and thus the Causal Condition is satisfied. (This is easy to see by observing that the structure of this example is a standard case of Early Preemption.) BvH claim that their account can handle Frankfurt-cases like this, but that is a mistake. Recall that their variables represent the agents' strategies rather than their actions, and that we are limited to using a single equation. The outcome function they use when discussing a Frankfurt-case is equivalent to the equation SD=JPB, where B represents Black adopting his preferred strategy. Therefore on their account both Jones and Black come out as causes of Smith's death, which is not a sensible result.BvH admit that their NESS definition is unable to handle conditional strategies like that of Black, but contend that since we are here focussing on Jones this is not a problem. Obviously simply stating that one should only focus on the sensible results of one's theory is not a satisfactory way of defending it... (This example also highlights a more philosophical problem with their approach: it is not at all clear what it means for a strategy to be a cause. The broad consensus is that causal relata are either events/omissions or properties of events, whereas conditional strategies are neither.) §.§ Counterexample to the HP-definition We here consider a second counterexample to the HP definition that was suggested in <cit.>. The example is of particular interest as it was presented precisely within the context of the relation between causation and moral responsibility.We have equations Y=XD and X=D, and we consider a context such that D=1. This looks very much like a standard case of overdetermination in which X=1 and D=1 are both overdetermining causes. Yet X=1 is not an HP-cause of Y=1 (and it is a CNESS-cause). The reason for this is that Y=1 depends counterfactually on D=1 by itself, whereas it does not depend on X=1 by itself and nor does it when we take D=1 as our witness W⃗=w⃗. Rosenberg & Glymour <cit.> argue that this result shows the HP definition cannot offer a basis for moral responsibility, by offering the following scenario to go along with these equations:An obedient gang is ordered by its leader to join him in murdering someone, and does so, all of them shooting the victim at the same time, or all of them together pushing the plunger connected to a bomb. The action of any one of the gang would suffice for the victim's death. If responsibility implies causality, whom among them is responsible? ... Halpern's theory says the gang leader and only the gang leader is a cause of the victim's death. This is a morally intolerable result; absent a plausible general principle severing responsibility from causation, any theory that yields such a result should be rejected. §.§ Bombing We now go through the details for the Bombing example. (Ex. <ref>) We need to consider the following four scenarios: * S_2=1 and S_1=0* S_2=1 and S_1=1* S_2=0 and S_1=0* S_2=0 and S_1=1 We first go through the details for CNESS-causation. In scenario 1 we have that S_1=D_1=D_3=0 and S_2=D_2=B=1. Here {S_2=1,S_1=0} is sufficient for D_2, whereas {S_1=0} is not. Therefore S_2=1 directly NESS-causes D_2=1. Clearly also D_2=1 directly NESS-causes B=1, and thus S_2=1 NESS-causes B=1 along {S_2,D_2,B}. What about the counterfactual setting (M_S_20,u⃗)? That corresponds to scenario 3. 
There, the bomb doesn't even explode (so B=0), and thus there are no causes of B=1. We conclude that in scenario 1 S_2=1 CNESS-causes B=1. In scenario 2 we have that S_1=S_2=D_3=B=1 and D_1=D_2=0. In this scenario B=1 is directly NESS-caused only by D_3=1. Since S_2=1 does not directly NESS-cause D_3=1, it is not a NESS-cause of B=1.In scenario 4 we have that S_1=D_1=D_3=B=1 and S_2=D_2=0. Here {S_2=0,S_1=1} is sufficient for D_1, whereas {S_1=1} is not. Therefore S_2=0 directly NESS-causes D_1=1. Clearly also D_1=1 directly NESS-causes B=1, and thus S_2=0 NESS-causes B=1 along {S_2,D_1,B}. What about the counterfactual setting (M_S_21,u⃗)? That corresponds to scenario 2, in which S_2=1 does not NESS-cause B=1. So S_2=0 CNESS-causes B=1 in scenario 4.As a result, if Assassin_2 chooses S_2=1, the probability of CNESS-causing B=1 is the probability that S_1=0, which is 0.4. By contrast, if Assassin_2 chooses S_2=0, the probability of CNESS-causing B=1 is the probability that S_1=1, which is 0.6.NESS-causation for each scenario is already discussed in the above, so we move on to consider HP-causation. In scenario 1 we have counterfactual dependence of B=1 on S_2=1, and it is well-known that this suffices for HP-causation (as well as for CNESS-causation, by the way <cit.>). In scenario 2, note that D_3 suffices for B=1, and thus satisfying AC2 is possible only when either D_3=1 or S_1=1 is also part of the candidate cause X⃗=x⃗. However, B=1 counterfactually depends on D_3=1, meaning that D_3=1 is a cause all by itself. Thus {S_2=1,D_3=1} is not minimal, and because of AC3 this means that it is not a cause. That leaves {S_2=1,S_1=1}. But this is not minimal either, for S_1=1 is a cause all by itself: one can take W⃗={D_2} as a witness to get B=0 when S_1 is set to 0. Therefore S_2=1 is not part of any cause of B=1. Since B=0 in scenario 3, S_2=0 does not HP-cause B=1 there either, leaving scenario 4. As with scenario 2, the candidate cause will have to include D_3=1 or S_1=1. Contrary to scenario 2 though, D_3=1 is no longer a cause by itself, since D_1=1 holds, and will remain to hold also when we set D_3 to 0. Since B=1 counterfactually depends on {S_2=0, D_3=1}, we get that each of them HP-causes B=1.
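The probabilities quoted in this appendix can also be checked with a short calculation. The following Python sketch (again my own illustration, not part of the paper) evaluates the structural equations of the Bombing example under both interventions on S_2; the per-scenario CNESS verdicts are taken from the hand analysis above rather than computed.

```python
# Structural equations of the Bombing example: the context fixes S1, the agent
# intervenes on S2, and the detonators and bomb follow the equations.
def bomb(s1, s2):
    d1 = int(s1 and not s2)
    d2 = int(s2 and not s1)
    d3 = int(s1)
    return int(d1 or d2 or d3)

P_S1 = {0: 0.4, 1: 0.6}   # Assassin_2's subjective probability distribution over S1

# Probability of the outcome under each choice of S2 (HK's epistemic condition).
for s2 in (0, 1):
    p_outcome = sum(p * bomb(s1, s2) for s1, p in P_S1.items())
    print(f"Pr(B=1 | S2<-{s2}) = {p_outcome:.1f}")
# Pr(B=1 | S2<-1) = 1.0 and Pr(B=1 | S2<-0) = 0.6: turning the switch on raises
# the probability of the explosion, so HK's condition singles out S2=1.

# Probability of *causing* the outcome (BvH's condition), using the per-scenario
# CNESS verdicts derived by hand above: S2=1 causes B=1 only when S1=0, and
# S2=0 causes B=1 only when S1=1.
causes = {(1, 0): True, (1, 1): False, (0, 0): False, (0, 1): True}  # (s2, s1) -> verdict
for s2 in (0, 1):
    p_cause = sum(p for s1, p in P_S1.items() if causes[(s2, s1)])
    print(f"Pr(S2={s2} CNESS-causes B=1) = {p_cause:.1f}")
# 0.4 for S2=1 and 0.6 for S2=0: Assassin_2 minimized the probability of causing
# the outcome while maximizing the probability of the outcome itself.
```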
[email protected] [email protected] ^1School of Physics and Optoelectronic Engineering, Foshan University, Foshan 528225, China ^2Physics Department and Solid-State Institute, Technion, Haifa 32000, Israel ^3Department of Physical Electronics, School of Electrical Engineering, Faculty of Engineering, Tel Aviv University, Tel Aviv 69978, Israel ^4Instituto de Alta Investigación, Universidad de Tarapacá, Casilla 7D, Arica, Chile ^5Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology, Foshan University, Foshan 528225, China We construct strongly anisotropic quantum droplets with embedded vorticity in the 3D space, with mutually perpendicular vortex axis and polarization of the atomic magnetic moments. Stability of these anisotropic vortex quantum droplets (AVQDs) is verified by means of systematic simulations. Their stability area is identified in the parametric plane of the total atom number and scattering length of contact interactions. The application of torque perpendicular to the vorticity axis gives rise to robust intrinsic oscillations or rotation of the AVQDs. Collisions between slowly and fast moving AVQDs give rise, respectively, to the elastic outcome or merger. Finally, we construct stable vortex-antivortex-vortex bound states and find their stability regions in the parameter space. Strongly anisotropic vortices in dipolar quantum droplets Yongyao Li^1,5 2023-10-25 =========================================================Quantum droplets (QDs), representing a novel form of quantum matter, have drawn much interest in recent years <cit.> . These are droplets of an ultra-dilute superfluid maintained by the balance between the mean-field (MF) and beyond-MF effects <cit.>, the latter one being the Lee-Huang-Yang correction <cit.> to the MF nonlinearity induced by quantum fluctuations. QDs have been experimentally observed in dipolar Bose-Einstein condensates (BECs) <cit.>, as well as in binary BECs of nonmagnetic atoms, with quasi-2D <cit.> and 3D <cit.>. Prior to that, experimental realization of self-trapped BECs in free space solely through MF effect was impossible due to the critical or supercritical collapse instability in the 2D and 3D settings, respectively <cit.> (nevertheless, weakly unstable quasi-2D Townes solitons were experimentally created in a binary BEC <cit.>). QDs in nonmagnetic condensates appear in the isotropic form, whereas their shapes are anisotropic in dipolar BECs <cit.>.The remarkable properties of QDs have been driving extensive work on this topic, such as Monte-Carlo simulations <cit.>, collective excitations <cit.>, supersolids <cit.>, etc. A particularly interesting direction of the studies is embedding vorticity into the self-bound droplets. It is well known that the creation of self-localized vortices in the multi-dimensional space is a challenging issue. The azimuthal instability, which is induced by the underlying attractive nonlinearity, tends to split the 2D vortex ring or 3D torus (“donut") into fragments <cit.>. This instability develops faster than the collapse driven by the cubic self-attraction. In QDs, the splitting instability may be arrested by the competition between the MF attraction and LHY self-repulsion, similar to the stabilizing effect of the cubic-quintic nonlinearity in optics models <cit.>. Stable vortex QDs with the winding numbers (topological charge) up to 5 and 2 (at least) were predicted in 2D <cit.> and 3D geometries <cit.>, respectively. 
A novel species of semi-discrete vortex QDs was predicted in arrays of one-dimensional traps <cit.>. These results indicate that the equilibrium state of the LHY-stabilized superfluid provide a versatile platform for the creation of the stable self-bound vortices.The above-mentioned findings were produced for the binary BECs of nonmagnetic atoms. For the dipolar QDs, isotropic vortex modes have been reported, with the vortex axis parallel to the polarization of atomic magnetic moments, represented by “Type 1" in Fig. <ref>. This configuration is rotationally symmetric with respect to the vorticity axis, but it is unstable <cit.>. The creation ofanisotropic vortex QDs in dipolar BECs and their stability is an open problem. This problem is also relevant in studies of other nonlinear systems, as no example of such states (i.e., anisotropic vortex solitons) in free space was reported. Very recently, the prediction of a stable vortex QD in a 2D dipolar BECs system has been made <cit.>. However, this problem was not previously addressed in the full 3D geometry.In this Letter, we predict the existence of stable 3D strongly anisotropic vortex quantum droplets (3D-AVQDs) in the dipolar BEC with the magnetic dipoles polarized perpendicular to the vortex' axis, corresponding to the “Type 2" configuration in Fig. <ref>. The respective 3D LHY-amended Gross-Pitaevskii equation (GPE) is written asiħ∂/∂ tψ =-ħ ^2/2m∇ ^2ψ +g|ψ |^2ψ + κψ∫ U_dd(𝐫-𝐫^')|ψ ( 𝐫^')|^2d𝐫^'+γ |ψ |^3ψ ,where ħ and m are the Planck's constant and atomic mass, g=4πħ ^2a_s/m with a_s being the s-wave scattering length of inter-atomic collisions, is the strength of the contact nonlinearity, which may be tuned by the Feshbach resonance <cit.>. The coupling coefficient of the dipole-dipole interaction (DDI) is κ =μ _0μ ^2/4π, where μ _0 and μ are the vacuum permeability and atomic magnetic moment of the atom. The coefficient in front of the LHY term is γ =( 32ga_s^3/2/3√(π)) ( 1+3ϵ _ dd^2/2) <cit.>, where the relative DDI strength ϵ _dd≡ a_dd/a_s is determined by the dipole scattering length, a_dd=μ _0μ ^2m/12πħ <cit.>. The DDI potential is U_dd(𝐫-𝐫^' )=( 1-3cos ^2Θ) /|𝐫-𝐫^'|^3 <cit.>, where cos ^2Θ =( x-x^') ^2/| 𝐫-𝐫^'|^2.Stationary solutions with chemical potential Ω are looked for in the usual form, ψ (𝐫,t)=ϕ (𝐫)e^-iΩ t/ħ, with a stationary wave function ϕ (𝐫). GPE (<ref>) conserves the total atom number, N=∫ |ψ (𝐫)|^2dr , total energy, E=∫ d𝐫[ ħ ^2/2m|∇ψ |^2+1/2g|ψ |^4+1/2κ |ψ |^2∫ U_dd (𝐫-𝐫^')|ψ (𝐫^')|^2d 𝐫^'..+2/5γ |ψ |^5], and the vectorial momentum (here we consider quiescent modes, with zero momentum).3D-AVQD solutions with integer vorticity S can be produced in the numerical form by means of the imaginary-time-integration method <cit.>, initiated with an anisotropic input,ϕ ^(0)(x,y,z)=Ar̃^Sexp( -α _1r̃ ^2-α _2z^2+iSθ̃) ,where A and α _1,2 are positive real constants which determine widths of the input, and the deformed polar coordinates in the ( x,y) plane are {r̃,θ̃}≡{√(x^2+β ^2y^2),arctan (β y/x)} with an anisotropy factor β >1. In this work, we select parameters of the BEC of dysprosium, ^164Dy, which has a significant dipole length, with a_ dd=131a_0 (a_0 is the Bohr radius) <cit.>. The control parameters of the system are N and a_s.3D-AVQDs with S=1 can be obtained as numerical solutions of Eq. (<ref>). The stability of the 3D-AVQDs was tested by real-time simulations of perturbed evolution. The numerically found stability area for them in the (N,a_s) plane is shown in Fig. 
<ref>(a), with a typical example of a stable 3D-AVQD shown in Fig. <ref>(b). The average atomic density of this state is 140× 10^20 atoms/m^3, in agreement with the prediction of the density in Ref <cit.>. In the simulations, stable 3D-AVQDs, which populate the blue areas in Fig. <ref>(a), maintain their integrity during a sufficient long time (at least, ∼ 100 ms), which is longer than the levitation time (∼ 90ms) in the experiment <cit.>. On the other hand, the unstable 3D-AVQDs [in gray area in Fig. <ref>(a)] spontaneously transform into ground-state QDs after a few milliseconds. It is thus observed that 3D-AVQDs exist at a_s>12a_0, and they are stable at a_s>27a_0.As mentioned above, the vortex states with the vorticity axis parallel to the polarization of the dipoles [as shown schematically by “Type 1" in Fig. <ref>] are completely unstable. Because these solutions are axially isotropic symmetry, we mark them as SYM type in Fig.<ref>. The 3D-AVQD solutions obtained here are anisotropic, therefore they are marked by the ASY label. Figure <ref> displays the comparison of the total energy between the isotropic and anisotropic species of the vortex solutions. The energy of fundamental (zero-vorticity) QDs, marked by FUND, is also included, as a reference. Figures <ref>(a,b) show that the unstable SYM vortex QDs have the highest energy (which is a natural explanation for their instability), while stable ASY vortex states have a lower energy, which is almost identical to that of the fundamental QDs.For the SYM type of the vortex QDs, the void around the long axis implies the removal of a long tube filled by dipoles chiefly featuring attractive DDIs, i.e., the removal of the negative interaction energy, which causes them to have higher actual energy values, in accordance with Fig. <ref>(a,b). A typical example of the evolution for this vortex-QD species is displayed in Fig. <ref>(c1-c3), which demonstrates the instability-induced splitting. These results agree with the instability of the isotropic vortex solitons that was reported in Ref. <cit.>. On the other hand, the stability of the ASY type is feasible because the corresponding axial void removes a tube filled by dipoles chiefly featuring repulsive DDIs with the positive energy, thus producing lower actual energy values, as corroborated by Fig. <ref>(a,b). Additional analysis has demonstrated that the application of the imaginary-time-integration method to Eq. (<ref>) does not generate 3D-AVQD solutions with multiple vorticity, S≥ 2.To present systematic results for the 3D-AVQDs, we define their ellipticity 𝒜 and normalized angular momentum L̅_z:𝒜=D_S/D_L,L̅_z=∫ϕ ^∗L̂_zϕ/Nd𝐫,where D_S and D_L are, respectively, the short and long axes of the QDs, and L̂_z=iħ (y∂ _x-x∂ _y) is the operator of the z components of the angular momentum. Dependences of the chemical potential, ellipticity, and angular momentum on the number of atoms, for two different values of a_s, are produced in Fig. <ref>.In Fig. <ref>(a), the chemical potential Ω satisfies the Vakhitov-Kolokolov criterion, dΩ /dN<0, which is the well-known necessary stability condition for self-trapped modes <cit.> . A basic feature of QDs is their incompressibility. This implies that the average density of the droplets cannot exceed a maximum value <cit.>, which leads to flat-top QDs' shape. Thus, the volume of the QDs increases linearly with the growth of the number of atoms (total norm). 
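As an illustration of how the observables of Eq. (<ref>) can be extracted from a numerically computed wave function, the following Python sketch evaluates the ellipticity 𝒜 and the normalized angular momentum L̅_z on a uniform grid. It is not the code used for the reported results; in particular, measuring D_L and D_S as full widths at half maximum of the column density is only one possible convention, adopted here as an assumption.

```python
import numpy as np

def droplet_observables(phi, x, y, dx, dy, dz):
    """phi: complex 3D array phi[i, j, k] sampled at x[i], y[j], z[k]."""
    dV = dx * dy * dz
    density = np.abs(phi) ** 2
    N = density.sum() * dV                      # total atom number

    # Long and short axes from the column density integrated over z,
    # taken here as full widths at half maximum (an assumed convention).
    col = density.sum(axis=2)
    profile_x = col.max(axis=1)                 # profile along the long (x) axis
    profile_y = col.max(axis=0)                 # profile along the short (y) axis
    D_L = dx * np.count_nonzero(profile_x > 0.5 * profile_x.max())
    D_S = dy * np.count_nonzero(profile_y > 0.5 * profile_y.max())
    ellipticity = D_S / D_L

    # L_z = (1/N) * Int phi* [i hbar (y d_x - x d_y)] phi dV, in units of hbar.
    dphi_dx = np.gradient(phi, dx, axis=0)
    dphi_dy = np.gradient(phi, dy, axis=1)
    integrand = np.conj(phi) * 1j * (y[None, :, None] * dphi_dx
                                     - x[:, None, None] * dphi_dy)
    Lz = np.real(integrand.sum() * dV) / N
    return ellipticity, Lz
```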
Further, due to the strong DDI anisotropy, the increase of the volume is mostly represented by the extension along the x-axis, leading to the decrease of the ellipticity (see Eq. (<ref>)) in Fig. <ref> (b). As the internal vorticity is mainly concentrated at the center of the droplet, figure <ref>(c) shows that larger values of norm correspond to longer droplets and lower values of the angular momentum. Moreover, figures<ref>(b,c) reveal that L̅_z/ħ =2𝒜, which coincides with the relation found for strongly anisotropic 2D vortex QDs <cit.>. The shape of the 3D-AVQDs suggests a possibility to set it in rotation around an axis perpendicular to the vorticity vector. To this end, a torque was applied around the x-axis, multiplying the established 3D-AVQD by the phase factor exp [i(z/z_0)tanh (y/y_0)], i.e., adding an x -component of the angular momentum to the original z-component, cf. Ref. <cit.>. Here, z_0 and y_0 are length scales, which define the strength of the torque. Simulations reveal oscillations or rotation of the 3D-AVQDs around the x-axis, depending on values of z_0 and y_0. The weak torque, corresponding to large z_0 and y_0, induces oscillations, whose period increases with the decrease of z_0 and y_0. Divergence of the oscillation period implies a transition to the rotation, caused by a sufficiently strong torque. The rotation speed increases with the further decrease of y_0 and z_0, as the torque is made still stronger. Figure <ref>(a) displays the oscillation and rotation regions in the plane of ( z_0^-1,y_0) for N=10000 and a_s=50a_0 . The border between these dynamical regimes is fitted by y_0=Z_0^2/z_0+Y_0, with Z_0≈ 0.88 μm and Y_0≈ 0.06 μm. This relation is explained by the fact that, for |y|≲ y_0, the torque's phase, ≈ yz/( y_0z_0), is determined solely by the product y_0z_0. Periods of the oscillations and rotation are displayed, as functions of z_0^-1, by insets in the respective regions. A typical example of the stable rotation is presented in Fig. <ref>(b1-b3). The rotation picture is the same as produced by the stationary solution of Eq. (<ref>) in the rotating reference frame, which includes term ωL̂_xψ, where L̂_x=iħ (z∂ _y-y∂ _z) and ω is the rotation frequency.We have also explored results of the application of the torque around the y - and z-axes, in terms of Fig. <ref>. In the former case, the torque drives a complex dynamical regime: the prolate QD features oscillations in the ( z,x) plane, simultaneously with irregular rotation around the x axis (not around the y direction). Lastly, the application of a weak torque along the z direction initiates oscillations of the prolate vortex soliton in the ( x,y) plane, while a stronger torque leads to its splitting, the boundary between the two regimes being x_0=Y_0^2/y_0+X_0, where Y_0≈ 0.67 μm and X_0≈ -0.5 μ m, in terms of the torque's spatial scales.Another relevant dynamical problem is collisions between 3D-AVQDs set in motion by opposite kicks, ±η, applied along the x-, y- or z -direction. In particular, the collision between two identical AVQDs along the x-direction is initiated by input ψ (𝐫,t=0)=ϕ (x-x_0,y,z)e^-iη x+ϕ (x+x_0,y,z)e^iη x, where x_0 is the initial separation between them. The collision is elastic between slowly moving droplets, i.e., for smaller values of η. At larger η, inelastic collisions lead, in most cases, to merger of the droplets into a fundamental QD, while pivots of the intrinsic vortices escape from it. 
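The torque phase factor and the kicked two-droplet input introduced above are easy to impose on a precomputed stationary droplet. A minimal Python sketch is given below; it assumes that a single-droplet solution phi0 is available as an array on a uniform Cartesian grid, and that the separation x_0 is rounded to an integer number of grid steps (np.roll wraps around the box edges, which is adequate only for a well-localized droplet on a sufficiently large grid).

```python
import numpy as np

def apply_torque(phi, y, z, y0, z0):
    """Multiply a droplet by the torque phase exp[i (z/z0) tanh(y/y0)],
    i.e. apply a torque about the x-axis; y0 and z0 set its strength."""
    Y, Z = np.meshgrid(y, z, indexing="ij")          # shape (Ny, Nz)
    phase = np.exp(1j * (Z / z0) * np.tanh(Y / y0))  # independent of x
    return phi * phase[None, :, :]                   # broadcast over the x axis

def collision_input(phi0, x, x0, eta):
    """Two identical droplets separated by 2*x0 and kicked towards each other:
    psi(r, t=0) = phi0(x-x0, y, z) e^{-i eta x} + phi0(x+x0, y, z) e^{+i eta x}."""
    dx = x[1] - x[0]
    shift = int(round(x0 / dx))                      # x0 rounded to the grid step
    X = x[:, None, None]
    right = np.roll(phi0, +shift, axis=0) * np.exp(-1j * eta * X)
    left = np.roll(phi0, -shift, axis=0) * np.exp(+1j * eta * X)
    return right + left
```

In an actual run, phi0 would be the 3D-AVQD produced by the imaginary-time integration, and the resulting field would then be propagated in real time with Eq. (<ref>).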
However, in some cases, the merger creates a long-lived state with a vortex-antivortex-vortex structure. Recently, a similar outcome was demonstrated in the 2D case <cit.>.The production of the QD with the long-lived vortex-antivortex-vortex structure suggests that a bound state with the same structure can be produced as a stationary solution of Eq. (<ref>). Indeed, it is generated by imaginary-time-integration method, using the inputϕ ^(0) = ∑_+,-A_±r̃_±exp( -α _1 r̃_±^2-α _2z^2+iθ̃_±) +Ar̃exp( -α _1r̃^2-α _2z^2-iθ̃) .Here, A_±>0 and α _1,2>0 are real constants, r̃ _±≡√((x± x_0)^2+β ^2y^2), θ̃_±≡arctan[ β y/(x± x_0)], and x_0 is an appropriately chosen separation, cf. Eq. (<ref>). A typical example of such a stable QD with average density 200× 10^20 atoms/m^3 is displayed in Fig. <ref>(a). Its stability area in the plane of (N,a_s) is plotted in Fig. <ref>(c). It shows that such stable three-pivot vortex bound stated exist in the region of 30<a_s/a_0<45 and 1800<N<6400, which is embedded in the broader stability region of the regular 3D-AVQDs, cf. Fig. <ref>(a).Conclusion We have predicted stable 3D strongly anisotropic vortex quantum droplets (3D-AVQDs) in dipolar BEC, with mutually perpendicular vorticity vector and polarization of atomic magnetic moments. While isotropic vortex solitons in dipolar BEC are known to be unstable, we have identified a vast stability region of the 3D-AVQDs in the system's parameter space. Essential characteristics of the 3D-AVQDs, including the chemical potential, aspect ratio, and angular momentum, are presented as functions of the control parameters. Furthermore, we have demonstrated that the application of the torque perpendicular to the vorticity axis initiates robust intrinsic oscillations or rotation of the 3D-AVQDs. The dependence of the oscillation and rotation period on parameters of the torque have been found. Collisions between moving 3D-AVQDs have been addressed too, demonstrating elastic and inelastic outcomes. In particular, the collisions may give rise to a novel bound state of the vortex-antivortex-vortex type, which are also produced as stationary states, and their stability area is identified. These new stable 3D vortex QDs may find various applications to studies of quantum matter, including quantum communications and data-processing techniques.As an extension of the present analysis, it may be relevant to look for more complex bound states of AVQDs, and to study a two-component version of the model <cit.>.Authors appreciate a valuable discussion with Prof. Zhenya Yan, Prof. G. E. Astrakharchik, and Xizhou Qin. This work was supported by NNSFC (China) through Grants No. 12274077, 11874112, 11905032, by the Natural Science Foundation of Guangdong province through Grant No. 2021A1515010214, and 2021A1515111015, the Key Research Projects of General Colleges in Guangdong Province through grant No. 2019KZDXM001, the Research Fund of Guangdong-Hong Kong-Macao Joint Laboratory for Intelligent Micro-Nano Optoelectronic Technology through grant No.2020B1212030010. The work of B.A.M. is supported, in part, by the Israel Science Foundation through grant No. 1695/22.99 Ferrier-Barbut2018 I. Ferrier-Barbut and T. Pfau, Quantum Liquids Get Thin, Science 359, 274 (2018).Guo2021 M. Guo and T. Pfau, A new state of matter of quantum droplets, Front. Phys. 16, 32202 (2021).Luo2021 Z. Luo, W. Pang, B. Liu, Y. Li, and B. A. Malomed, A new kind form of liquid matter: Quantum droplets, Front. Phys. 16, 32201 (2021).BAM2021 B. A. 
Malomed, The family of quantum droplets keeps expanding, Front. Phys. 16, 22504 (2021).Bottcher2021 F. Böttcher, J.-N. Schmidt, J. Hertkorn, K. S. H. Ng, S. D. Graham, M. Guo, T. Langen, and T. Pfau, New States of Matter with Fine-Tuned Interactions: Quantum Droplets and Dipolar Supersolids, Rep. Prog. Phys. 84, 012403 (2021).Chomaz2023 L. Chomaz, I. Ferrier-Barbut, F. Ferlaino, B. Laburthe-Tolra, B. L. Lev, and T. Pfau, Dipolar Physics: A Review of Experiments with Magnetic Quantum Gases, Rep. Prog. Phys. 86, 026401 (2023).Ferrier-Barbut2016a I. Ferrier-Barbut, M. Schmitt, M. Wenzel, H. Kadau, and T. Pfau, Liquid Quantum Droplets of Ultracold Magnetic Atoms, J. Phys. B: At. Mol. Opt. Phys. 49, 214004 (2016).GEA2018 G. E. Astrakharchik and B. A. Malomed, Dynamics of one-dimensional quantum droplets, Phys. Rev. A 98, 013631 (2018)Skov2021 T. G. Skov, M. G. Skou, N. B. Jørgensen, and J. J. Arlt, Observation of a Lee-Huang-Yang Fluid, Phys. Rev. Lett. 126, 230404 (2021). Petrov2015 D. S. Petrov, Quantum Mechanical Stabilization of a Collapsing Bose-Bose Mixture, Phys. Rev. Lett. 115, 155302 (2015).Petrov2016 D. S. Petrov and G. E. Astrakharchik, Ultradilute Low-Dimensional Liquids, Phys. Rev. Lett. 117, 100401 (2016).LHY T. D. Lee, K. Huang, and C. N. Yang, Eigenvalues and eigenfunctions of a Bose system of hard spheres and its low-temperature properties, Phys. Rev. 106, 1135-1145 (1957).Jrgensen2018 N. B. Jørgensen, G. M. Bruun, and J. J. Arlt, Dilute Fluid Governed by Quantum Fluctuations, Phys. Rev. Lett. 121, 173403 (2018). Schmitt2016 M. Schmitt, M. Wenzel, F. Böttcher, I. Ferrier-Barbut, and T. Pfau, Self-bound droplets of a dilute magnetic quantum liquid, Nature 539, 259 (2016).Chomaz2016 L. Chomaz, S. Baier, D. Petter, M. J. Mark, F. Wä chtler, L. Santos, and F. Ferlaino, Quantum-Fluctuation-Driven Crossover from a Dilute Bose-Einstein Condensate to a Macrodroplet in a Dipolar Quantum Fluid, Phys. Rev. X 6, 041039 (2016).Cabrera2018 C. R. Cabrera, L. Tanzi, J. Sanz, B. Naylor, P. Thomas, P. Cheiney, and L. Tarruell, Quantum Liquid Droplets in a Mixture of Bose-Einstein Condensates, Science 359, 301 (2018).Leticia2 P. Cheiney, C. R. Cabrera, J. Sanz, B. Naylor, L. Tanzi, and L. Tarruell, Bright soliton to quantum droplet transition in a mixture of Bose-Einstein condensates, Phys. Rev. Lett. 120, 135301 (2018).Semeghini2018 G. Semeghini, G. Ferioli, L. Masi, C. Mazzinghi, L. Wolswijk, F. Minardi, M. Modugno, G. Modugno, M. Inguscio, and M. Fattori, Self-Bound Quantum Droplets of Atomic Mixtures in Free Space, Phys. Rev. Lett. 120, 235301 (2018).collision G. Ferioli, G. Semeghini, L. Masi, G. Giusti, G. Modugno, M. Inguscio, Albert Gallemí, A. Recati, and M. Fattori, Collisions of Self-Bound Quantum Droplets, Phys. Rev. Lett. 122, 090401 (2019).Salasnich C. D'Errico, A. Burchianti, M. Prevedelli, L. Salasnich, F. Ancilotto, M. Modugno, F. Minardi, and C. Fort, Observation of quantum droplets in a heteronuclear bosonic mixture, Phys. Rev. Res. 1, 033155 (2019).Fibich1999 G. Fibich and G. Papanicolaou, Self-focusing in the perturbed and unperturbed nonlinear Schrödinger equation in critical dimension, SIAM J. Appl. Math. 60, 183 (1999).Berge1998 L. Bergé, Wave collapse in physics: principles and applications to light and plasma waves, Phys. Rep. 303, 259 (1998).Kuznetsov2011 E. A. Kuznetsov and F. Dias, Bifurcations of solitons and their stability, Phys. Rep. 507, 43 (2011).Pethick2002 C. Pethick and H. 
Smith,Bose-Einstein Condensation in Dilute Gases (Cambridge University Press, Cambridge; New York, 2002).Townes1 C.-A. and C.-L. Hung, Observation of universal quench dynamics and Townes soliton formation from modulational instability in two-dimensional Bose gases, Phys. Rev. Lett. 125, 250401 (2020).Townes2 B. Bakkali-Hassani, C. Maury, Y.-Q. Zhou, E. Le Cerf, R. Saint-Jalm, P. C. M. Castilho, S. Nascimbene, J. Dalibard, and J. Beugnon, Realization of a Townes Soliton in a Two-Component Planar Bose Gas, Phys. Rev. Lett. 127, 023603 (2021).Baillie2016 D. Baillie, R. M. Wilson, R. N. Bisset, and P. B. Blakie, Self-Bound Dipolar Droplet: A Localized Matter Wave in Free Space, Phys. Rev. A 94, 021602 (2016).Wenzel2018 M. Wenzel, F. Böttcher, J.-N. Schmidt, M. Eisenmann, T. Langen, T. Pfau, and I. Ferrier-Barbut, Anisotropic Superfluid Behavior of a Dipolar Bose-Einstein Condensate, Phys. Rev. Lett. 121 , 030401 (2018).Astra M. A. Garcia-March, B. Julia-Diaz, G. E. Astrakharchik, T. Busch, J. Boronat, and A. Polls, Quantum correlations and spatial localization in one-dimensional ultracold bosonic mixtures, New J. Phys. 16, 103004 (2014).Parisi2019 L. Parisi, G. E. Astrakharchik, and S. Giorgini, Liquid State of One-Dimensional Bose Mixtures: A Quantum Monte Carlo Study, Phys. Rev. Lett. 122, 105302 (2019).VC2020 V. Cikojević, L. V. Markić, M. Pi, M. Barranco, and J. Boronat, Towards a quantum Monte Carlo-based density functional including finite-range effects: Excitation modes of a ^39K quantum droplet, Phys. Rev. A 102, 033335 (2020).Tyluki2020 M. Tylutki, G. E. Astrakharchik, B. A Malomed, D. S. Petrov, Collective excitations of a one-dimensional quantum droplet, Phys. Rev. A 101, 051601(R) (2020).Huhui2020 H. Hu, and X. Liu, Consistent Theory of Self-Bound Quantum Droplets with Bosonic Pairing, Phys. Rev. Lett. 125, 195302 (2020).Baillie2017 D. Baillie, R. M. Wilson, and P. B. Blakie, Collective Excitations of Self-Bound Droplets of a Dipolar Quantum Fluid, Phys. Rev. Lett. 119, 255302 (2017).Yongchang Y. Zhang, F. Maucher, and T. Pohl, Supersolidity around a Critical Point in Dipolar Bose-Einstein Condensates, Phys. Rev. Lett. 123, 015301 (2019).Yongchang2021 Y. Zhang, T. Pohl, and F. Maucher, Phases of supersolids in confined dipolar Bose-Einstein condensates, Phys. Rev. A 104, 013310 (2021).Bttcher2019 F. Böttcher, J.-N. Schmidt, M. Wenzel, J. Hertkorn, M. Guo, T. Langen, and T. Pfau, Transient Supersolid Properties in an Array of Dipolar Quantum Droplets, Phys. Rev. X 9, 011051 (2019).Hertkorn2021 J. Hertkorn J.-N. Schmidt, M. Guo, F. Böttcher, K. S. H. Ng, S. D. Graham, P. Uerlings, H. P. Büchler, T. Langen, M. Zwierlein, and T. Pfau, Supersolidity in Two-Dimensional Trapped Dipolar Droplet Arrays, Phys. Rev. Lett. 127, 155301 (2021).Sanchez-Baena2023 J. Sánchez-Baena, C. Politi, F. Maucher, F. Ferlaino, and T. Pohl, Heating a Dipolar Quantum Fluid into a Solid, Nat. Commun. 14, 1868 (2023).Scheiermann2023 D. Scheiermann, L. A. P. Ardila, T. Bland, R. N. Bisset, and L. Santos, Catalyzation of Supersolidity in Binary Dipolar Condensates, Phys. Rev. A 107, L021302 (2023).Malomed2022c B. A. Malomed, Multidimensional Solitons (American Institute of Physics: Melville, NY, 2022).Michinel M. Quiroga-Teixeiro and H. Michinel, Stable azimuthal stationary state in quintic nonlinear optical media, J. Opt. Soc. Am. B 14, 2004-2009 (1997).Pego R. L. Pego and H. A. Warchall, Spectrally stable encapsulated vortices for nonlinear Schrödinger equations, J. Nonlinear Sci.12, 347-394 (2002).Dumitru D. 
Mihalache, D. Mazilu, L.-C. Crasovan, I. Towers, A. V. Buryak, B. A. Malomed, L. Torner, J. P. Torres, and F. Lederer. Stable spinning optical solitons in three dimensions, Phys. Rev. Lett. 88, 073902 (2002).Malomed2019 B. A. Malomed, Vortex solitons: Old results and new perspectives, Physica D 399, 108 (2019).Li2018a Y. Li, Z. Chen, Z. Luo, C. Huang, H. Tan, W. Pang, and B. A. Malomed, Two-Dimensional Vortex Quantum Droplets, Phys. Rev. A 98 , 063602 (2018).Kartashov2018 Y. V. Kartashov, B. A. Malomed, L. Tarruell, and L. Torner, Three-Dimensional Droplets of Swirling Superfluids, Phys. Rev. A 98, 013612 (2018).Zhang2019 X. Zhang, X. Xu, Y. Zheng, Z. Chen, B. Liu, C. Huang, B. A. Malomed, and Y. Li, Semidiscrete Quantum Droplets and Vortices, Phys. Rev. Lett. 123, 133901 (2019).Cidrim2018 A. Cidrim, F. E. A. dos Santos, E. A. L. Henn, and T. Macrí, Vortices in self-bound dipolar droplets, Phys. Rev. A 98 , 023618 (2018).Li2023 G. Li, X. Jiang, B. Liu, Z. Chen, B. A. Malomed, and Y. Li, Two-Dimensional Anisotropic Vortex Quantum Droplets in Dipolar Bose-Einstein Condensates, Front. Phys. 19, 22202 (2024).Chin2010 C. Chin, R. Grimm, P. Julienne, and E. Tiesinga, Feshbach Resonances in Ultracold Gases, Rev. Mod. Phys. 82, 1225 (2010).Courteille1998 P. Courteille, R. S. Freeland, D. J. Heinzen, F. A. van Abeelen, and B. J. Verhaar, Observation of a Feshbach Resonance in Cold Atom Scattering, Phys. Rev. Lett. 81, 69 (1998).Lima2011 A. R. P. Lima and A. Pelster, Quantum Fluctuations in Dipolar Bose Gases, Phys. Rev. A 84, 041604 (2011).Lima2012 A. R. P. Lima and A. Pelster, Beyond Mean-Field Low-Lying Excitations of Dipolar Bose Gases, Phys. Rev. A 86, 063609 (2012).Wachtler2016 F. Wächtler and L. Santos, Quantum Filaments in Dipolar Bose-Einstein Condensates, Phys. Rev. A 93, 061603 (2016). Lahaye2009 T. Lahaye, C. Menotti, L. Santos, M. Lewenstein, and T. Pfau, The Physics of Dipolar Bosonic Quantum Gases, Rep. Prog. Phys. 72, 126401 (2009).Stuhler2005 J. Stuhler, A. Griesmaier, T. Koch, M. Fattori, T. Pfau, S. Giovanazzi, P. Pedri, and L. Santos, Observation of Dipole-Dipole Interaction in a Degenerate Quantum Gas, Phys. Rev. Lett. 95, 150406 (2005). Chifalo2000 L. M. Chiofalo, S. Succi, and P. M. Tosi, Ground state of trapped interacting Bose-Einstein condensates by an explicit imaginary time algorithm, Phys. Rev. E 62, 7438 (2000).Jianke2008J. Yang and T. I. Lakoba, Accelerated imaginary-time evolution methods for the computation of solitary waves, Stud. Appl. Math. 120, 265 (2008)Bao X. Antoine, W. Bao, and C. Besse, Computational methods for the dynamics of the nonlinear Schrödinger/Gross-Pitaevskii equations, Comp. Phys. Commun. 184, 2621-2633 (2013). Ferrier-Barbut2016 I. Ferrier-Barbut, H. Kadau, M. Schmitt, M. Wenzel, and T. Pfau, Observation of Quantum Droplets in a Strongly Dipolar Bose Gas, Phys. Rev. Lett. 116, 215301 (2016).VK1973 N. G. Vakhitov and A. A. Kolokolov, Stationary solutions of the wave equation in a medium with nonlinearity saturation, Radiophys. Quantum Electron. 16, 783-789 (1973).Kartashov2014 Y. V. Kartashov, B. A. Malomed, Y. Shnir, and L. Torner, Twisted Toroidal Vortex Solitons in Inhomogeneous Media with Repulsive Nonlinearity, Phys. Rev. Lett. 113, 264101 (2014).Gammatwocomponent A. Boudjemâa, Fluctuations and quantum self-bound droplets in a dipolar Bose-Bose mixture, Phys. Rev. A 98 , 033612 (2018).Bisset2021 R. N. Bisset, L. A. P. Ardila, and L. Santos, Quantum Droplets of Dipolar Mixtures, Phys. Rev. Lett. 126, 025301 (2021).Smith2021 J. C. Smith, D. 
Baillie, and P. B. Blakie, Quantum Droplet States of a Binary Magnetic Gas, Phys. Rev. Lett. 126, 025302 (2021).
Nonuniform Bose-Einstein condensate. I. An improvement of the Gross-Pitaevskii method Maksim TomchenkoBogolyubov Institute for Theoretical Physics 14b, Metrolohichna Str., Kyiv 03143, Ukraine===================================================================================================================A nonuniform condensate is usually described by the Gross-Pitaevskii (GP) equation, which is derived with the help of the c-number ansatz Ψ̂(𝐫,t)=Ψ (𝐫,t). Proceeding from a more accurate operator ansatz Ψ̂(𝐫,t)=â_0Ψ (𝐫,t) √(N), we find the equation iħ∂Ψ (𝐫,t)/∂ t=-ħ ^2/2m∂ ^2Ψ (𝐫,t)/∂𝐫^2+( 1-1/N) 2cΨ (𝐫,t)|Ψ(𝐫,t)|^2 (the GP_N equation). It differs from the GP equation by the factor ( 1-1/N), where N is the number of Bose particles. We compare the accuracy of the GP and GP_N equations by analyzing the ground state of a one-dimensional system of point bosons with repulsive interaction (c>0) and zero boundary conditions. Both equations are solved numerically, and the system energy E and the particle density profile ρ (x) are determined for various values of N, the mean particle density ρ̅, and the coupling constant γ =c/ρ̅. The solutions are compared with the exact ones obtained by the Bethe ansatz. The results show that in the weak coupling limit (N^-2≪γ≲ 0.1), the GP and GP_N equations describe the system equally well if N≳ 100. For few-boson systems (N≲ 10) with γ≲ N^-2 the solutions of the GP_N equation are in excellent agreement with the exact ones. That is, the multiplier ( 1-1/N)allows one to describe few-boson systems with high accuracy.This means that it is reasonable to extend the notion of Bose-Einstein condensation to few-particle systems. § INTRODUCTION The simplest analytical method for describing a system of N spinless interacting bosons is based on the solution of the nonlinear Schrödinger equationiħ∂Ψ (𝐫,t)/∂ t=-ħ ^2/2m∂ ^2Ψ (𝐫,t)/∂𝐫^2+Ψ ( 𝐫,t)∫_Vd𝐫^'U(|𝐫-𝐫 ^'|)|Ψ (𝐫^',t)|^2,where Ψ is the wave function of condensate,V is the system volume. This equation was first obtained by E. Gross in 1958 <cit.>. Gross realised that N. Bogoliubov's approach <cit.> could be applied to describe a nonuniform condensate if we set Ψ̂(𝐫,t)=Ψ (𝐫,t). In this case, the Heisenberg equation becomes Eq. (<ref>) for Ψ (𝐫,t). A few years later, Eq. (<ref>) was written by L. Pitaevskii <cit.> and E. Gross <cit.> for the point potential U(|𝐫_j- 𝐫_l|)=2cδ (𝐫_j-𝐫_l),iħ∂Ψ (𝐫,t)/∂ t=-ħ ^2/2m∂ ^2Ψ (𝐫,t)/∂𝐫^2+2cΨ ( 𝐫,t)|Ψ (𝐫,t)|^2.Since real-world atoms have a non-zero size, it is more accurate to describe them by Eq. (<ref>). However, if the characteristic dimensions of inhomogeneities in the system are much larger than the atomic size, the Gross-Pitaevskii (GP) equation (<ref>) may be used instead of Eq. (<ref> ). The GP equation is simpler than the Gross equation (<ref>) and is basic for describing the dilute Bose gas in a trap. A huge number of experimental and theoretical works published during the last 25 years were devoted to the study of gases in the trap <cit.>. The Nobel Prize was awarded for the experimental production of Bose condensate <cit.>.It is generally accepted that Eqs. (<ref>) and (<ref>) give a semiclassical description of the system and are only applicable to systems with a large number of particles, N. Several arguments have been made in favour of the latter <cit.>. The main one is that the Bogoliubov method works namely at N≫ 1.It is often asserted that Eqs. 
(<ref>) and (<ref>) can also be derived from the condensate approximation for the total N-particle wave function of the systemΨ _N(𝐫_1,… ,𝐫_N,t)=∏_j=1^Nψ (𝐫_j,t).However, this is not quite so.By integrating the N-particle Schrödinger equation <cit.> (see also section 2 below) and by using a variational approach <cit.> it was shown that ansatz (<ref>) gives rise to the equationiħ∂Ψ (𝐫,t)/∂ t=-ħ ^2/2m∂ ^2Ψ (𝐫,t)/∂𝐫^2 +( 1-1/N) Ψ (𝐫,t)∫_Vd𝐫 ^'U(|𝐫-𝐫^'|)|Ψ (𝐫^',t)|^2.For the potential U(|𝐫_j-𝐫_l|)=2cδ (𝐫_j-𝐫 _l) it transforms intoiħ∂Ψ (𝐫,t)/∂ t=-ħ ^2/2m∂ ^2Ψ (𝐫,t)/∂𝐫^2+( 1-1/N) 2cΨ (𝐫,t)|Ψ ( 𝐫,t)|^2.It will be seen in section 2 below that if the more accurate operator ansatzΨ̂(𝐫,t)=â_0Ψ (𝐫,t)/√(N)is used instead of the c-number ansatz Ψ̂(𝐫,t)=Ψ ( 𝐫,t), we also get Eqs. (<ref>) and (<ref>).Equations (<ref>) and (<ref>) will be called the GP and GP_N equations, respectively. The GP_N equation differs from the GP one by the factor ( 1-1 /N). This equation was obtained usingansätze (<ref>) and (<ref>), which are valid for any N≥ 2 (in contrast to the ansatz Ψ̂(𝐫,t)≈Ψ (𝐫,t), which is only applicable for N≫ 1). This indicates that the GP_N equation must be able to describe a Bose system even for small N.The accuracy of the GP_N equation has already been investigated for a Bose gas under spherically symmetric harmonic confinement by comparing solutions of the GP_N equation with Monte Carlo numerical solutions for N=2–50 <cit.> and with analytical solutions of the linear Schrödinger equation for N=2 <cit.>. Such an analysis showed that for 0≤ (N-1)a/a_h0 0.1 (where a is the s-wave scattering length, and a_h0=(ħ/mω_h0)^1/2), the GP_N equation describes the system with very good accuracy. However, the comparison of the accuracy of the GP_N and GP equations has not yet been carried out.In this and the next paper <cit.> we study in detail a one-dimensional (1D) Bose gas in the absence of a trap and compare the solutions ofstationary GP and GP_N equations for different N≥ 2 with the exact Bethe-ansatz solutions.In this paper, an equation for a nonuniform condensate is derived (section 2) and its solutions for the ground state of the condensate are analyzed (sections 3–5). In the next article <cit.>, the excitedstates of the condensate are considered.Note that we became aware of papers <cit.> after this article was submitted to arXiv.Moreover, after this article and <cit.> had already been written, we learned that analogous solutions of the GP equation have been found analytically by L. Carr, C. Clark, and W. Reinhardt <cit.>. The solutions obtained in <cit.> are expressed in terms of the Jacobi elliptic functions. In this paper, we obtain solutions using a different (numerical) method.Our solutions are consistent with those of work <cit.>, although a detailed one-to-one comparison was not performed.Note also that in the 1D case the term quasi-condensate is usually used instead of condensate <cit.>. For brevity, we will omit the prefix “quasi”and write “condensate”, even for small N.§ DERIVATION OF EQUATION FOR NONUNIFORM CONDENSATE§.§ Wave-function approach Consider a system of N interacting spinless bosons (N≥ 2). The Schr ödinger equation readsiħ∂Ψ _N/∂ t=-ħ ^2/2m∑_j=1^N∂ ^2/∂𝐫_j^2Ψ _N+1/2∑^N_j,p=1_(j≠ p)U(|𝐫 _p-𝐫_j|)Ψ _N.Let the wave function Ψ _N(𝐫_1,… ,𝐫_N,t) of the system have the condensate form (<ref>) with the normalization |ψ (𝐫,t)|^2=1. 
Substituting (<ref>) into (<ref>), we obtainiħ∑_j=1^N∏^N_l=1_(l≠ j)ψ (𝐫_l,t)·∂ψ (𝐫_j,t)/∂ t== -ħ ^2/2m∑_j=1^N∏^N_l=1_(l≠ j)ψ (𝐫_l,t)·∂ ^2ψ ( 𝐫_j,t)/∂𝐫_j^2+1/2∑^N_j,p=1_(p≠ j)U(|𝐫_p-𝐫 _j|)∏_l=1^Nψ (𝐫_l,t).Multiplying this equation by ∏_l=2^Nψ ^∗(𝐫_l,t) and integrating the result over 𝐫_2,… ,𝐫_N, we getiħ∂ψ (𝐫_1,t)/∂ t-a(t)ψ ( 𝐫_1,t)=-ħ ^2/2m∂ ^2ψ (𝐫 _1,t)/∂𝐫_1^2 +(N-1)ψ (𝐫_1,t)∫_Vd𝐫_2U(|𝐫 _1-𝐫_2|)|ψ (𝐫_2,t)|^2,wherea(t)= (N-1)∫_Vd𝐫_2ψ ^∗(𝐫 _2,t){ -ħ ^2/2m∂ ^2ψ (𝐫 _2,t)/∂𝐫_2^2-iħ∂ψ (𝐫 _2,t)/∂ t}+ (N-1)(N-2)/2∫_Vd𝐫_2d𝐫_3U(| 𝐫_2-𝐫_3|)|ψ (𝐫_2,t)|^2|ψ (𝐫 _3,t)|^2.The derivatives in the right-hand side of (<ref>) can be expressed using Eq. (<ref>), from whencea(t)=-(N-1)/2∫_Vd𝐫_2d𝐫_3U(| 𝐫_2-𝐫_3|)|ψ (𝐫_2,t)|^2|ψ (𝐫 _3,t)|^2.Let us set ψ(𝐫_1,t)=e^iκ(t)/ħψ̃(𝐫_1,t), where κ(t)=-∫_-∞^tdτ a(τ) <cit.>. Then (<ref>) is reduced toiħ∂ψ̃(𝐫_1,t)/∂ t = -ħ^2/2m∂^2ψ̃(𝐫_1,t)/∂𝐫_1^2 + (N-1)ψ̃(𝐫_1,t) ∫_V d 𝐫_2U(|𝐫_1-𝐫_2|)|ψ̃(𝐫_2,t)|^2.Making the substitution ψ (𝐫,t)=Ψ (𝐫,t)/√(N), we obtain the final equationiħ∂Ψ (𝐫,t)/∂ t=-ħ ^2/2m∂ ^2Ψ (𝐫,t)/∂𝐫^2+( 1-1/N) Ψ (𝐫,t)∫_Vd𝐫 ^'U(|𝐫-𝐫^'|)|Ψ (𝐫^',t)|^2,with the normalization ∫_Vd𝐫|Ψ (𝐫,t)|^2=N. For the point potential U(|𝐫_j-𝐫_l|)=2cδ (𝐫 _j-𝐫_l), this equation readsiħ∂Ψ (𝐫,t)/∂ t=-ħ ^2/2m∂ ^2Ψ (𝐫,t)/∂𝐫^2+( 1-1/N) 2cΨ (𝐫,t)|Ψ ( 𝐫,t)|^2.It has come to our attention that a similar analysis was previously carried out by B. Esry <cit.>.It is worth noting that the condensate ansatz (<ref>) does not satisfy the Schrödinger equation (<ref>) for any non-zero potential, even if Eqs. (<ref>) and (<ref>) are satisfied. Therefore, ansatz (<ref> ) always gives only an approximate description of the system. For the description to be exact, two-particle and all higher-order correlations in Ψ _N(𝐫_1,… ,𝐫_N,t) must be taken into account  <cit.>. §.§ Operator method T. Wu in work <cit.> proposed a method for describing a system of point bosons, which is based on the ansatz Ψ̂(𝐫,t)=â _0ψ (𝐫,t)+ϑ̂(𝐫,t)with ϑ̂(𝐫,t)≪â_0ψ (𝐫,t). The analysis <cit.> resulted in Eqs. (<ref>), (<ref>) with U(|𝐫_j-𝐫_l|)→ 2cδ (𝐫 _j-𝐫_l) andN-1 → N. We are unable to reproduce Wu's analysis, so we will make an independent calculation in the simpler case Ψ̂(𝐫 ,t)=â_0ψ (𝐫,t) with the normalization ∫_Vd 𝐫|ψ (𝐫,t)|^2=1. Thus, we assume that all N atoms at any time are in the condensate ψ (𝐫,t), but we do not change to the c-number. Substituting Ψ̂(𝐫,t)=â _0ψ (𝐫,t) into the Heisenberg equationiħ∂Ψ̂(𝐫,t)/∂ t=-ħ ^2/2m∂ ^2Ψ̂(𝐫,t)/∂𝐫^2 +∫_Vd𝐫^'U(|𝐫-𝐫^'|) Ψ̂^+(𝐫^',t)Ψ̂(𝐫^',t)·Ψ̂(𝐫,t),we obtainĜ=0,whereĜ = -â_0iħ∂ψ (𝐫,t)/∂ t -ψ (𝐫,t)iħ∂â_0/∂ t-â _0ħ ^2/2m∂ ^2ψ (𝐫,t)/∂𝐫^2+ +â_0^+â_0^2ψ (𝐫,t)∫_Vd𝐫 ^'U(|𝐫-𝐫^'|)|ψ (𝐫^',t)|^2. Consider an N-particle state |N_0,N_1,N_2,… ,N_∞⟩, where N_0=N particles are in the one-particle state ψ _0(𝐫,t)≡ψ (𝐫,t), whereas the other one-particle states ψ _j from the expansion Ψ̂(𝐫 ,t)=∑_j=0,1,… ,∞â_j(t)ψ _j(𝐫,t) are non-occupied, N_j≥ 1=0. Denote |N,0,… ,0⟩≡ |N⟩. The expressionĜ|N⟩ =0sets the equation for the condensate ψ (𝐫,t) for the system in the |N⟩ state. To find Ĝ|N⟩, we have to calculate ∂â_0/∂ t. If Ψ̂(𝐫,t)=â _0ψ (𝐫,t), the system Hamiltonian takes the formĤ = -ħ ^2/2m∫ d𝐫Ψ̂^+(𝐫,t) ∂ ^2Ψ̂(𝐫,t)/∂𝐫^2+1 /2∫ d𝐫d𝐫^'U(|𝐫-𝐫^'|) Ψ̂^+(𝐫,t)Ψ̂^+(𝐫^',t)Ψ̂( 𝐫^',t)Ψ̂(𝐫,t)= E_kâ_0^+(t)â_0(t)+E_p[â_0^+(t)]^2â _0^2(t)=(E_k-E_p)N̂_0(t)+E_pN̂_0^2(t),where N̂_0(t)=â_0^+(t)â_0(t),E_k=-ħ ^2/2m∫ d𝐫ψ ^∗(𝐫,t)∂ ^2ψ (𝐫,t)/∂𝐫^2, E_p=1/2∫ d𝐫d𝐫^'U(|𝐫-𝐫 ^'|)|ψ (𝐫,t)|^2|ψ (𝐫^',t)|^2.Such a Hamiltonian does not contain terms that could transfer atoms from the condensate to other states ψ _j≥ 1(𝐫,t). 
Therefore it is natural to expect that ∂â_0/∂ t=0.Let us show that really ∂â_0/∂ t=0. The derivative ∂â_0/∂ t cannot be found from the Heisenberg equation iħ∂â_0/∂ t=[ â_0,Ĥ]. Indeed, from the equations iħ∂â_j/∂ t=[â_j,Ĥ] (j=0,1,… ,∞), Ψ̂(𝐫,t)=∑_j=0,1,… ,∞â_j(t)ψ _j( 𝐫,t), and iħ∂Ψ̂/∂ t=[Ψ̂,Ĥ], it follows that ∑_j=0,1,… ,∞â_j(t) ∂ψ _j(𝐫,t)/∂ t=0, i.e. ∑_j=0,1,… ,∞â_j^+(t)∂ψ _j^∗(𝐫,t)/∂ t=0. The last two equations must hold for any state |N_0,N_1,… ,N_∞⟩. This means that ∂ψ _j^∗(𝐫,t)/∂ t=0 and ∂ψ _j(𝐫,t)/∂ t=0 for all j=0,1,… ,∞, which contradicts our scheme.Similarly to the analysis in <cit.>, let us determine ∂â_0/∂ t from the time evolution of the wave function in the Schrödinger representation: Ψ (𝐫,t)=e^-i Ĥt/ħΨ (𝐫) <cit.>. ThenΨ (𝐫,t+δ t)=e^-iĤδ t/ħΨ (𝐫,t).In the second quantization formalism, the state |N,0,0,… ,0⟩ is <cit.>Ψ (t)=(N!)^-1/2[â_0^+(t)]^N|0⟩ ,where |0⟩≡ |0,0,0,… ,0⟩ is the vacuum state:â_j|0⟩ =0, j=0,1,2,… ,∞ .ThenΨ (t+δ t)≡ (N!)^-1/2[â_0^+(t+δ t)]^N|0⟩ =e^-iĤδ t/ħΨ (t)=(N!)^-1/2e^-iĤδ t/ħ[â_0^+(t)]^N|0⟩= (N!)^-1/2( 1-iĤδ t/ħ +1/2!(-iĤδ t/ħ )^2+…) [â_0^+(t)]^N|0⟩ .Note that the energy and the total number of particles are integrals of motion; therefore, Ĥ and N̂ do not depend on time. Using the relationsâ_0(t)â_0^+(t)=â_0^+(t)â_0(t)+1, N̂_0(t)[â_0^+(t)]^N=[â_0^+(t)]^N(N̂ _0(t)+N), N̂_0(t)^2[â_0^+(t)]^N=[â_0^+(t)]^N(N̂ _0(t)+N)^2,and formula (<ref>), we obtain(-Ĥδ t/ħ )^p[â_0^+(t)]^N=[â _0^+(t)]^Nf̂^p, f̂=(-δ t/ħ )[ (E_k-E_p)(N̂_0(t)+N)+E_p(N̂_0(t)+N)^2] , (N!)^-1/2[â_0^+(t+δ t)]^N|0⟩ =(N!)^-1/2[â _0^+(t)]^Ne^if̂|0⟩= (N!)^-1/2[â_0^+(t)]^N( 1+if̂+1/2!(if̂ )^2+…) |0⟩ .Sinceif̂|0⟩ =(-iδ t/ħ )[ (E_k-E_p)N+E_pN^2] |0⟩≡ igδ t|0⟩ ,we have that for an arbitrary δ t≥ 0,(N!)^-1/2[â_0^+(t+δ t)]^N|0⟩ =e^igδ t(N!)^-1/2[â_0^+(t)]^N|0⟩ .To find the correct normalization, we must make the substitution e^igδ t→ 1. Eventually,(N!)^-1/2[â_0^+(t+δ t)]^N|0⟩ =(N!)^-1/2[â _0^+(t)]^N|0⟩ , â_0^+(t+δ t)=â_0^+(t),∂â _0^+/∂ t=0⇒ ∂â_0/∂ t=0.The relations (<ref>), â_0|N⟩ =√(N) |N-1⟩, â_0^+â_0|N-1⟩ =(N-1)|N-1⟩, and ∂â_0/∂ t=0 yieldĜ|N⟩ /√(N)=[ -iħ∂ψ (𝐫 ,t)/∂ t-ħ ^2/2m∂ ^2ψ (𝐫,t) /∂𝐫^2. + . +(N-1)ψ (𝐫,t)∫_Vd𝐫^'U(|𝐫-𝐫^'|)|ψ (𝐫^',t)|^2] |N-1⟩ .For any 𝐫 and t, the equality Ĝ|N⟩ =0 must hold, which gives the desired equation for the condensate,iħ∂ψ (𝐫,t)/∂ t=-ħ ^2/2m∂ ^2ψ (𝐫,t)/∂𝐫^2+(N-1)ψ ( 𝐫,t)∫_Vd𝐫^'U(|𝐫-𝐫 ^'|)|ψ (𝐫^',t)|^2.This equation coincides with (<ref>). So, we again arrive at equations (<ref>) and (<ref>).In work <cit.>,different results were obtained from Eqs. (<ref>) and (<ref>); namely, ∂â_0^+/∂ t|_ϑ̂→ 0≠ 0, and Eqs. (<ref>), (<ref>) with the replacements U(|𝐫_j-𝐫_l|)→ 2cδ (𝐫 _j-𝐫_l) andN-1 → N (i.e. without the multiplier 1-1/N). We are not able to grasp how the equation for the condensate was derived in <cit.>. The different result for ∂â_0^+/∂ t|_ϑ̂→ 0 may have been obtainedfor one or more of the following reasons: (i) It was assumed in <cit.> that Ψ̂(𝐫,t)=â _0ψ (𝐫,t)+ϑ̂(𝐫,t) instead of Ψ̂(𝐫,t) = â_0ψ(𝐫,t). (ii) Vacuum was defined in work <cit.> differently: Ψ̂( 𝐫,t)|0⟩ =0. In this case, formally, ϑ̂( 𝐫,t)|0⟩ =-â_0ψ (𝐫,t)|0⟩≠ 0, and this equality seems to be applied in <cit.>. However, if ϑ̂(𝐫,t)=-â_0ψ (𝐫,t), then the smallness condition ϑ̂(𝐫 ,t)≪â_0ψ (𝐫,t), which was used in <cit.> to develop the perturbation theory, is violated. Moreover, in the second quantization formalism <cit.>, vacuum is the state |0,0,… ,0⟩ corresponding to the occupation numbers N_j=0, j=0,1,… ,∞, from which follows condition (<ref>). 
It is not difficult to show that the equality Ψ̂(𝐫,t)|0⟩ =0 is possible only if condition (<ref>) holds. (iii) In work <cit.>, the expression for Ψ (t+δ t) was not written in the form that allows one to extract the terms which give a zero contribution when acting on the vacuum.In any case, Wu proposed the right idea that the equation for a nonuniform condensate can be obtained without going to the c-number. Moreover, Wu derived an equation which is equivalent to the GP equation, simultaneously with Pitaevskii <cit.> and Gross <cit.> and using the more precise ansatz Ψ̂(𝐫,t) = â_0ψ(𝐫,t) + ϑ̂(𝐫,t). However, we are not sure that the analysis itself in <cit.> is entirely accurate.Note that the replacement of an operator by a c-number (ψ̂(x,t)= â_0Ψ (𝐫,t)/√(N)→Ψ (x,t)) creates the uncertainty ± 1 for the number of particles and thus violates the law of conservation for this parameter. To solve this difficulty, N. Bogoliubov proposed the method of quasi-averages <cit.>; this method is actually reduced to the mechanism of spontaneous symmetry breaking (SSB), which removes statistical degeneracy. However, this is a purely formal technique. In nature, the SSB occurs differently, at a phase transition, which is usually initiated by the formation of new phase nuclei.In the case of our system, the application of the operator ansatz Ψ̂ (𝐫,t)=â_0Ψ (𝐫,t)/√(N) automatically eliminates the difficulty with the conservation law for N. The ansatz Ψ̂(𝐫,t)=â_0Ψ (𝐫,t)/√(N) means a condensate without SSB because the function Ψ (𝐫,t) can be multiplied by an arbitrary factor e^iα. Bogoliubov's model can also be constructed without replacing â_0 by the c-number <cit.>, i.e. without SSB. SSB is an important property <cit.>: a violation of the global U(1) symmetry would mean that a phonon in He II is the Goldstone boson <cit.>. However, the operator ansatz provides a more accurate description of the system than the c-number ansatz does (in particular, the operator ansatz leads to a better agreement with exact solutions, see the results below and in <cit.>) and does not lead to SSB. This means that contrary to widespread opinion <cit.>, the U(1) symmetry is not violated, and a phonon in He II is not a Goldstone boson but is similar to classic sound: the phonon exists simply because of the interaction of atoms. This is also evidenced by the closeness of the profile of the ^4He structure factor S(k,ω ) for T=T_λ-δ to the profile for T=T_λ+δ, where 0<δ≪ T_λ <cit.> (recall that liquid helium at T=T_λ+δ is He I; in this case, the condensate and SSB are absent). Thus, since the c-number approach works well at N≫ 1, the phonon in a macroscopic superfluid Bose system is very similar to the Goldstone boson. However, from the viewpoint of the more accurate operator approach, such a phonon is still not a Goldstone boson. Thus, we obtained equations (<ref>) and (<ref>) for the condensate via two methods. The use of the operator Ψ̂(𝐫,t)=â _0Ψ (𝐫,t)/√(N) instead of the c-number Ψ̂( 𝐫,t)=ψ (𝐫,t) results in Eq. (<ref>), which contains the additional factor ( 1-1/N) in comparison with the ordinary Gross-Pitaevskii equation. This multiplier also appears in the approach based on the ansatz Ψ _N(𝐫_1,… ,𝐫 _N,t)=∏_j=1^Nψ (𝐫_j,t). The approximation Ψ̂(𝐫,t)=ψ (𝐫,t) requires that N≫ 1, but the ansätze Ψ̂(𝐫,t)=â_0Ψ (𝐫,t)/√(N) and Ψ _N(𝐫_1,… ,𝐫_N,t)=∏_j=1^Nψ ( 𝐫_j,t) are valid for any N≥ 2. 
Therefore, the use of such ansätze instead of the c-number Ψ̂(𝐫,t)=ψ ( 𝐫,t) makes it possible to extend the domain of applicability of the Gross-Pitaevskii equation to small N.§ GROUND STATE OF CONDENSATE: EQUATIONS AND NUMERICAL METHOD In order to be able to compare the solutions with exact ones, let us consider a 1D system. We will use zero BCs because in this case the particle density ρ (x) depends on the coordinate (under periodic BCs, it is constant: ρ (x)=const). So, let us consider N spinless Bose particles, which occupy the segment [0,L], under zero BCs (Ψ (0,t)=Ψ (L,t)=0). We assume that the interaction is repulsive, and the wave function of the condensate considerably changes on scales much larger than the atomic size. Therefore, we use the GP equation (<ref>) and the GP_N equation (<ref>) instead of Eqs. (<ref>) and (<ref>). We seek stationary solutionsΨ (x,t)=e^ϵ t/iħΦ (x).Then the GP and GP_N equations (<ref>) and (<ref>) take the formϵΦ (x)=-ħ ^2/2m∂ ^2Φ/∂ x^2+2c|Φ |^2Φ , ϵΦ (x)=-ħ ^2/2m∂ ^2Φ/∂ x^2+( 1-1/N) 2c|Φ |^2Φwith the boundary conditionsΦ (x=0)=Φ (x=L)=0. Let the condensate Ψ (x,t) contain N_c atoms. Then∫_0^Ldx|Φ (x)|^2=N_c.Since the condensate approximations Ψ̂(x,t)=Ψ (x,t), (<ref>), and (<ref>) mean that N_c=N, we put N_c=N.To satisfy BCs (<ref>), we will seek each solution of the GP (GP_N) equation as a series expansion in the complete orthonormal set of sines,Φ (x)=∑_j=1,2,… ,∞b_j√(2/L)·sin (k_jx), k_j=π j/L.One cansee that there are “elementary j_0 -series” Φ _j_0(x)=∑_j=j_0,3j_0,5j_0,…b_j√(2/L)·sin (k_jx),for which j_0 can be equal to 1,2,3,… ,∞. When j_0-series (<ref>) is substituted into Eq. (<ref>) or (<ref>), then both right- and left-hand sides will contain only terms with the structure of j_0-series (<ref>) (in so doing, every product of three sine functions should be presented as the sum of sines).In the absence of interaction (c=0), Eqs. (<ref>) and (<ref>) with BCs (<ref>) have the solutions Φ (x)=√(2N/L)·sin (k_jx) , j=1,2,3,…, which form a complete set of functions. If c≠ 0, these solutions transform into series (<ref>) with j_0=1,2,3,… and b_j≠ j_0≠ 0. In this paper and <cit.>, we analyze only solutions in the form of j_0-series (<ref>). For each value of j_0 we have found one and only one solution Φ (x). Below we will see that the j_0-series corresponds to the particle density profile ρ (x) with j_0 domains. According to the results of work <cit.>, such solutions include all solutions of the GP equation with zero BCs. The ground state of the condensate corresponds to a single-domain solution (j_0=1).Let us substitute expansion (<ref>) into Eq. (<ref>) and express the product of three sines as the sum of sines. Then we obtain the equation∑_j( ϵ -ħ ^2k_j^2/2m) b_jsin (k_jx)=( 1-1/N) c/L∑_j_1j_2j_3b_j_1^∗b_j_2b_j_3[ sin (k_j_3-j_1+j_2x)+. +. sin (k_j_3+j_1-j_2x)-sin (k_j_3-j_1-j_2x)-sin (k_j_3+j_1+j_2x)] ,where j,j_1,j_2,j_3 run over the values j_0,3j_0,5j_0,… ,∞. Denote by j each index of the form j_3± j_1± j_2 on the right-hand side of Eq. (<ref>) and pass from summation over j_3 to summation over j=± j_0,± 3j_0,± 5j_0,…. The descriptions in terms of the subscripts j_3 and j are equivalent. Further, for all j<0 we take into account the property sin (k_j)=-sin (k_-j), make the substitution j=-j̃, and omit the tilde. As a result, Eq. (<ref>) takes the form∑_j( ϵ -ħ ^2k_j^2/2m) b_jsin (k_jx)=( 1-1/N) c/L∑_j_1,j_2,jb_j_1^∗b_j_2sin (k_jx)[ θ (j+j_1-j_2)b_j+j_1-j_2-. -θ (-j+j_1-j_2)b_-j+j_1-j_2+θ (j-j_1+j_2)b_j-j_1+j_2-θ (-j-j_1+j_2)b_-j-j_1+j_2--. 
b_j+j_1+j_2+θ (-j+j_1+j_2)b_-j+j_1+j_2-θ (j-j_1-j_2)b_j-j_1-j_2] ,where j,j_1,j_2=j_0,3j_0,5j_0,… ,∞, and θ (p) is the discrete Heaviside function: θ (p)=1 for p≥ 0, and θ (p)=0 for p<0.Then let us collect the coefficients of the independent functions sin (k_jx) and denote b_j=√(N)f_j, N/L=ρ̅, γ =2mc/ħ ^2ρ̅, and ϵ =2ρ̅( 1-1/N) c·ϵ̃. As a result, we arrive at the following nonlinear system of equations for the unknown ϵ̃ and the coefficients f_j:( π ^2j^2/(1-N^-1)γ N^2-2ϵ̃) f_j+∑_j_1j_2f_j_1^∗f_j_2[ θ (j+j_1-j_2)f_j+j_1-j_2-.- θ (-j+j_1-j_2)f_-j+j_1-j_2+θ (j-j_1+j_2)f_j-j_1+j_2-θ (-j-j_1+j_2)f_-j-j_1+j_2- - . f_j+j_1+j_2+θ (-j+j_1+j_2)f_-j+j_1+j_2-θ (j-j_1-j_2)f_j-j_1-j_2] =0,where j,j_1,j_2 run over the values j_0,3j_0,5j_0,… ,∞. System (<ref>) has to be supplemented by the normalization condition following from (<ref>),∑_j=j_0,3j_0,5j_0,… ,∞|f_j|^2=1. To solve Eqs. (<ref>) and (<ref>), it is convenient to set j=j_0(2l-1), f_j_0(2l-1)≡ g_l and pass to the enumeration via the index l. Since j=j_0,3j_0,5j_0,… ,∞, we have l=1,2,3,… ,∞. Since all solutions Φ (x) of Eqs. (<ref>) and (<ref>) can be written in the real-valued form (see Appendix), we set f_j^∗=f_j and g_l^∗=g_l. As a result, Eqs. (<ref>) and (<ref>) take the form( π ^2j_0^2(2l-1)^2/(1-1/N)γ N^2-2ϵ̃) g_l+∑_l_1,l_2=1,2,… ,∞g_l_1g_l_2[ θ (l+l_1-l_2-1)g_l+l_1-l_2-. - θ (-l+l_1-l_2)g_-l+l_1-l_2+1+θ (l-l_1+l_2-1)g_l-l_1+l_2-θ (-l-l_1+l_2)g_-l-l_1+l_2+1-- . g_l+l_1+l_2-1+θ (-l+l_1+l_2-1)g_-l+l_1+l_2-θ (l-l_1-l_2)g_l-l_1-l_2+1] =0, l=1,2,… ,∞ , ∑_l=1,2,… ,∞g_l^2=1.The system of equations (<ref>) was obtained for the GP_N equation ( <ref>). This system also corresponds to the GP equation (<ref>) if we make the substitution ( 1-1/N) → 1 in Eq. (<ref> ) and in ϵ =2ρ̅( 1-1/N) c·ϵ̃.The accuracy of the GP and GP_N approaches will be verified by comparing the calculated system energy with the exact energy found by the Bethe ansatz. From the second quantization approach and the approximation Ψ̂(x,t)=Ψ (x,t)=e^ϵ t/iħΦ (x), it follows <cit.> that each GP solution Φ (x) corresponds to the system energyE_GP=∫_0^Ldx{ -ħ ^2/2mΦ ^∗(x) ∂ ^2/∂ x^2Φ (x)+qc|Φ (x)|^4}with q=1. Formula (<ref>) describes the stationary state of condensate for N≫ 1. From the exact quantum mechanical formulaE=∫_0^Ldx_1… dx_NΨ _N^∗(x_1,… ,x_N)[ -ħ ^2/2m∑_j=1^N∂ ^2/∂ x_j^2+∑_j<lU(|x_j-x_l|)] Ψ _N(x_1,… ,x_N)and the condensate ansatz (<ref>) with ψ (x,t)=e^ϵ t/iħΦ (x)/√(N), we obtain formula (<ref>) with q=1-1/N. In the operator approach with Ψ̂(𝐫,t)=â_0Ψ ( 𝐫,t)/√(N), the formulae E=⟨ 0,… ,0,N|Ĥ |N,0,… ,0⟩, (<ref>)–(<ref>), and ψ (𝐫 ,t)=Ψ (𝐫,t)/√(N)=e^ϵ t/iħΦ (x)/√(N) again bring about (<ref>) with q=1-1/N. That is, formula (<ref>) with q=1-1/N gives the energy of the stationary condensate state Ψ̂(x,t)=â_0e^ϵ t/iħΦ (x)/√(N) (or (<ref>)) for any N≥ 2. This is the energy obtained in the GP_N approach.Substituting the function Φ (x) (<ref>) with b_j=√(N) f_j=√(N)f_j_0(2l-1)=√(N)g_l into Eq. (<ref>), after some algebra we getE_GP = ħ ^2ρ̅^2/2mπ ^2j_0^2/N∑_l=1^l_m-1(2l-1)^2g_l^2+qc ρ̅ N/2∑_l_1l_2l_3=1^l_m-1g_l_1g_l_2g_l_3[ 3θ (l_1+l_2-l_3-1)g_l_1+l_2-l_3-.- . 3θ (l_1-l_2-l_3)g_l_1-l_2-l_3+1-θ (l_1+l_2+l_3-2)g_l_1+l_2+l_3-1] .Here q=1 for the GP approach, and q=1-1/N for the GP_N approach.In the exact approach, the wave functions of a 1D system of point bosons are given by the Bethe ansatz <cit.>, see also reviews <cit.>. 
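Before writing out this exact approach, it is worth indicating how the truncated system (<ref>), (<ref>) for the coefficients g_l can be handled in practice. The fragment below is only an illustrative Python sketch of the procedure used in this work (truncation at l_m, a Newton-type solution started, for j_0=1, from the weak-coupling seed g_l=2√(2)/[π (2l-1)], ϵ̃=1); the call to scipy's fsolve in place of a hand-coded Newton iteration and the particular parameter values are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import fsolve

def residuals(u, N, gamma, lm, j0=1, gpn=True):
    """Residuals of the truncated system for (g_1, ..., g_{lm-1}, tilde-epsilon).
    gpn=True keeps the factor (1 - 1/N); gpn=False gives the ordinary GP case."""
    g, eps = u[:-1], u[-1]
    fac = (1.0 - 1.0 / N) if gpn else 1.0

    def G(i):                        # g_i, with g_i = 0 beyond the truncation
        return g[i - 1] if 1 <= i <= lm - 1 else 0.0

    def th(p):                       # discrete Heaviside function
        return 1.0 if p >= 0 else 0.0

    res = np.empty(lm)
    for l in range(1, lm):
        s = 0.0
        for l1 in range(1, lm):
            for l2 in range(1, lm):
                b = (th(l + l1 - l2 - 1) * G(l + l1 - l2)
                     - th(-l + l1 - l2) * G(-l + l1 - l2 + 1)
                     + th(l - l1 + l2 - 1) * G(l - l1 + l2)
                     - th(-l - l1 + l2) * G(-l - l1 + l2 + 1)
                     - G(l + l1 + l2 - 1)
                     + th(-l + l1 + l2 - 1) * G(-l + l1 + l2)
                     - th(l - l1 - l2) * G(l - l1 - l2 + 1))
                s += g[l1 - 1] * g[l2 - 1] * b
        res[l - 1] = (np.pi**2 * j0**2 * (2 * l - 1)**2 / (fac * gamma * N**2)
                      - 2.0 * eps) * g[l - 1] + s
    res[lm - 1] = np.sum(g**2) - 1.0             # normalization condition
    return res

if __name__ == "__main__":
    N, gamma, lm = 100, 0.01, 20                 # illustrative parameters
    l = np.arange(1, lm)
    seed = np.append(2.0 * np.sqrt(2.0) / (np.pi * (2 * l - 1)), 1.0)
    sol = fsolve(residuals, seed, args=(N, gamma, lm))
    g, eps = sol[:-1], sol[-1]
    print("tilde-epsilon =", eps, "  g_1 =", g[0])
```

The coefficients g_l obtained in this way determine Φ (x) through expansion (<ref>) and the energy through formula (<ref>).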
Under periodic BCs, the wave function of the 1D system of N point spinless bosons for the region x_1≤ x_2≤…≤ x_N is given by the Bethe ansatz <cit.>ψ _{k}(x_1,… ,x_N)=∑_Pa(P)e^i∑_l=1^Nk_P_lx_l,where k_P_l is selected from the set (k_1,… ,k_N), and P means all possible permutations of k_l. Under zero BCs, the wave function of the system is a superposition of a set of counter-propagating waves <cit.> ,Ψ _{|k|}(x_1,… ,x_N)=∑_ε _1,… ,ε _N=± 1C(ε _1,… ,ε _N)ψ _{k}(x_1,… ,x_N),where ψ _{k} is defined by formula (<ref>) with k_j=ε _j|k_j|. The formulae for C(ε _1,… ,ε _N) and a(P) are written out in <cit.>. The energy of the system of point bosonsE_Bethe=k_1^2+k_2^2+… +k_N^2. Under zero BCs, the numbers |k_j| satisfy the system of Gaudin's equations  <cit.>,L|k_p|=π n_p+∑_j=1^N( arctanc/ |k_p|-|k_j|+arctanc/|k_p|+|k_j|) |_j≠ p, p=1,… ,N,where quantum numbers n_p are integers, and n_p≥ 1. The ground state corresponds to n_p=1, p=1,2,… ,N (or n_p≤ N=1 for short). The system of equations (<ref>) has a unique real-valued solution {|k_p|}≡ (|k_1|,|k_2|,… ,|k_N|) for each set {n_p} <cit.>. The positivity of all |k_p| was not proven in <cit.>, but it can be corroborated by the direct numerical solution of system (<ref>).Below we find the set of numbers |k_j| by numerically solving, using the Newton method, the system of equations (<ref>) with n_p≤ N=1 and various N, L,γ. As a result, we obtain the exact energy E_Bethe (<ref>), which makes it possible to compare it with E_GP and E_GP_N (<ref>) (in so doing, we must set ħ =2m=1 in (<ref>) because formulae (<ref>)–(<ref>) were obtained just for this normalization). § GROUND STATE SOLUTIONS The ground state of the system is described by the function Φ _j_0(x) (<ref>) with j_0=1. In this section, we analyze this solution for different values of the number N of bosons, mean particle density ρ̅=N/L, and the coupling constant γ =c/ρ̅.First of all, the wave function of condensate Φ _1(x) (<ref>) has to be determined. Knowing Φ _1(x), we can find the ground state energy E_0, the particle density profile ρ (x), and other quantities. We are interested in E_0 and ρ (x). To find Φ _1(x) (<ref>), it is necessary to solve the system of equations (<ref>) and (<ref>) at j_0=1. We solved it numerically with the help of the Newton method, putting l,l_1,l_2=1,2,… ,l_m-1 and denoting ϵ̃≡ g_l_m. As a result, we obtained l_m equations for l_m unknowns: g_1,g_2,… ,g_l_m. At j_0=1 , we found a unique solution of Eqs. (<ref>) and (<ref>) for each of the considered sets (N,ρ̅,γ ). §.§ Coefficients g_l The coefficients f_j=2√(2)/π j (g_l=2√(2)/π (2l-1)) and the energy ϵ̃=1 were used as seed values for the Newton method. They correspond to the zero approximation solution for the Bose gas with weak point interaction (N^-2≪γ≪ 1) <cit.>. For the seed f_j, formula (<ref>) gives Φ _1(x)=√(ρ̅) inside the system and Φ _1(x)=0 at the boundaries.The solutions g_l obtained within the GP approach for N=2 and N=1000 are shown in Figs. <ref> and <ref>, respectively. The values of g_l obtained in the GP_N approach are close by magnitude and therefore not shown. At γ≫ 1 the coefficients g_l are almost independent of γ and close to the seed values g_l<l_m=2√(2)/π (2l-1), g_l_m=1. At γ≪ 1, the coefficient g_1 is close to unity for all N. The coefficients g_1<l<l_m decrease as γ decreases. At γ≪ 1 they are smaller for smaller N and differ strongly from the seed values g_1<l<l_m. Recall that g_l_m denotes the energy ϵ̃ . 
The latter decreases with increasing γ and becomes close to ϵ̃=1 for all N at γ≫ 1. At γ≪ 1 the values of ϵ̃ increase rapidly as N decreases. In the GP approach (1-N^-1→ 1), Eqs. (<ref>) and (<ref>) possess scaling properties: they do not change if γ and N vary provided γ N^2=const. That is why the coefficients g_l for such pairs (γ ,N) are identical. The particle density profiles ρ (x) are also the same for them if ρ̅ are the same. However, the energies E_GP (<ref>) are different for such (γ ,N) pairs. §.§ Ground state energy In this subsection, we find the energies E_GP and E_GP_N for the condensate ground state (formula (<ref>) with j_0=1, ħ =2m=1) and analyze their dependences on N, ρ̅, and γ. We also compare E_GP and E_GP_N with the exact energy E_Bethe obtained from Eqs. (<ref>), (<ref>).The dependence E(γ ) calculated for N=2, 10, 100, 1000 is shown in Fig. <ref>. If γ =0, the ground state corresponds to N free particles with the total energy E=N(π /L)^2. As one can see from the figures, the energies E_GP and E_GP_N are close to the energy of free particles if γ is small. The larger N, the smaller γ at which such a nearness of energy values takes place (because as N increases, the potential energy increases faster than the kinetic one). At γ→∞, the exact solution E_Bethe tends to the limit of impenetrable bosons (for N→∞, this is E_Bethe=Nπ ^2n^2/3 <cit.>). Therefore, the curve E_Bethe(γ ) saturates at γ≫ 1. In this case, the energies E_GP and E_GP_N differ strongly from the exact energy. However, if γ≲ 0.1, the energies E_GP and E_GP_N are close to the exact energy E_Bethe (we verified it for ρ̅=0.1, 1, and 10). Moreover, for N≳ 100, the energies E_GP and E_GP_N are close to the Bogoliubov ground-state energy if γ is small but not too small.The dependence E(N) for different γ is shown in Fig. <ref>. From the dependence for c=1, one can see that the energy E_GP_N is close to E_Bethe even for a fairly large γ (γ =1) if N=2 or 3, but E_GP differs appreciably from E_Bethe for γ =1 and any N. At γ =0.1 the energies E_GP and E_GP_N are close to E_Bethe if N≲ 100. Finally, at γ =0.01, the energies E_GP, E_GP_N and E_Bethe are close to each other for all N.For all cases shown in the figures and not shown, the energy E_GP_N is closer to the exact energy E_Bethe than the energy E_GP, for all values of the parameters. In this case, for N≫ 1 the energies E_GP and E_GP_N are very close to each other, whereas for N≲ 10 they are appreciably different, with E_GP_N being much closer to E_Bethe. Note the following interesting feature. One can see in panel c=0.01 in Fig. <ref> that the condensate is close to the system of free particles when N≲ 10, and to Bogoliubov's system when N≳ 300. For N∼ 30÷ 100, the system has already left the free particle regime but has not yet entered Bogoliubov's one; in this case, the energies E_GP and E_GP_N are close to the exact energy E_Bethe. That is, for N≃ 30–100 structure (<ref>) seems to work fairly well, whereas the interaction between the particles manifests itself more in the change of the form of Φ (x) rather than in interparticle correlations. 
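As an illustration of the benchmark used in this subsection, the sketch below solves Gaudin's equations (<ref>) for the ground state (n_p=1) and evaluates E_Bethe=k_1^2+…+k_N^2 for ħ =2m=1. It is a minimal Python fragment: scipy's fsolve replaces the hand-coded Newton iteration, and the Tonks-limit values |k_p|=π p/L used as a seed, as well as the parameter values, are assumptions of the sketch.

```python
import numpy as np
from scipy.optimize import fsolve

def gaudin_residuals(k, L, c, n):
    """Residuals of Gaudin's equations for the moduli |k_p| under zero BCs:
    L*k_p = pi*n_p + sum_{j != p} [arctan(c/(k_p-k_j)) + arctan(c/(k_p+k_j))]."""
    N = len(k)
    res = np.empty(N)
    for p in range(N):
        s = 0.0
        for j in range(N):
            if j != p:
                s += np.arctan(c / (k[p] - k[j])) + np.arctan(c / (k[p] + k[j]))
        res[p] = L * k[p] - np.pi * n[p] - s
    return res

if __name__ == "__main__":
    N, rho, gamma = 10, 1.0, 0.1            # illustrative parameters
    L, c = N / rho, gamma * rho             # gamma = c / rho for hbar = 2m = 1
    n = np.ones(N)                          # ground state: n_p = 1
    seed = np.pi * np.arange(1, N + 1) / L  # Tonks-limit guess (assumption)
    k = fsolve(gaudin_residuals, seed, args=(L, c, n))
    E_Bethe = np.sum(k**2)                  # hbar = 2m = 1
    print("E_Bethe =", E_Bethe, " free-particle value =", N * np.pi**2 / L**2)
```

Once the coefficients g_l have been found, E_GP and E_GP_N follow from formula (<ref>), so all three energies can be compared on the same footing.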
§.§ Particle density profile for the ground state The local particle density for a 1D system of N bosons in the state Ψ (x_1,… ,x_N) is defined by the formulaρ (x)=N∫ dx_2… dx_N|Ψ (x,x_2,… ,x_N)|^2.For N free particles being in the ground state Ψ (x_1,… ,x_N)=∏_j=1,… ,N√(2/L)sin(π x_j/L), it givesρ (x)=2ρ̅sin ^2(π x/L),ρ̅=N/L.In the GP_N approach with the condensate ansatz (<ref>), we obtain ρ (x)=N|ψ (x,t)|^2 with the normalization ∫_0^Ldx|ψ (x,t)|^2=1. For the stationary solution (<ref>) with normalization (<ref>), formula (<ref>) givesρ (x)=|Φ (x)|^2,where Φ (x) is the wave function from the GP_N equation (<ref> ). The operator GP_N approach also leads to (<ref>): Ψ̂( 𝐫,t)=â_0Ψ (𝐫,t)/√(N), ρ (x)=⟨Ψ̂^+(x,t)Ψ̂(x,t)⟩ _T=0=⟨N̂_0⟩ _T=0|Ψ (x,t)|^2/N=|Ψ (x,t)|^2=|Φ (x)|^2 (here ⟨Â⟩ _T=0=∫ dx_1… dx_NΨ _N^∗ÂΨ _N=⟨ N|Â|N⟩). Similarly, within the GP approach we find Ψ̂(x,t)=Ψ (x,t) and ρ (x)=⟨Ψ̂^+(x,t)Ψ̂(x,t)⟩ _T=0=|Ψ (x,t)|^2=|Φ (x)|^2.Let us determine ρ (x) for the GP and GP_N approaches on the basis of Eqs. (<ref>), (<ref>), (<ref>), and (<ref>). The results of numerical analysis are depicted in Figs. <ref> and <ref>. The GP and GP_N curves ρ (x) are close to each other and would be visually indistinguishable. Therefore, only the GP_N curves are plotted.Figure <ref> demonstrates the ρ (x)-profile for γ =0.01 and different N. The curves ρ (x) for N=10 and N=30 are similar to the curve for N=2 and contain no intervals with a constant particle density ρ (x)=const. Such ρ (x) are close to those for free particles. On the contrary, for N≳ 100, the profile ρ (x) contains a large section where the particle density is constant, which testifies that the collective properties of the system manifest themselves at N≳ 100. Note that the profiles ρ (x) obtained in the GP and GP _N approaches for γ =0.01, ρ̅=1, N=2 coincide with high accuracy with the profile ρ (x) found by the Bethe ansatz (see Fig. <ref>). Figure <ref> shows the profiles ρ (x) for N=1000 and different γ. Let us introduce the half-width η of the wall layer; this parameter is equal to the smallest coordinate x for which ρ (x)=ρ (L/2)/2. Numerical analysis showed that for N=2 and γ≲ 1, the quantity η is practically independent of γ and is equal to η≈ L/4. For N=30 and 0.1≲γ≲ 1, η considerably depends on γ. And for N=1000 and N^-2≪γ≲ 1, we have η≈1/ρ̅√(γ) (a close estimate η≈π/2 ρ̅√(γ) was obtained in <cit.>). For γ N^2≲ 1 and any N≥ 2, the interval with the constant particle density disappears, and the system is in the near-free particle regime. In this case, the dependence ρ (x) for N interacting bosons is close to dependence (<ref>) for free bosons, and the concept of the wall layer partly loses its meaning, although we may assume that η =L/4. §.§ Near-free particle regime The regime of near-free particles corresponds to the condition γ N^2≲ 1. For N≫ 1, this relation corresponds to ultraweak coupling, γ≲ N^-2. In this case, the solutions |k_j| of Gaudin's equations (<ref>) have specific properties, and the ground state energy is approximately determined by the formula <cit.>E_af≈Nπ ^2/L^2+3(N-1)γρ̅^2/2.For example, for N=1000, ρ̅=1, and γ =10^-8, we have E_af=0.0098845894≡ Nπ ^2/L^2+0.000014985, E_GP=0.0098846031≡ Nπ ^2/L^2+0.0000149987, and E_GP_N=0.0098845881376≡ Nπ ^2/L^2+0.000014983738. The exact solution is E_Bethe=0.009884588128≡ Nπ ^2/L^2+0.000014983728.In Fig. <ref> the values ofE_GP-E_Bethe/E_Bethe and E_GP_N-E_Bethe/E_Bethe are plotted for different γ and N for ρ̅=1. Since E_Bethe is the exact solution, Fig.  
<ref> illustrates the accuracy of the GP and GP_N approaches. It is easy to see that for γ N^2≲ 1, i.e. in the near-free particle regime, the GP_N approach is in much better agreement with the exact one than the GP approach.

§ COMPARISON OF THE GP AND GP_N APPROACHES

Fig. <ref> shows several patterns of relationships. First, both approaches reproduce the exact energy E_Bethe well for all N≥ 2 if γ≲ 0.1. Second, the GP and GP_N approaches give similar results for large N. However, for small N, the GP_N solutions agree with the exact solutions much better than the GP ones do. For example, for N=2, ρ̅=1, γ =10^-3, the estimates (E_GP-E_Bethe)/E_Bethe≈ 10^-3.5∼γ and (E_GP_N-E_Bethe)/E_Bethe≈ 10^-7.4∼γ ^2.5 hold, whereas for N=2, ρ̅=1, γ =10^-7 we have (E_GP-E_Bethe)/E_Bethe≈ 10^-7.5∼γ and (E_GP_N-E_Bethe)/E_Bethe≈ 10^-15.7∼γ ^2.2. In the latter case, E_GP=4.9348025, E_GP_N=4.9348023505446781, and E_Bethe=4.9348023505446772. That is, for γ =10^-7 the relative error of the GP_N solution is about 10^-16. This is amazing accuracy!

A more general property holds: the GP_N approach works much better than the GP one in the near-free particle regime (γ N^2≲ 1), which corresponds to small N (N≲γ ^-1/2) or small γ (γ≲ N^-2). Why? In our opinion, this is a result of the following: it is natural to expect that every near-free boson is, with high probability, in the condensate, for any N≥ 2. Therefore, ansätze (<ref>), (<ref>), and Ψ̂(x,t)=Ψ (x,t) are good approximations. Since ansätze (<ref>) and (<ref>) are somewhat more accurate than the c-number ansatz Ψ̂(x,t)=Ψ (x,t), the GP_N approach turns out to be more accurate than the GP one.

It is commonly believed that the GP approach works only for large N because the approximation Ψ̂(x,t)=Ψ (x,t) is reasonable only when N≫ 1. In point of fact, the GP approach works even better for small N than for large N (see Fig. <ref>). This surprising property appears to be due to the fact that the GP equation is simply close to the GP_N one, which describes few-particle systems with high accuracy. In turn, the following question arises: why does the GP_N equation work better in the case of small N? The evident answer is that the role of two- and many-particle correlations is smaller for small N (such correlations are not taken into account in ansätze (<ref>), (<ref>)). More information can be obtained from the diagonal expansion of the single-particle density matrix F_1(x,x^')=∑_j=1^∞λ _jϕ _j^∗(x^')ϕ _j(x), where λ _j are the occupation numbers of the single-particle states ϕ _j(x), and ϕ _j(x) form the complete collection of orthonormal functions. Condensate ansätze (<ref>) and (<ref>) describe the system well only if √(N)ϕ _1(x) is close to the solution Φ (x) of the GP_N equation, and the relations λ _1≃ N, λ _j>1≪ N hold (in this case, λ _1+λ _2+… +λ _∞=N). We suppose that (λ_1(γ)/N|_N≤ 10)>(λ _1(γ)/N|_N≥ 100) for γ≲ 0.1. If so, then the role of two- and many-particle correlations is smaller for N≲ 10 than for N≳ 100.

Next, one can see from Fig. <ref>c that the Bogoliubov energy E_Bog for large N is closer to the exact energy E_Bethe than the energies E_GP and E_GP_N. That is, the Bogoliubov method describes the ground state of the system with large N more accurately than the GP and GP_N approaches do.
This property shows that taking into account above-condensate atoms—which is done in the Bogoliubov model, but not in the GP and GP_N approaches—is significant.In order to better understand the properties of the system, it is necessary to find thedensity matrix (<ref>) and an equation for the condensate in the approximation that accurately takes into account the above-condensate atoms (or, equivalently, two- and many-particle correlations). § CONCLUDING REMARKS We have shown that the condensate ansätze (<ref>) and (<ref>) lead to a nonlinear Schrödinger equation (the GP_N equation), which differs from the standard Gross-Pitaevskii (GP) equation by the additional factor ( 1-1/N).We have analysed the ground state of the system and compared the solutions of the GP and GP_N equationswith the exact solutions. The analysis showed that both equations describe well a Bose system with any number of particles N≥ 2, if the coupling is weak: γ≲ 0.1 (we considered the mean particle densities n=0.1, 1, and 10). This result significantly extends the conventional view that the Gross-Pitaevskii equation is applicable only in the case of N≫ 1.In the case of the near-free particle regime, γ N^2≲ 1, the GP_N equation describes the Bose system much better than the standard Gross-Pitaevskii equation does. The condition γ N^2≲ 1 can be written as γ≲ N^-2 or N≲γ ^-1/2, which corresponds to ultraweak coupling or small N, respectively. These properties and the analysis in <cit.> indicate that phonons in a superfluid Bose gas and helium-II are not Goldstone bosons (see section 2).Since the GP_N equation describes a few-boson system with high accuracy, it is worth extending the concept of the condensate to systems with small N, using the following criterion: if λ_1≫λ_2+… + λ_N in (<ref>), then state 1 is a condensate state, whereas other states are not. For an ideal gas, all λ_j are integers, and this criterion gives λ_1=N; λ_2=… = λ_N=0.Interestingly, the GP and GP_N equations work well for all N and γ≲ 0.1, although they completely ignore two-particle and higher order correlations. This property means that these correlations are weak when γ≲ 0.1.To describe a system of N spinless interacting bosons, several methods have been proposed (below we cite only some references, according to our subjective view). These are analytical methods for systems with N≫ 1 and weak coupling <cit.>; numerical methods for systems with N=2÷ 10 and any, weak or strong, coupling (here different expansions in basis functions are used) <cit.>; and Monte Carlo methods for systems with N∼ 10÷ 100 and arbitrary coupling <cit.>. The exactly solvable approach describes 1D systems of N≥ 2 point bosons, for arbitrary coupling <cit.>. The GP and GP_N equations provide one more analytical method for systems with N≥ 2 (in this case, the GP_N equation more accurately describes systems with 2≤ N≲ 100 ). To our knowledge, the ground state of Bose systems with 10≲ N≲ 100 can be accurately described only by the Monte Carlo method and the GP_N approach. Thus, the factor ( 1-1/N) makes it possible to accurately describe Bose systems with small N using the Gross-Pitaevskii equation.§ ACKNOWLEDGMENTS This research was supported by the National Academy of Sciences of Ukraine (project No. 0121U109612) and the Simons Foundation (grant No. 1030283).§ APPENDIX: PROOF THAT Φ (X) IS REAL Let us show that any solution Φ (x) of Eq. (<ref>) with zero BCs ( <ref>) can be written in the real form.Let a solution of Eq. 
(<ref>) be complex, Φ (x)=Φ _1(x)+iΦ _2(x), where Φ _1(x) and Φ _2(x) are real functions. Then Eq. (<ref>) can be written as two equations:ϵΦ _1=-ħ ^2/2m∂ ^2Φ _1/∂ x^2+2c(Φ _1^2+Φ _2^2)Φ _1, ϵΦ _2=-ħ ^2/2m∂ ^2Φ _2/∂ x^2+2c(Φ _1^2+Φ _2^2)Φ _2.Multiplying Eq. (<ref>) by Φ _2(x) and Eq. (<ref>) by Φ _1(x), and subtracting the results, we obtain the equationΦ _2∂ ^2Φ _1/∂ x^2=Φ _1∂ ^2Φ _2/∂ x^2.Let us expand Φ _1(x) and Φ _2(x) in sine series,Φ _1(x)=∑_j=1,2,… ,∞a_j√(2/L)·sin (k_jx),Φ _2(x)=∑_j=1,2,… ,∞b_j√(2/L )·sin (k_jx),where k_j=π j/L. Then Eq. (<ref>) can be written in the form∑_j_1j_2=1,2,… ,∞k_j_1^2(b_j_1a_j_2-b_j_2a_j_1)sin (k_j_1x)sin (k_j_2x)=0,or(1/2)∑_j_1j_2=1,2,… ,∞k_j_1^2(b_j_1a_j_2-b_j_2a_j_1)[cos (k_j_1-j_2x)-cos (k_j_1+j_2x)]=0.Using the formulaecos (k_jx)=∑_p=1,2,… ,∞c_j^psin (k_px), c_j^p=[ [ 0p-j,; 2/π( 1/p-j+1/p+j)p-j, ].let us write Eq. (<ref>) as follows:(1/2)∑_j_1j_2=1,2,… ,∞k_j_1^2(b_j_1a_j_2-b_j_2a_j_1)∑_p=1,2, … ,∞(c_j_1-j_2^p-c_j_1+j_2^p)sin (k_px)=0.Let the function Φ _1(x) be known, and Φ _2(x) unknown. Since the functions sin (k_px) are independent, we obtain from Eq. (<ref> ) the system of equations for the unknown coefficients b_j:∑_j_1j_2=1,2,… ,∞k_j_1^2(b_j_1a_j_2-b_j_2a_j_1)(c_j_1-j_2^p-c_j_1+j_2^p)=0, p=1,2,… ,∞ .Such a system of linear homogeneous equations for b_j, ∑_jA_pjb_j=0, always has a zero solution: b_j=0 for all j=1,2,… ,∞. If the determinant of the matrix A_pj is zero, then this system of equations also has one nonzero solution, namely, b_j≠ 0 for at least two j's. It is easy to see that the quantities b_j=ra_j, where r is the same for all j=1,2,… ,∞, provide a solution of system (<ref>). So, we obtain two possible solutions: Φ _2(x)=rΦ _1(x) and Φ _2(x)=0. In both cases, Φ (x)=Φ _1(x)+iΦ _2(x)=const·Φ _1(x). Therefore, we can consider the function Φ (x) in Eq. (<ref>) to be real. This analysis can be easily generalized to two- and three-dimensional cases.99gross1957 Gross E P Phys. Rev. 106 161 (1957)gross1958 Gross E P Ann. Phys. 4 57 (1958)bog1947 Bogoliubov N N J. Phys. USSR 11 23 (1947)pit1961 Pitaevskii L P Sov. Phys. JETP 13 451 (1961)gross1961 Gross E P Nuovo Cimento 20 454 (1961) https://doi.org/10.1007/BF02731494leggett2001 Leggett A G Rev. Mod. Phys. 73, 307 (2001) https://doi.org/10.1103/RevModPhys.73.307pethick2008 Pethick C J and Smith H Bose–Einstein Condensation in Dilute Gases (Cambridge University Press, New York, 2008)blume2012Blume D Rep. Prog. Phys.75 046401 (2012)https://doi.org/10.1088/0034-4885/75/4/046401pitstring Pitaevskii L and Stringari S Bose-Einstein Condensation and Superfluidity (Oxford University Press, New York, 2016) ch 5cornell2002 Cornell E A and Wieman C E Rev. Mod. Phys. 74 875 (2002)ketterle2002 Ketterle W Rev. Mod. Phys. 74 1131 (2002)fetter Fetter A L and Walecka J D Quantum Theory of Many-Particle Systems (McGraw-Hill, New York, 1971)pinesnoz Noziéres P and Pines D The Theory of Quantum Liquids, vol. II (CRC Press, New York, 2018)lieb2005 Lieb E H, Seiringer R, Solovej J P and Yngvason JThe Mathematics of the Bose Gas and its Condensation (Birkhä user-Verlag, Basel, 2005)esryphdEsry B DMany-body effects in Bose-Einstein condensates of dilute atomic gases,PhD Thesis (University of Colorado, Boulder, 1997)salasnich2000 Salasnich L Int. J. Mod. Phys. B 14 1 (2000) https://doi.org/10.1142/S0217979200000029blume2001Blume D and Greene C H Phys. Rev. 
A63 063601 (2001)https://doi.org/10.1103/PhysRevA.63.063601gp2 Tomchenko MarXiv:2311.03176 [cond-mat.quant-gas]carr2000r Carr L D, Clark C W and Reinhardt W P Phys. Rev. A 62 063610 (2000). https://doi.org/10.1103/PhysRevA.62.063610petrov2004 Petrov D S, Gangardt D M and Shlyapnikov G V J. Phys. IV Fr. 116 5 (2004)bouchoule2009 Bouchoule I, van Druten N J and Westbrook C I arXiv:0901.3303 [physics.atom-ph]yuv1 Vakarchuk I A and Yukhnovskii I R Theor. Math. Phys. 40 626 (1979) https://doi.org/10.1007/BF01019246gross1962 Gross E P Ann. Phys. 20 44 (1962) https://doi.org/10.1016/0003-4916(62)90115-Xwoo1972 Woo C-W Phys. Rev. A 6 2312 (1972) https://doi.org/10.1103/PhysRevA.6.2312feenberg1974 Feenberg E Ann. Phys. 84 128 (1974) https://doi.org/10.1016/0003-4916(74)90296-6wu1961 Wu T T J. Math. Phys. 2 105 (1961) https://doi.org/10.1007/BF01019246land3 Landau L D and Lifshitz E M Quantum Mechanics. Non-Relativistic Theory (Pergamon Press, New York, 1980)bogquasi Bogoliubov N N Lectures on Quantum Statistics, vol. 2: Quasi-Averages (Gordon and Breach, New York, 1970)gardiner1997 Gardiner C W Phys. Rev. A 56 1414 (1997) https://doi.org/10.1103/PhysRevA.56.1414girardeau1998 Girardeau M D Phys. Rev. A 58 775 (1998) https://doi.org/10.1103/PhysRevA.58.775anderson1984 Anderson P W Basic notions of condensed matter physics (Benjamin/Cummings, Menlo Park CA, 1984) ch 2forster2018 Forster D Hydrodynamic fluctuations, broken symmetry, and correlation functions (CRC Press, Boca Raton FL, 2018) ch 7, 10powell2020 Powell B J Contemporary Physics 61 96 (2020) https://doi.org/10.1080/00107514.2020.1832350goldstone1962 Goldstone J, Salam A and Weinberg S Phys. Rev. 127 965 (1962) https://doi.org/10.1103/PhysRev.127.965andersen1994a Andersen K H, Stirling W G, Scherm R, Stunault A, Fak B, Godfrin H and Dianoux A JJ. Phys.: Condens. Matter 6 821 (1994) https://doi.org/10.1088/0953-8984/6/4/003andersen1994b Andersen K H and Stirling W GJ. Phys.: Condens. Matter 6 5805 (1994) https://doi.org/10.1088/0953-8984/6/30/004blag1997 Blagoveshchenskii N M, Puchkov A V, Skomorokhov A N, Bogoyavlenskii I V and Karnatsevich L V Low Temp. Phys. 23 374 (1997) https://doi.org/10.1063/1.593381andersen1999 Gibbs M R, Andersen K H, Stirling W G and Schober HJ. Phys.: Condens. Matter 11 603 (1999) https://doi.org/10.1088/0953-8984/11/3/003kalinin2007 Kalinin I V, Lauter H and Puchkov A V JETP 105 138 (2007) https://doi.org/10.1134/S1063776107070291bethe Bethe H A Z. Phys. 71 205 (1931)ll1963 Lieb E H and Liniger W Phys. Rev. 130 1605 (1963)gaudin1971 Gaudin M Phys. Rev. A 4 386 (1971)gaudinm Gaudin M The Bethe Wavefunction (Cambridge University Press, Cambridge, 2014)syrwid2021 Syrwid A J. Phys. B: At. Mol. Opt. Phys. 54 103001 (2021) https://doi.org/10.1088/1361-6455/abd37fmtjpa2017 Tomchenko M J. Phys. A: Math. Theor. 50 055203 (2017)mtmethodbog Tomchenko M D Ukr. J. Phys. 64 250 (2019) https://doi.org/10.15407/ujpe64.3.250girardeau1960 Girardeau M J. Math. Phys. 1 516 (1960)batchelor2005 Batchelor M T, Guan X W, Oelkers N and Lee CJ. Phys. A: Math. Gen. 38 7787 (2005)mt2015 Tomchenko M J. Phys. A: Math. Theor. 48 365003 (2015)fey1954 Feynman R Phys. Rev. 94 262 (1954) https://doi.org/10.1103/PhysRev.94.262bz1955 Bogoliubov N N and Zubarev D NSov. Phys. JETP 1 83 (1956)brueck1959 Brueckner K Theory of Nuclear Structure (Methuen, London, 1959)yuv2 Vakarchuk I A and Yukhnovskii I R Theor. Math. Phys. 42 73 (1980) https://doi.org/10.1007/BF01019263pash2004 Pashitskii E A, Mashkevich S V and Vilchynskyy S I J. Low Temp. Phys. 
134 851 (2004) https://doi.org/10.1023/B:JOLT.0000013206.08699.a2mch Multidimensional Quantum Dynamics: MCTDH Theory and Applications ed H-D Meyer, F Gatti and G A Worth (Wiley-VCH, Weinheim, 2009) zinner2016 Zinner N T EPJ Web of Conferences 113 01002 (2016) https://doi.org/10.1051/epjconf/201611301002sowinski2019 Sowiński T and Garćia-March M A Rep. Prog. Phys. 82 104401 (2019)ceperley1992 Schmidt K E and Ceperley D M Monte Carlo techniques for quantum fluids, solids and droplets, in Monte Carlo Methods in Condensed Matter Physics, ed K Binder, Topics in Applied Physics, vol 71 (Springer, Heidelberg, 1992) pp. 205–248 https://doi.org/10.1007/3-540-60174-0_7whitlock2006 Whitlock P A and Vitiello S A Quantum Monte Carlo Simulations of Solid ^4He, in: Large-Scale Scientific Computing. LSSC 2005, ed I Lirkov, S Margenov and J  Wasniewski, Lecture Notes in Computer Science, vol 3743 (Springer, Berlin, 2006) pp. 40–52 https://doi.org/10.1007/11666806_4lieb1963 Lieb E H Phys. Rev. 130 1616 (1963) https://doi.org/10.1103/PhysRev.130.1616mtsp2019 Tomchenko M D Dopov. Nac. Akad. Nauk Ukr. No. 12 49 (2019) https://doi.org/10.15407/dopovidi2019.12.049
http://arxiv.org/abs/2310.18528v2
{ "authors": [ "Maksim Tomchenko" ], "categories": [ "cond-mat.quant-gas" ], "primary_category": "cond-mat.quant-gas", "published": "20231027231125", "title": "Nonuniform Bose-Einstein condensate. I. An improvement of the Gross-Pitaevskii method" }
[email protected] Institute of Applied Physics of the Russian Academy of Sciences, 603950 Nizhny Novgorod, Russia Recently, brown dwarfs have emerged as a new topic for the astrophysical studies. These objects are intermediate between solar-type stars and giant gaseous planets. In this article, the analogies between brown dwarfs and the planet Jupiter are considered with a focus on the surrounding plasma. I consider the magnetohydrodynamic version of the Rayleigh-Taylor instability (or so called “interchange instability”) as a minimal model of the expansion of the plasma disc surrounding Jupiter. By comparing the theoretical prediction for the radial expansion rate of the disc with the observations I quantitatively confirm the existing qualitative result, which predicts that the Rayleigh-Taylor instability provides too quick expansion. Therefore, in the realistic plasma disc yet another mechanism must operate which slows down the expansion. I suggest that similar mechanisms take place in the observed radiation belts of brown dwarfs. Some properties of plasma surrounding brown dwarfs Dmitry Kobyakov 2023-10-26 ==================================================Introduction. Brown dwarf is a stellar-type celestial body with mass M_* in the range 13M_Jup<M_*<80M_Jup, or, in solar masses, 1.241×10^-2M_<M_*<7.636×10^-2M_, where the lower limit corresponds to the minimum mass suitable for the stellar deiterium combustion and the upper limit corresponds to the minimum mass suitable for the stellar hydrogen combustion. Here, M_Jup=1.8913×10^30 g is the Jupiter mass. The spectral type of brown dwarf is in the range M7-M9, L, T, Y. Its temperature is between 300 and 2500 K. The dipolar magnetic field on the surface is typically of the order of 10^3-10^4 G. The possible emission types are radio, infrared, optical, ultravioler and X-ray <cit.>.Observations <cit.> of the brown dwarf 2MASS J18353790+3259545 (equivalently denoted as LSR J1835+3259) with mass ∼77M_Jup, radius ∼1.07R_Jup and rotation period 1.008×10^4 s, have revealed a radiation belt surrounding the star. The radiation belt has radius ∼17R_Jup, where R_Jup=7.1492×10^9 cm is Jupiter's radius <cit.>. The existence of the radiation belt, relatively strong magnetic field and rapid rotation observed from LSR J1835+3259 indicates that there are analogies between the radio emission mechanisms in its magnetosphere and the physics of the radiation belt of Jupiter. At present, the origin of the plasma in the radiation belt of LSR J1835+3259 is unclear but it is likely that in analogy with the Jupiter-Io system there is a planetary satellite <cit.>.An elementary physical picture of the radiation belt is based on the model of the uniform (solid-like) rotation of the magnetosphere. The mechanism maintaining the rotation of the plasma surrounding a rotating magnetic dipole with electrically conducting surface has been considered in <cit.>. The Alfven radius defines the radial distance from the star center to the point where the configuration of the magnetic field lines changes from closed to open (Fig. 1). The black dot in Fig. 1 is the source of plasma (Io in case of Jupiter's magnetosphere). With R_K<R_A, the magnitosphere is centrifugal <cit.>, where R_K=(GM_*/Ω^2)^1/3 is the Kepler radius (Fig. 1), Ω is the rotational angular frequency. Formation of a plasma disc (Figs. 1,2) as a result of the magnetosphere rotation has been first shown for the magnetic star σ Ori E <cit.>. The same mechanism leads to the formation of Jupiter's plasma disc. 
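To make the centrifugal-magnetosphere condition R_K<R_A concrete for LSR J1835+3259, the Kepler radius can be estimated from the parameters quoted above (M_*≈77 M_Jup, rotation period 1.008×10^4 s). This is only an order-of-magnitude sketch; the numerical values of G, M_Jup and R_Jup are standard constants rather than quantities derived from the observations discussed here:

import numpy as np

G     = 6.674e-11      # m^3 kg^-1 s^-2
M_Jup = 1.8913e27      # kg
R_Jup = 7.1492e7       # m

M_star = 77.0 * M_Jup               # mass of LSR J1835+3259
Omega  = 2.0 * np.pi / 1.008e4      # rotational angular frequency, rad s^-1

R_K = (G * M_star / Omega**2) ** (1.0 / 3.0)   # Kepler (corotation) radius
print(R_K / R_Jup)                             # ~4 R_Jup

A Kepler radius of a few R_Jup lies well inside the radiation belt observed at ∼17 R_Jup, so the belt plasma sits in the centrifugally supported region, provided the Alfven radius lies farther out.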
The standard model of the radial expansion of Jupiter's plasma disc is the convective (or so called interchange) plasma instability of the plasma disc <cit.>. However, there remains an open question <cit.>: why is the observed expansion of the plasma disc is significantly slower than the expansion rate predicted theoretically in the framework of the interchange plasma instability? The notion of the “interchange mode” has appeared in the beginning of studies of the laboratory plasma. It implies that the plasma and the confining magnetic field switch their spatial locations as a result of action of the external forces. In dealing with the interchange plasma instability I will follow the book <cit.>. The problem of the interchange plasma instability is analogous to the Rayleigh-Taylor instability.Figure 3 shows a schematic picture of the plasma slab in a uniform external force field supported by the magnetic field. Linearization of the equations of motion of ideal isothermic plasma with a perturbation of a fluid element ξ, the external free-fall acceleration 𝐠=(-g,0,0) and the perturbation wave vector 𝐤_0=(0,k_y,k_z), (Fig. 3), leads to the resulting potential energy W of the system:W=ξ_x(0)/2k_0[(𝐤_0·𝐁_0)^2/tanhk_0a - ρ_0k_0g + (𝐤_0·𝐁̂_0)^2/tanhk_0b].Equation (<ref>) shows that (i) the external force 𝐠 (g≥0) always destabilizes the plasma, (ii) the magnetic induction may stabilize the plasma.In case when the plasma is inhomogeneous along x axis, the instability is described by the equation found for the first time in <cit.>. If the conditions 𝐁_0×𝐁̂_0=0 and 𝐁_0·𝐁̂_0>0 are satisfied, the dispersion equation has the formω^4-Ω_1^4+Ω_2^4=0,where Ω_1^4=b^2+2c^2/b^2+c^2k_∥^2b^2+k_0^2/k_0^2+q^2N_m^2; Ω_2^4=c^2/b^2+c^2k_∥^2b^2(k_∥^2b^2 + k_0^2/k_0^2+q^2N_B^2); c=γ[p(x=0)]/[ρ(x=0)]; b=B_0/√(ρ(x=0)); γ is the adiabatic index; k_∥ is the component of 𝐤_0 which is parallel to 𝐁_0; ξ∼ e^iqx, qL≫1, L=(p+B^2/2)/ρ g is the size of equilibrium variations. The frequencies (Brunt-Väisäläa and its magnetic modification <cit.>) are given byN_b^2=-1/ρ(ρ' g + ρ^2g^2/γ p), N_m^2=-1/ρ(ρ' g + ρ^2g^2/γ p+B^2),where ρ'≡∂_xρ|_x=0. The relation between the growth rates is defined by four quantities:Γ=-ρ'/ρg,Γ_B=ρ g^2/γ p, Γ_m=ρ g^2/γ p+B^2,Γ_0=Γ_m^2/Γ_B.It has been known that (i) the plasma is stable when Γ_B≤Γ; (ii) at Γ_0≤Γ<Γ_B the most unstable mode is the quasiinterchange mode (k_∥≠0) and its growth rate is ω^2=-ρ g^2/B^2(1-√(Γ/Γ_B))^2; at Γ≤Γ_0 the most unstable is the interchange mode with the growth rate ω^2=Γ-Γ_m. Numerical results. For Jupiter's plasma disc, the parameters entering Eq. (<ref>) are known from observations, and thus, the most unstable mode can be easily found. Using figure 4 of <cit.> I find the characteristic distance of the outer edge of the plasma disc (x=0) from Jupiter's center (x=x_2) (Fig. 2):x_2≈20R_Jup.The mass density of the electron-ion plasma ρ=Am_pn(x=0), where A∼48 is the atomic mass (assuming that the sulfur oxide is the ion component of plasma), m_p is the proton mass, n∝(x_2-x)^-3, from figure 4 of <cit.>n(x=0)≈1 cm^-3,g≈ r_2Ω_Jup^2=4.422×10^3 cm s^-2where Ω_Jup=1.759×10^-4 rad s^-1. From these parameters I findΓ=-9.277×10^-8 s^-2,Γ_B=6.821×10^-4 s^-2, Γ_m=9.9×10^-7 s^-2,Γ_0=1.437×10^-9 s^-2.It follows from Eqs. (<ref>)-(<ref>) that the case Γ<Γ_0 (since Γ<0) is realized. Therefore, the expansion of the plasma disc of Jupiter should occur due to the interchange mode with the characteristic growth rate from Eq. 
(<ref>):τ_theory∼1.056×10^3 s.This result implies that the theoretical prediction for the growth rate is significantly smaller than it is expected from observations. The latter has the order of 20-80 days <cit.>, or in case of 20 days,τ_observ∼1.728×10^6 s. Conclusions. The quantitative estimate for the expansion rate of Jupiter's plasma disc, Eq. (<ref>), agrees with the qualitative prediction known from the literature <cit.>. Specifically, the theory predicts a growth rate, Eq. (<ref>), which is a few orders of magnitude smaller than it is inferred from the observations, Eq. (<ref>). In case when a brown dwarf possess a plasma disc, the analogous situation is expected. Such a discrepancy between the theory and observations indicates that a significant piece of theoretical understanding of the plasma surrounding those celestial bodies is missing. In the future work it is therefore necessary to identify possible physical mechanisms, which are responsible for the practical increase of the duration of the loss of matter. It is necessary to analyze the following possible reasons. (i) Nonzero shear of the magnetic field, which has not been included in the linear analysis in Eq. (<ref>). (ii) Account for the Birkeland currents and the corresponding electric current in the plasma disc. (iii) The action of the Kelvin-Helmholtz instability on the nonlinear stage of the interchange instability found in Eq. (<ref>).Acknowledgements. I thank P. A. Bespalov for helpful comments and discussions. This research was supported by the Russian Science Foundation under grant No. 20-12-00268.Translated by the author.Burrows2001 A. Burrows, W. B. Hubbard, J. I. Lunine, and J. Liebert, Rev. Mod. Phys. 73, 719 (2001). The Theory of Brown Dwarfs and Extrasolar Giant Planets. https://doi.org/10.1103/RevModPhys.73.719Hallinan2006 G. Hallinan, A. Antonova, J. G. Doyle, S. Bourke, W. F. Brisken and A. Golden, ApJ 653, 690 (2006). Rotational Modulation of the Radio Emission from the M9 Dwarf TVLM 513–46546: Broadband Coherent Emission at the Substellar Boundary? https://doi.org/10.1086/508678 ZaitsevStepanov2022 V. V. Zaitsev and A. V. Stepanov, Geomagnetism and Aeronomy 62, 1078 (2022). Two Populations of Magnetic Loops in the Atmosphere of the Brown Dwarf TVLM 513–46546. https://doi.org/10.1134/S0016793222080254 Climent2023 J. B. Climent, J. C. Guirado, M. Perez-Torres, J. M. Marcaide, AND L. Pena-Monino, Science 381, 1120 (2023). Evidence for a Radiation Belt around a Brown Dwarf. https://doi.org/10.1126/science.adg6635 Kao2023 M. M. Kao, A. J. Mioduszewski, J. Villadsen and E. L. Shkolnik , Nature 619, 272 (2023). Resolved Imaging Confirms a Radiation Belt around an Ultracool Dwarf. https://doi.org/10.1038/s41586-023-06138-w Bespalov2018 P. A. Bespalov and O. N. Savina, MNRAS 480, 4761 (2018). An Excitation Mechanism of Electromagnetic Pulses by Relativistic Electrons in the Brown Dwarfs Rarefied Magnetosphere. https://doi.org/10.1093/mnras/sty2204 HonesBergeson1965 E. W. Hones Jr. and J. E. Bergeson, J. Geophys. Res. 70, 4951 (1965). Electric Field Generated by a Rotating Magnetized Sphere. https://doi.org/10.1029/JZ070i019p04951 udDoulaOwocki2002 A. ud-Doula and S. P. Owocki, ApJ 576, 413 (2002). Dynamical Simulations of Magnetically Channeled Line-driven Stellar Winds. I. Isothermal, Nonrotating, Radially Driven Flow. https://doi.org/10.1086/341543 Nakajima1985 R. Nakajima, Astrophys. Space Sci. 116, 285 (1985). The Circumstellar Gas of Sigma Orionis E. https://doi.org/10.1007/BF00653783 BagenalDols2020 F. 
Bagenal and V. Dols, J. Geophys. Res. 125, e2019JA027485 (2020). The Space Environment of Io and Europa https://doi.org/10.1029/2019JA027485 Goedbloed2019 J. P. Goedbloed, R. Keppens, S. Poedts, Magnetohydrodynamics of Laboratory and Astrophysical Plasmas, (Cambridge University Press, 2019). Goedbloed1971 J.P. Goedbloed, Physica 53, 412 (1971). Stabilization of Magnetohydrodynamic Instabilities by Force-Free Magnetic Fields: I. Plane Plasma Layer. https://doi.org/10.1016/0031-8914(71)90127-3 Bespalov2006 P. A. Bespalov, S. S. Davydenko, S. W. H. Cowley, and J. D. Nichols, Ann. Geophys. 24, 2043 (2006). Interchange Instability of the Plasma Disk in Jupiter's Middle Magnetosphere and its Relation to the Radial Plasma Density Distribution. https://doi.org/10.5194/angeo-24-2043-2006
http://arxiv.org/abs/2310.18017v1
{ "authors": [ "Dmitry Kobyakov" ], "categories": [ "astro-ph.SR", "astro-ph.EP" ], "primary_category": "astro-ph.SR", "published": "20231027094713", "title": "Some properties of plasma surrounding brown dwarfs" }
firstpage–lastpage Edge AI Inference in Heterogeneous Constrained Computing: Feasibility and Opportunities Roberto Morabito University of Helsinki [email protected] Mallik Tatipamula Ericsson [email protected] Sasu Tarkoma University of Helsinki [email protected] Mung Chiang Purdue University [email protected] XXX. Received YYY; in original form ZZZ ======================================================================================================================================================================================================================================================== The Milky Way (MW) is by far the best-studied galaxy and has been regarded as an ideal laboratory for understanding galaxy evolution.However, direct comparisons of Galactic and extra-galactic observations are marred by many challenges, including selection effects and differences in observations and methodology. In this study, we present a novel codeto address these challenges by generating mock integral-field spectrograph data cubes of the MW using simple stellar population models and a mock stellar catalog of the Galaxy derived from E-Galaxia. The data products are in the same format as external galaxies, allowing for direct comparisons. We investigate the ability ofto recover kinematics and stellar population properties for an edge-on mock observation of the MW.We confirm thatcan distinguish kinematic and stellar population differences between thin and thick disks. However,struggles to recover star formation history, where the SFR is overestimated in the ranges between 2-4 and 12-14 Gyr compared to the expected values. This is likely due to the template age spacing,regularization algorithm, and spectral similarities in old population templates.Furthermore, we find systematic offsets in the recovered kinematics, potentially due to insufficient spectral resolution and the variation of line-of-sight velocity withand age through a line-of-sight.With future higher resolution and multi- SSP templates,will be useful to identify key signatures such as - distribution at different R and |z| and potentially measure radial migration and kinematic heating efficiency to study detailed chemodynamical evolution of MW-like galaxies. Galaxy: stellar content - galaxies: stellar content - galaxies: kinematics and dynamics - galaxies: star formation - methods: numerical - techniques: spectroscopic § INTRODUCTION How galaxies formed and evolved remains one of the outstanding questions facing astrophysics today.Due to our unique vantage point, the Milky Way (MW) is by far the best-studied galaxy in the Universe. Precise astrometric data from Gaia <cit.> and accurate chemical abundances of individual stars from large spectroscopic surveys such as LAMOST <cit.>, GALAH <cit.> and APOGEE <cit.> have been conducted in the last decade for Galactic archaeologists to reveal the detailed chemodynamical picture of the Milky Way <cit.>. This includes the accretion history and interplay of chemical and dynamical processes (e.g., ). Particularly, <cit.> demonstrated two distinct sequences in - distributions with the fractions vary systematically with location R and |z| across the MW (see also ), and the skewness of the outer-disk metallicity distribution function (MDF) indicates the important process of radial migration.The twosequences (also called -bimodality) are associated with the thick and thin disks of the MW and are key diagnostics for understanding the MW's chemodynamical evolution history. 
In the past decades, many efforts have been made to develop Galactic chemical evolution (GCE) models to reproduce thebimodality and uncover its origin. The two-infall model (originally fromand modified later by ) assumes two distinct infall episodes with a delay of 4∼6 Gyr.The first episode happens at the beginning of the model and forms metal-poor and -enhanced stars. When the metallicity is enriched to ∼0.4 dex, the second infall brings in metal-poor gas and resets gas's chemical composition.Then the chemical enrichment repeats and -poor stars spread along a wide range of metallicities. On the contrary, another theory by <cit.> considers radial migration or radial mixing. They showed that the extendeddistribution in low- regime is due to radial migration where stars change their angular momentum or guiding radius with time, and one smooth episode of gas infall is sufficient to produce -bimodality. The effect of radial migration is also seen in simulations (e.g., ). <cit.> applied Shu distribution function <cit.> and extended the model in <cit.> by adding the distribution ofand a new prescription for velocity dispersion relation from <cit.>, and successfully reproduced the - bimodality distributions at different R and |z|.They are quantitatively consistent with APOGEE observations in <cit.>.The relation of chemistry and kinematics is also consistent at all Galactic locations.In this study, we mainly focus on the model from <cit.> and refer to the above-mentioned publications for details. Even though specific chemical evolution models can reproduce the chemodynamical distributions of our MW successfully, it is still under debate what is the origin of -bimodality.Because the MW is only one galaxy, whether these theories apply to other disk galaxies' formation history or whether the MW is unique in the Universe is still an open question. For example, <cit.> foundbimodality is rare in the EAGLE simulation, while <cit.> found it to be present due to gas-rich mergers in all simulated MW-like galaxies from the NIHAO-UHD project. Therefore, it is essential to combine both the MW with other galaxies observations to answer these questions.According to photometric and single-fiber spectroscopic observations on external galaxies, the MW from an extra-galactic view is an Sb-Sbc galaxy positioned at the “green valley" on the color-mass diagram <cit.> and obeys the Tully-Fisher relation <cit.>. It has a low star formation rate (SFR) which is not unusual when compared to similar galaxies <cit.>.The MW appears to be a normal spiral galaxy based on its kinematics and intrinsic luminosity <cit.>, but unusually compact for its stellar mass (e.g. ). Its satellite galaxies are fewer in number and more compactly distributed (e.g. ) compared to other galaxies with similar total stellar mass. The geometric thin and thick disk structures obtained by vertically fitting the surface brightness profiles are found to be common in most disk galaxies (e.g., ), but whether they are related to differentsequences were not investigated due to the lack of spatially resolved chemistry information.Over the last decade, integral-field spectroscopy (IFS) instruments have enabled us to obtain integrated light spectra of galaxies across different regions of the same object.Several IFS surveys have already been carried out such as SAMI <cit.>, CALIFA <cit.>, and MaNGA <cit.> to measure kinematics and chemical compositions of thousands of galaxies. 
Several studies compared face-on galaxies in these surveys with the MW: <cit.> selected 62 Milky Way analogs (MWAs) from MaNGA with the criteria of stellar masses and bulge-to-total ratios. They found most of these galaxies have flatter stellar and gas-phase metallicity gradients due to different disc scale lengths. A greater consistency can be found when scaling gradients by these lengths; <cit.> revisited 138 MaNGA galaxies by fitting the spectra with a semi-analytic chemical evolution model <cit.> and measured their star formation and chemical enrichment histories. They detected similarbimodality as the Galactic APOGEE observations <cit.> in many of the galaxies. Even though face-on MWAs are ideal for measuring age and metallicity variations crossing different Galactocentric radii R_gc, they lack vertical (|z|) information, which is essential to geometrically distinguish the thin and thick disks and study their differences in chemical abundances. Therefore, edge-on MWAs are more useful in this case to provide elemental abundance distributions at different R and |z|.Several studies of nearby edge-on MWAs and lenticular galaxies using MUSE found similar kinematics and bimodal distributions to the MW (e.g., ).In particular, <cit.> demonstrated that UGC 10738 has similar - distributions at different projected R and |z| with the MW observation in <cit.> and model predictions in <cit.>, which supports the commonness of MW's chemical distributions. Despite the efforts above, a direct comparison of MW with its analogs in kinematics and chemistry is still challenging because the observables and methods used for studying the MW are different from those for external galaxies, i.e., utilizing properties of individual stars as opposed to integrated quantities with projection effects.Therefore, the comparisons may be impacted by systematics or biases <cit.>. In addition, some key processes such as radial migration and kinematic heating have not been extensively explored like the MW on external galaxies, which are also essential to identify whether the MW's formation and evolution is distinct from the general population. Therefore, to take the MW as an ideal laboratory and test galaxy evolution theories, one needs to remove these observational and methodological biases, transfer the observables of MW and external systems into the same definition, and apply models that consider both chemical enrichment and kinematic processes. These considerations lead to the development of the tools presented in this work. In this study, we present a novel codeto generate mock integral-field spectrograph (IFS) data cubes of the MW with integrated spectra using simple stellar population (SSP) models and mock stellar catalog from(Sharma et al. 2024, in perp), which is based on the chemodynamical model of <cit.> (hereafter S21) that is most consistent to the current Galactic observations in both kinematics and chemistry. This mock data cube is in the same format as extragalactic IFS observations. It can be applied to extragalactic data analysis methods (e.g., the GIST pipelinewith) to measure directly comparable parameters in (age, , , , , h_3, h_4).In addition,can also be used to more comprehensively re-investigate the reliability of the above software in measuring kinematic and stellar population properties than before, because the inputs for generating mock data cubes are known. 
Therefore, we also address this topic in this work.We describe the ingredients used inand detailed procedures of this code in Section <ref>.In Section <ref>, we test the ability of spectral fitting tools to recover kinematics, stellar population parameters, and mass fraction distributions by applying it to a mock edge-on data cube and compare to the true values fromcatalog.In Section <ref>, we discuss the causes of deviations between measured and true (input) values and address potential reasons and future improvements on current data reduction pipelines for obtaining better results.We explore the effect of different fitting strategies and provide some references when using spectral fitting tools. We also give some caveats about usingcode. In Section <ref>, we talk about future plans on the improvements ofalong with spectral fitting codes and prospect some scientific topics that can be done by using . A summary is presented in Section <ref>. § DATA CUBE GENERATIONThe purpose ofis to take the mock stellar catalog of the Galaxy obtained from(Sharma et al. 2024, in prep.), and transform it into a mock data cube in three dimensions (two in spatial and one in spectral) as observed by integral-field spectrographs (IFS) such as MUSE <cit.>, SAMI <cit.> or MaNGA <cit.>.Thecode takes as input the following set of user-specified input parameters: galaxy distance (d), inclination (i), extinction, SSP templates, instrumental properties, as well as a few additional parameters (see the full list in Table <ref>). The produced mock data cube can be processed in the same way as real IFS observations data by many methods, particularly codes like Voronoi binning <cit.>, Penalized Pixel-Fitting (, ), line-strength indices (e.g., ), or a combination of them (e.g., the GIST pipeline, ).The results can be compared directly with those from IFS observations of MWAs in terms of mass- or light-weighted parameter maps, line-of-sight velocity distribution (LOSVD), and mass fraction distributions. The ingredients, flexibility, procedures, and computational expenses of this code are described in detail in the following sub-sections. §.§ The Ingredients§.§.§ Galactic Chemical Evolution Model We apply the analytical chemodynamical model of the Milky Way developed by S21 which can predict the joint distribution of position (𝐱), velocity (𝐯), age (τ), extinction , the photometric magnitude in several bands and chemical abundances (,) of stars in the Milky Way. Compared with previous models (e.g., ), this model included a new prescription for the evolution ofwith age andand a new set of relations describing the velocity dispersion of stars <cit.>. This model shows for the first time that it can reproduce the - distribution of APOGEE observed stars <cit.> at different radius R and height |z| across the Galaxy. In this model, the origin of twosequences is due to two key processes: the sharp transition from high- to low- at around 10.5 Gyr ago that creates a valley between the two sequences, which is likely due to the delay between the first star formation and sequential occurrence of SN Ia; the radial migration or so-called churning is responsible for the large spread of the low- sequence along theaxis. This model also showed that without churning it is not sufficient to reproduce the double sequences (see their Fig. 6). Because this chemical evolution model is purely analytical, it is easy to be inserted into the forward-modeling toolto generate mock stellar catalogs for further analysis. 
In addition, several free parameters such as radial migration and heating efficiency are adjustable, which will be useful to implement similar analysis on external galaxies.§.§.§To mock the Milky Way IFS data cube, we need a comprehensive stellar catalog from observations on the Galaxy with well-measured parameters such as position (𝐱), velocity (𝐯), age (τ) and chemical abundances.However, this is impractical as the Galaxy has hundreds of billions of stars being unexplored and the dust in the disk obscures distant light. An alternative way is to use particle catalogs from N-body/hydrodynamical simulations (e.g., EAGLE , FIRE-2 ), but most of the simulations only contain ∼10^6 stellar particles, which is not enough to generate integrated spectra because each spatial bin (spaxel) would only contain less than a hundred particles. This in turn would increase the sampling noise of the integrated spectrum and make observables derived from spectra noise.Even though oversampling can solve this problem, it is still challenging to find a simulation that is observably identical to the MW in all aspects quantitatively, especially thebimodal trends.Therefore, the approach we take here is to generate a mock stellar catalog to represent the MW. We use(Sharma et al. 2024, in prep.). This tool is in accordance with the chemical evolution model of S21 and can create a catalog with the user-defined number of particles (N_p) with parameters including position (𝐱, 𝐲, 𝐳), velocity (𝐯_𝐱, 𝐯_𝐲, 𝐯_𝐳), age (τ), metallicity () and , where the parameter distributions are chemodynamically consistent to the MW observations.Other codes like TRILEGAL <cit.>, BESANCON <cit.>, and<cit.> can also create mock catalogs.However, compared to , the underlying models of these codes lack the information ofand , and do not have the processes of radial migration and kinematic heating. Furthermore, the ability ofto control the observed properties by regulating the underlying physical process is important for future analysis of external galaxies, whose formation history and dynamical processes are expected to be different from the Milky Way.One caveat is that star particles in the current version ofare distributed only in the disk and there is no bulge, halo, or nuclear disk structure.This is because the chemical and kinematic distribution functions predicted by S21 are extrapolated from observations in the solar neighborhood. Nevertheless,parameter distributions are still consistent with APOGEE observations in the range of 3<R_gc<15 kpc.§.§.§ Spectral Libraries To build a mock data cube, one important procedure is to turn particles ininto stellar spectra based on their properties, so a spectra library is needed. Because the integrated light in extra-galactic observations is assumed from stellar populations, in , we treat each particle as a single-stellar population (SSP). The SSP spectrum describes the spectral energy distribution (SED) of a stellar population with a single age, metallicity, and chemical abundance patterns. An initial mass function (IMF) is assumed, and the stellar population is evolved using a given isochrone <cit.>. currently supports MILES <cit.> and PEGASE-HR <cit.>, both of which will be used in this study.The MILES SSP library (FWHM=2.51, 3500<λ<7500) is based on the model of <cit.>. It uses Padova 2000 <cit.> or BaSTI <cit.> isochrones and IMF in Unimodal/Bimodal <cit.>, Kroupa Universal/Revised <cit.> and Chabrier <cit.> shapes. 
For Padova 2000 isochrones, the template grids cover 7 metallicity bins between (-2.32, 0.22) dex, 50 age bins between (0.063, 17.78) Gyr and one scaled-solarbin <cit.>.As for BaSTI isochrones, the template grids cover 12 metallicity bins between (-2.27, 0.40) dex, 53 age bins between (0.03, 14.00) Gyr and twobins in 0.0 and 0.4 dex <cit.>. Sinceenhancement is essential in this study, for most of the cases we will use the α-variable templates. The PEGASE-HR library (FWHM=0.55, 3900<λ<6800) is based on the Elodie 3.1 stellar library <cit.>.The SSP models are computed using Padova 1994 <cit.> isochrones with a Salpeter <cit.>, Kroupa, or top-heavy <cit.> IMF. The templates grid consists of 7 metallicity bins between (-2.30, 0.70) dex, 68 age bins between (0.001, 20) Gyr, and one scaled-solarbin.In this work, the PEGASE-HR templates are mainly used to explore the effect of spectral resolution onfitting. Therefore, it is still useful even if it lacks variable . Revised grids interpolated by<cit.> in the same way as <cit.> for PEGASE-HR are also available. The new grids contain 15 steps in metallicity between -2.3 and 0.7 dex and 50 steps in age between 10 Myr and 14 Gyr.§.§ Configurations ofTo meet different research purposes, we incorporated a wide range of adjustable parameters in . Specifically, the adjustable parameters are divided into three groups: ,and , as listed with descriptions in Table <ref>.The user can set up their preferred parameters to obtain the expected data cubes. is for setting the distance, inclination, and position of the mock MW using coordinates transformation; governs the spectral properties, such as the SSP model selection, single or variable , spectral resolution, age,and wavelength range and the interpolation method to be used to assign each particle a spectrum (details in Section <ref>).Based on particles' parameters, there are two options to assign them spectra: one is “nearest", which will assign the spectrum of the nearest template grid. The other is “linear", which will assign the spectrum by piece-wise linear interpolation of templates.Thecontrols the instrument spatial sampling, atmospheric effects (PSF), and the number of spatial bins in each coordinate.We also provide an option to derive the data cube in the format of a specified instrument.Alternatively, the user can also design a hypothetical instrument that does not exist by manually setting up these parameters, which will be useful for future instrument designs.§.§ Procedures of Making a Mock Data-CubeThe procedures of generating mock MW data cubes are described in detail as the following steps, along with a flowchart in Fig. <ref>:* Useto generate a mock stellar catalog of the MW, with particles' position (𝐱), velocity (𝐯), age (τ), extinction , metallicity () and , transferto . * Load the setup parameters of Table <ref> provided by the user, as well as a list of data cubes to be generated with their center coordinates in (l, b) or (ra, dec). * Apply coordinates transformation based onparameters to move this Galaxy to a certain distance and rotate it into a defined inclination. * Apply the spatial binning based on IFS instrument properties given by , and find the particles included in each bin, then note the bin index for later use. * Load the SSP templates and make the cutoff based on the (age, ) ranges in , then oversample them by a factor ofusinginterpolation with the order of three. Next, construct the interpolator which will be used to assign the spectrum in the next step. 
* Assign each particle an SSP spectrum based on its age, , andby interpolating the templates with the method defined by. Multiply the spectrum by the particle's stellar mass because SSP spectral templates are normalized to 1 . Shift the spectrum due to its line-of-sight velocity () using the Doppler equation. Then apply a flux calibration due to the particle's distance and extinction.* Loop over spatial bins to stack all the spectra of particles included and obtain the integrated spectrum for each spatial bin. In the meantime, generate the mass/light-fraction distribution of each spatial bin. The light weights are obtained within the wavelength range given by . The spatial bin numbers are given by . * After generating integrated spectra for all the spatial bins, apply the atmosphere effects by convolving each wavelength slice with a point spread function (PSF), this can be either a Gaussian or Moffat kernel function with the given . * Degraded the stacked spectrum to the instrument resolution given byusing convolution with a Gaussian line-spread-function. * Re-bin the oversampled flux array into the original wave grid or the user-defined wavelength interval and wavelength range, according toand . * Create the fits file header of flux arrays, then combine all the results as data cubes in the format of FITS file. Other than the above procedures, there are a few points that need to be clarified to the users as follows: * The original mock stellar catalog in step (i) should have two coordinate systems, Cartesian coordinates (x, y, z) and Galactic coordinates (l, b), where (l, b) are overlapped with (y, z), respectively. Therefore, by adjusting (θ_zx, θ_yz), users can change the inclination of the Galaxy at any angle. This is convenient when compared with real observations.* The oversampling in step (v) has two purposes: one is to ensure the validity of degrading - when degrading the SSP templates from the original spectral resolution to instrumental resolution (e.g., from MILES to MUSE), the σ of the Gaussian kernel to be convolved with the templates will be smaller than wavelength interval Δλ without oversampling. In this case, the kernel function array becomes invalid with only one element having a value of 1, and the rest having values of 0. Then the degraded spectrum will be still in its original resolution. Another reason is to minimize the sampling error when stacking the spectrum with different line-of-sight velocities. Fig. <ref> shows an example mock integrated spectrum generated by using the non-oversampled (in red and light-red) and oversampled (in blue and light-blue) MILES SSP templates. The light-color lines are the spectra convolved with a Gaussian kernel by the given mean velocity and dispersion (μ, σ), which is the manner ofduring the kinematics measurements. The dark-color lines are stacked from 2000 spectra shifted by Gaussian distributed line-of-sight velocities using the same (μ, σ), which is the manner of . By calculating the residuals of these two lines (shown in the residual panel), we find that oversampling can reduce the deviation between the convolved and stacked spectrum significantly by a factor of ∼25.* This package can select a portion of particles within the field of view (FoV) of the user-defined instrument to generate the data cube, rather than take a whole catalog into account. It helps to mimic a more realistic IFS observation and reduces computational expenses. 
*can be executed in the batch mode where users can provide a list of cubes with the central coordinates in (l, b) or (ra, dec), and all the cubes can be automatically generated in one execution;* Other than the data cube FITS file, this package also generates some by-products (optional), such as mass/light-fraction distributions and number of particles array for each spatial bin, mass/light-weighted , , age and kinematic distribution maps in the Galactic coordinates, etc. These by-products are obtained fromparticles' properties and can be used to calculate the expected true answers thatshould recover from spectrum fitting. This will be described in detail in Section <ref>. §.§ Estimate the Sampling ErrorCompared to the real catalogs of the Milky Way (e.g. Gaia), one shortcoming of themock stellar catalog is the limited number of particles. Although there are 10∼100 times more particles (10^8) in the mock stellar catalog compared to MWAs in N-body/hydrodynamical simulations, it is inevitable that some spatial bins contain too few particles for robust measurements of their properties.Even for spaxels or Voronoi bins containing more than ∼10^4 particles,the spectral noise due to finite star particles is still non-negligible.We call this type of spectral noise “sampling error" (e_f, S).One way to reduce sampling error is to generate more particles from . However, this will increase the computational expenses and memory usage ofsignificantly, and exceed most of the HPCs' limitation (∼22 GB for a catalog containing 10^8 particles, hence ∼220 GB for 10^9 particles). Therefore, when mocking IFS observations, one has to ensure that for each integrated spectrum, the sampling error is smaller than the observational flux error (e_f, M assuming MUSE observations) due to instrument conditions and exposure time.In this way, it is safe to apply the data reduction pipeline on this mock data cube and allowto derive mathematically reasonable results. This is particularly important in kinematics because the LOSVD effect inintegrated spectra is implemented by stacking individual spectra of particles with each shifted by its own . Butemploys the Gauss-Hermite function to convolve with SSP templates and determines the kinematics moments. In Section <ref>, we will provide a detailed example of how to deal with the sampling error for each Voronoi bin. To estimate the sampling error, thecode provides an option to apply bootstrapping. First, it randomly re-selects particles from the originalcatalog and generates a bootstrapped catalog with the same particle number (N_p). Then,uses this catalog to generate bootstrapped data cubes.Repeating the above procedures a certain number of times, then the sampling error for each spaxel can be represented by the standard deviation of these bootstrapped fluxes.This sampling flux error will be used as the lower limit of the acceptable Gaussian noise when mimicking the real observations and obtaining the final integrated spectra of Voronoi bins. Fig. <ref> illustrates how the sampling SNR (flux divided by sampling noise) varies as a function of the number of particles in a spatial bin. The shaded region is the standard deviation of sampling SNRs for spaxels having the same number of particles.We obtain this figure by using the original and bootstrapped mock data cubes later in Section <ref>. 
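The per-spaxel bootstrap just described amounts to resampling the contributing particles with replacement and measuring the scatter of the stacked flux. A schematic sketch for a single spaxel is given below; the array name particle_flux is illustrative only, since the actual code resamples the full catalog and builds entire bootstrapped cubes:

import numpy as np

def sampling_error(particle_flux, n_boot=20, seed=0):
    """Bootstrap estimate of the sampling noise e_f,S of one spaxel.

    particle_flux : (N_p, N_lambda) array holding the spectrum of each
                    particle contributing to the spaxel (already
                    Doppler-shifted, flux-calibrated and mass-scaled).
    """
    rng = np.random.default_rng(seed)
    n_p = len(particle_flux)
    realisations = []
    for _ in range(n_boot):
        idx = rng.integers(0, n_p, size=n_p)       # resample with replacement
        realisations.append(particle_flux[idx].sum(axis=0))
    # half of the 16th-84th percentile range gives an equivalent estimate
    return np.std(realisations, axis=0)

The dependence of this estimate on the number of particles per spaxel is what Fig. <ref> summarizes.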
It can be seen that a spatial bin with ∼10^3 particles can generate a spectrum with sampling SNR∼25 , and a spatial bin with ∼10^4 particles can generate a spectrum with sampling SNR∼80 .§.§ Computational Expenses and Multiprocessing StrategyThe execution time ofto generate mock data cubes depends mostly on the number of particles included in the instrument FoV and the spectral interpolation method. In general, the execution time is proportional to the number of particles used, and the “nearest" interpolation is three times faster than the “linear" interpolation. In this code, we applytechniques to speed up the computing.For a typical MUSE FoV containing 6×10^6 particles, the execution time spent with a 24-core CPU (2.50GHz) is ∼ 1.4 hour. Based on this, the users can roughly estimate the execution time they will spend. If taking into account the bootstrapped cubes, the total execution time will be multiplied by the number of bootstrapping times plus 1. Therefore, it is highly recommended to run it on high-performance computers (HPC) or a Cluster where these 21 jobs can be executed simultaneously.§ RECOVERY OF THE GALAXY CHEMODYNAMICAL PROPERTIESIn this section, we test the ability ofto recover galaxy properties by applying it to mock cubes generated using . We measure kinematics (, , h_3, h_4), stellar population parameters (age, , ) and mass fraction distributions of different structural components.The analysis is performed in the same way as extra-galactic studies. Then we compare the results with the input true values that are obtained by stellar parameters incatalog.This test allows us to access the consistency of parameters measured via broadly applied software in other studies (e.g. ), which was not possible previously as the true values of external galaxies are unknown.Furthermore, it also provides standard references for the future to better understand extra-galactic results (e.g., gradient, flaring) by distinguishing real distributions from artificial effects due to the measuring method (e.g., ), projected view, and integrated light. §.§ Mock Cube Generation for MUSE Instrument We generate a mock MUSE observation by , using thecatalog that contains 10^8 particles. We removed particles with stellar age less than 0.25 Gyr because their position and kinematics are erroneous in the current version of , and we confirmed that removing these particles does not affect our conclusions. The mock MW catalog is assumed to have a distance of 26.5 Mpc and inclination of 86^∘ to the observer.The selected distance and inclination are the same as the projection of NGC 5746, which was observed by MUSE and has comprehensive analysis in(hereafter M21).We use MILES α-variable SSP templates <cit.> with the BaSTI stellar isochrones <cit.> and Kroupa Universal IMF <cit.>. The templates have 53 bins in age, 12 bins inand 2 bins inand we apply a “linear" interpolation to assign each particle a spectrum based on its age,and , and then degrade the stacked spectra to MUSE spectral resolution (FWHM∼2.65).Following procedures in Section <ref>, we obtain two mock cubes focusing on the central (N_p=61575676) and the disk (N_p=7379847) regions, as shown in Fig. <ref>.This observation strategy is also the same as NGC 5746 in M21. The total execution time spent byon a 24-core CPU for these two cubes is ∼14.5 hours.We also generate 2×20 bootstrapped cubes and use 16% and 84% percentiles to calculate the sampling error of each spaxel. 
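For orientation, the physical scales implied by the adopted distance of 26.5 Mpc are straightforward to evaluate (small-angle approximation; the 0.2 arcsec spatial sampling and 1 arcmin field of view are the nominal MUSE wide-field-mode values, not quantities taken from the text):

import numpy as np

d_pc   = 26.5e6                      # adopted distance in pc
arcsec = np.pi / (180.0 * 3600.0)    # one arcsecond in radians

pc_per_arcsec = d_pc * arcsec
print(pc_per_arcsec)                  # ~128 pc per arcsec
print(0.2 * pc_per_arcsec)            # ~26 pc per 0.2" MUSE spaxel
print(60.0 * pc_per_arcsec / 1e3)     # ~7.7 kpc across the 1' MUSE field

The two pointings together therefore cover roughly 15 kpc along the projected major axis, comparable to the radial range over which the underlying model is calibrated.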
We do not apply extinction in these cubes because here we only focus on therecovery ability. Adding extinction would blend all the effects and make it difficult to differentiate their individual impacts. Therefore, we reserve this topic for future studies.The next procedure is to add Gaussian flux error to the spectra. We first derive the MUSE flux error (e_f, M) of the mock cubes. The flux error depends on many aspects but can be classified into two main categories: the observation conditions (seeing, air-mass, exposure time, etc.) and the instrumental properties (telescope aperture, system efficiency, dark current, read-out noise, etc).For simplicity, we ignore the sky conditions, dark current, and read-out noise which only contribute a few percent to total received photons, and assume the MUSE SNR and received photons are defined bySNR=f/e_f, M=√(N),N=aft,where f is the flux of the target; e_f, M is the MUSE flux error; N is the received number of photons; t is the exposure time; a is an overall reaction of sky transmission, efficiency and telescope aperture. Therefore, a should be only dependent on wavelength for the same instrument.By substituting the above equations, the parameter a can be calculated using f, e_f, M and t from an observation by a=f/e^2_f, M× t.In this work, we take all the bulge and disk observations of NGC 5746 from M21 and fit equation <ref> as a function of wavelength (λ) using a 4-degree polynomial, which is described bya(λ)= 4.34274826^-18λ^4 - 1.43263443^-13λ^3 + 1.61141240^-9λ^2 -7.29164505^-6λ + 1.17213217^-2,where a(λ) is in the unit of 1/10^-20 erg cm^-2 ^-1.Next, we set the bulge and disk mock cubes to have an exposure time of 1729.39 s and 6221.84 s, respectively, and use the equation <ref> and <ref> to estimate the flux error e_f, M of each spaxel. Then we use this error to add Gaussian noise to all the spaxels.Finally, the two mock cubes are stitched together. §.§ Extracting Galaxy Properties We apply the GIST pipeline[<https://gitlab.com/abittner/gist-development>] <cit.> on the stitched mock cube to measure the kinematics and stellar population parameters.The GIST pipeline combines all the tools needed to process the data and the user can obtain final results in a single execution. Here we use a modified version to implement some functionalities that the current public version (v3.1.0) does not have but are needed in this work. A detailed list of added features is given in Appendix <ref>.We run the GIST pipeline in the following steps: First, we apply the Voronoi tessellation software <cit.> to spatially re-bin the mock cube and increase the SNR to 80 , which results in 1477 Voronoi bins. Most of the Voronoi bins contain N_p>10^4.For the other Voronoi bins, N_p is also very close to this number. To ensure the sampling noise e_f, S is less than the MUSE flux error e_f, M, we apply the same Voronoi binning arrangement to all 20 bootstrapped cubes. Then we calculate the sampling noise e_f, S of each Voronoi by taking half of the difference between the 16th and 84th percentiles of its 20 bootstrapped integrated spectra. 
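Combining the two relations above gives e_f, M=√(f/(a(λ)t)). A minimal sketch of this noise prescription, using the fitted polynomial coefficients quoted above (fluxes in the same 10^-20 erg s^-1 cm^-2 Å^-1 units in which a(λ) was derived):

import numpy as np

# a(lambda) fitted to the NGC 5746 MUSE observations; coefficients as quoted
# in the text, highest degree first
coeff = [4.34274826e-18, -1.43263443e-13, 1.61141240e-9,
         -7.29164505e-6, 1.17213217e-2]

def muse_flux_error(flux, wave, t_exp):
    """Expected MUSE flux error for a spectrum flux at wavelengths wave
    (Angstrom) observed for t_exp seconds."""
    a = np.polyval(coeff, wave)
    return np.sqrt(flux / (a * t_exp))

def add_noise(flux, wave, t_exp, seed=0):
    """Add Gaussian noise with the above error to a mock spectrum."""
    err = muse_flux_error(flux, wave, t_exp)
    return flux + np.random.default_rng(seed).normal(0.0, err), err

For the central and disk pointings the exposure times 1729.39 s and 6221.84 s quoted above would be used, and the noisy spectra and their errors then enter the Voronoi binning in the same way as real MUSE data.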
We confirm that on average e_f, M>e_f, S applies for all the Voronoi bins.Next, we apply the full-spectrum fitting method<cit.> to measure the stellar kinematics of each Voronoi bin.Here we use the same MILES α-variable templates <cit.> as when we generated the mock cubes and degraded them to the same MUSE spectral resolution.Since there is no emission line or atmosphere effect, we use a wide wavelength range of (4750, 6550) to fit with the Voronoi binned spectra and remove the first and last 75 to avoid effects caused by spectral oversampling, rebinning and Doppler shifting, etc. During the fitting, the MILES templates are convolved with a line-of-sight velocity distribution (LOSVD) described by a Gauss-Hermite equation to match the Voronoi binned spectrum. We parameterize the LOSVD using four moments, which are mean line-of-sight velocity , line-of-sight velocity dispersion , and the third- and fourth-order moments h_3, h_4.No regularization is applied in this process, and we include a fourth-order multiplicative Legendre polynomial. After measuring kinematics, we employagain to obtain the stellar population parameters for each Voronoi bin.We choose the same templates, spectral resolution, and fitting wavelength range as the previous step. Each template is normalized to one solar mass ().We use the LOSVDs (, , h_3, h_4) measured in the last step as input, and fix them during the fitting to obtain the weight of each template. The best-fit spectrum is the weighted sum of all the templates.For the initial fitting, we set no regularization to obtain the initial χ^2. Then we follow the approach in <cit.> and iterate the fitting by increasing R until χ^2 is increased by Δχ^2=√(2N), where N is the number of wavelength pixels considered for the fit. This iteration process allows us to obtain the smoothest solution that is still compatible with the data within 1σ level.At this stage, we note R reaches the maximum regularization R_max.Next, we choose R=5 which is between 0 and R_max (30∼100) to keep smooth solutions while still allowing for a variation on short timescales of the star formation, which will disappear if R is too large (see similar discussions inand M21). Fig. <ref> is an example of the fitting results for the spectrum of one Voronoi bin. Finally, using the weights, we can calculate mass-weighted age, , , and mass fraction distributions.In addition, we also apply the LS module in the GIST pipeline to compute line-strength indices of each Voronoi bin spectrum in the LIS system <cit.>choosing the definitions provided at a spectral resolution of 8.4 .This routine was presented by <cit.> and <cit.>.Next, given the relationship between templates' properties (age, , ) and line strengths, the measured line strengths can be matched to SSP-equivalent parameters by using the MCMC implementation from the package emcee <cit.>.In this work, we follow M21 and useas an age indicator and Fe5015, Fe5270 and Fe5335, and Mgb to trace metallicity and .The Monte-Carlo simulation is run 15 times and each one uses 100 walkers and 1000 iterations to obtain uncertainties.After measuring all the parameters, in the next sub-sections, we compare the kinematics, mass-weighted stellar population parameters, and mass fraction distributions of the mock cube with the true values to verify the recovery ability of . §.§ Kinematic Maps Fig. <ref> shows the kinematics maps in four moments (, , h_3, h_4) of the mock MUSE cubes. 
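For reference, these four moments parameterize the LOSVD through the standard Gauss-Hermite series of van der Marel & Franx, and the "true" values described below are obtained by fitting that profile to the flux-weighted line-of-sight velocity histogram of the particles in each Voronoi bin. A minimal sketch of such a fit, where v_los and flux stand for illustrative per-particle arrays of one bin:

import numpy as np
from scipy.optimize import curve_fit

def gauss_hermite(v, amp, v0, sigma, h3, h4):
    y = (v - v0) / sigma
    H3 = (2.0 * np.sqrt(2.0) * y**3 - 3.0 * np.sqrt(2.0) * y) / np.sqrt(6.0)
    H4 = (4.0 * y**4 - 12.0 * y**2 + 3.0) / np.sqrt(24.0)
    return amp * np.exp(-0.5 * y**2) * (1.0 + h3 * H3 + h4 * H4)

def true_moments(v_los, flux, n_bins=60):
    """Fit (V, sigma, h3, h4) to the flux-weighted LOS velocity histogram."""
    hist, edges = np.histogram(v_los, bins=n_bins, weights=flux)
    centers = 0.5 * (edges[1:] + edges[:-1])
    vbar = np.average(v_los, weights=flux)
    sig0 = np.sqrt(np.average((v_los - vbar) ** 2, weights=flux))
    popt, _ = curve_fit(gauss_hermite, centers, hist,
                        p0=[hist.max(), vbar, sig0, 0.0, 0.0])
    return popt[1:]          # V, sigma, h3, h4

Both the truth defined this way and the fitted maps are light-weighted quantities, so the residual panels compare like with like.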
The scale of the color bar is given in the second row of the upper left corner of each panel.We calculate the true values in the first with the following procedures: First, for each Voronoi bin, we calculate the total flux of each particle in thefitted wavelength region. Then we plot flux-weightedhistogram distribution using all the particles included in this bin. Next, we fit this histogram with a Gauss-Hermite function and obtain four best-fit moments (, , h_3, h_4). This method is consistent with the definition of light-weighted kinematics whichis expected to recover during the fitting process. The middle row is the results fromand the bottom rows are residuals of theresults and the true values. In this figure, the kinematic moments obtained byhave the same trend compared to the true values.Both show two kinematically distinct components: one is aligned to y∼0 and the thickness increases with x, which has larger absoluteand h_3, and smaller ;the other is in a similar projected radius but vertically higher and thicker, and it has smaller absoluteand h_3, but larger . An anti-correlation of h_3 withwhich are usually associated with disk-like components (e.g. ) is seen and are similar to MUSE edge-on galaxies studies of <cit.> and M21. In Section <ref>, we will define these two components as thin and thick disks. However, in the residual panels, all these four moments show systematic offsets.Compared to the true values,fromis around 17  lower above and below the very thin mid-plane (y∼ 0) and shows an overestimation around x∼ [10, 30] arcsec. Around the galaxy center, the residual ofalso shows a continuous decrease from negative to positive x; is generally overestimated everywhere in the galaxy with few light blue residuals. h_3 is overestimated in regions of x∼[20, 60] arcsec and y∼[10, 25] arcsec and underestimated in the outer region of x∼[60, 110] arcsec; h_4 fromhas no significant structures likeand h_3 maps, which is also seen in real galaxies results (e.g.,and M21), but the true h_4 map clearly shows kinematic differences. The clear structures in these residual panels indicate that it is not because of the fitting uncertainties.We will discuss this issue in detail and provide our investigations in Section <ref>.§.§ Mass-weighted Stellar Population Parameter Maps Fig. <ref> shows the mass-weighted age,andmaps of the mock MUSE cubes. The first row is the true values by calculating the particles' median age,andfor each Voronoi bin, which are equivalent to mass-weighted values since each particle is weighted as 1 . The second row is the results fromwith regularization R=5. The third row is the results fromwith R=R_max. The last row shows the residuals of theresults with R=5 and the true values.The overall distributions of these three parameters obtained byare very close to the true values. This confirms the reliability of usingto measure the weighted chemical compositions.Especially, themap fromwith R=5 indicates the capability ofto identify -rich and -poor populations in the thick and thin disk, respectively, even though only twobins are available. The residuals offromwith R=5 and the true values are flat and no systematic pattern is found. However, minor offsets appear in age andresidual panels:age fromis mostly overestimated in the outer regions whileis overestimated in the inner center regions. This means the age gradient fromis underestimated but thegradient is overestimated. 
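As an illustration of how these maps are built, the following sketch (with hypothetical array names and shapes) contrasts the mass-weighted parameters derived from a weights grid with the particle-based "true" values used for comparison; it assumes a two-dimensional age-metallicity template grid for one Voronoi bin.

```python
import numpy as np

def weighted_population_params(weights, age_grid, mh_grid):
    """Mass-weighted mean age and metallicity from a (n_age, n_mh) weights grid."""
    w = weights / weights.sum()
    mean_age = np.sum(w * age_grid[:, None])
    mean_mh = np.sum(w * mh_grid[None, :])
    return mean_age, mean_mh

def true_population_params(particle_ages, particle_mhs):
    """Per-bin 'true' values: particle medians (each particle carries one solar mass)."""
    return np.median(particle_ages), np.median(particle_mhs)
```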
In addition, thedistribution fromresults with R=R_max is almost uniformly high and much larger than the true values for all the Voronoi bins.This is because when R is very large, thealgorithm forces the result to have very smooth template weights in three parameter dimensions (age, , ).Since there are only twogrids, regularization will force them to have similar weights to achieve smoothness and does not permit large deviations (e.g., more than 0.1 dex). Therefore, it will be challenging to identifybimodality. Results fromwith R=R_max also show much underestimation for age gradients than results with R=5. The age andgradients are essential properties to help understand the star formation and chemical enrichment processes. Therefore, a wrong choice of regularization will then easily lead to wrong conclusions. We will explore these offsets in more detail in Section <ref> using mass fraction distributions and the effect of regularization in Section <ref>.§.§ SSP-equivalent Maps from Line-Strength IndicesFig. <ref> shows the SSP-equivalent age,andmaps of the mock MUSE cubes measured by line-strength indices.This figure shows that the main structures we derived fromare also recovered by the line-strength analysis with consistent trends.In the age panel, young populations are closer to the mid-plane, and old populations are further to the mid-plane or above/below the central region. In thepanel, we see the metallicity gradient from the inner center to the outer Galaxy. In thepanel, we see the α-rich bins in the center and α-poor bins in the outer region. The main difference compared withresults is that the age panel shows a very low range of [1∼5] Gyr.This is also seen in M21 (Fig. B1) and because of the Balmer line indices being dominated by young stars. Therefore, the SSP-equivalent ages only reflect the fraction of stars formed within the past Gyr <cit.>. The SSP-equivalentandrange are much closer compared toresults because young populations do not contribute much to the metal lines, which is also indicated in M21. This figure confirms that both line-strength indices andanalysis can identify α-rich (thick disk) and α-poor (thin disk) populations.§.§ Mass Fraction Distributions of Different Galaxy Components In addition to calculating the mass-weighted parameters, we can also study the mass fraction distribution of stellar populations along the age andspace. This is done by using weights of templates from .Because the flux of each template is normalized to 1 , the weights array fromoutputs in our tests are equivalent to stellar population mass fractions.Therefore, we can study the mass distribution of any component of the Galaxy.M21 employed multiple components morphological fitting to a Spitzer 3.6 micron image to obtain regions dominated by the boxy/peanut bulge, nuclear disk, and thin and thick disks. In Fig. <ref>, we artificially select similar regions based on the locations (x, y), kinematics, and stellar population parameters of different components of M21, and name them “upper central", “inner central", “thin disk" and “thick disk", as shown in different colors.We call them “upper central" and “inner central" because there is no boxy/peanut bulge and nuclear disk in the GCE of S21.Note these component definitions are purely following those in M21 to mock their data analysis. 
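To illustrate how such region definitions translate into the component-wise distributions discussed next, a minimal sketch is given below; the selection boundaries and array names are hypothetical stand-ins for the masks adopted from M21.

```python
import numpy as np

def component_mass_fraction(bin_weights, bin_x, bin_y, region):
    """Combine the weight grids of all Voronoi bins falling in one region.

    bin_weights : array of shape (n_bins, n_age, n_mh)
    region      : callable taking (x, y) arrays and returning a boolean mask
    """
    mask = region(bin_x, bin_y)
    total = bin_weights[mask].sum(axis=0)
    return total / total.sum()  # normalize total weights to one per component

# e.g. a rough "thin disk" selection near the mid-plane (boundaries hypothetical)
thin_disk = lambda x, y: (np.abs(y) < 5) & (np.abs(x) > 10)
```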
In reality, radial scale lengths of the thin disk (R^t) and thick disk (R^T) in NGC 5746 and the MW are very different (R^t_MW=2.6±0.5 kpc and R^T_MW=2.0±0.2 kpc from ; R^t_NGC 5746=6.1 kpc and R^T_NGC 5746=8.2 kpc from M21). For each component, the mass weights of all the Voronoi bins are combined to represent its mass fraction distribution. Fig. <ref> shows the mass fraction distributions of these four components, respectively.The total weights are normalized to one for each panel.The left column is the results fromwith R=5, and the right column shows the true values calculated using particles' properties in the mock stellar catalog. We only plot the weights above 0.001 . For the thin disk, true values indicate a rapid metallicity enrichment history ∼10 Gyr ago, and it slowly increases later on. While fromresults, mass fraction weights are dominated by two regions, which are relatively young (2∼4 Gyr) and old (12∼14 Gyr) stellar populations and the metallicity enrichment trend is indistinguishable.The same features are also seen in other Galaxy components. For the overdensity region of 2∼4 Gyr, we think this is due to the decrease in template age bin size from old to young population. For the region of 12∼14 Gyr, we think this is because at the level of SNR∼80 , the differences between nearby old, metal-rich templates are smaller than the Gaussian noise added on the spectra, causing the degeneracy in this parameter region.More details will be discussed in Section <ref>. For the thick disk, true values also show the metallicity enrichment trend, but the populations born less than 1 Gyr are more metal-poor than the thin disk;This is due to the geometrical definition of the thick disk, which contains younger and relatively more metal-poor stars that are flared in the outer disk (x∼[60, 80] arcsec). Given that NGC 5746 from M21 is four times more massive than the MW, it has a larger scale length and the definition of its thick disk might not apply to the MW. Another reason is due to the projection effect, where young stars flared in the outer disk could appear at the front and back of the line of sight in the region of x∼[30, 60] arcsec. The upper central shows a similar trend with the thick disk but is more dominated in the old populations, but this domination is smoothed out in theresults; For the inner central, true values are showing a clearer chemical enrichment trend and there is no new population born with <-0.2 dex in the young region, while theresults show again two regions with higher mass fractions, one at ∼2 Gyr and the other at 12∼14 Gyr.Therefore, except for the overestimation in the young and old regions, the mass fraction distributions ofare generally in agreement with true values.In Fig. <ref>, we integrate the mass fraction distributions in Fig. <ref> along the two axes and derive age anddistributions for each component. The top panels are mass distributions as a function of age which is the definition of star formation history or star formation rate, and the bottom panels are as a function of . Results fromare in red lines and the true values are in black lines.For each panel, we calculate the correlation between these two lines to quantify their similarity. For the age distributions, we find the same as in Fig. <ref>. Compared to the true values, theresults of all the components demonstrate an overestimation of weights in the ranges of 2∼4 Gyr and 12∼14 Gyr and underestimation in the range of 4∼11 Gyr. 
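The marginalization and the similarity measure used in this comparison can be sketched as follows; this is a minimal illustration in which frac2d is assumed to be a normalized age-metallicity mass-fraction grid for one component.

```python
import numpy as np

def marginal_distributions(frac2d):
    """Integrate a normalized (n_age, n_mh) mass-fraction grid along each axis."""
    age_dist = frac2d.sum(axis=1)  # mass fraction as a function of age
    mh_dist = frac2d.sum(axis=0)   # mass fraction as a function of metallicity
    return age_dist, mh_dist

def similarity(recovered, truth):
    """Pearson correlation used to quantify how well a 1-D distribution is recovered."""
    return np.corrcoef(recovered, truth)[0, 1]
```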
The underestimated regions seem to compensate for the overestimated regions. And the thin disk, inner central, and global panels show a peaky feature with age <1 Gyr. For thedistributions,results are consistent with true values for most regions, as indicated by the correlation coefficients. However, weights in the metal-rich region are overestimated by . Other than that, the overall trend of results fromis consistent with true values.In conclusion, we find thatcan recover the broad trends of 2-D mass fraction distributions for different components, but with overestimation in 2∼4 Gyr, 12∼14 Gyr and most metal-rich regions. When integrating into 1-D age and metallicity distributions, these inconsistencies are significant. According to correlation coefficients,distributions are more consistent with the true values than age distributions. We will investigate the reasons for such differences in detail in Section <ref>. §.§ Distributions of   along Different Galaxy Locations (R, z)According to S21, different radial migration and kinematic heating rates will cause different fractions of stars in the distinct -rich and -poor sequences. A recent study by <cit.> derived this distribution from an external galaxy UGC 10738 using MILES α-variable templates <cit.> and concluded it has similar bimodality distributions to the MW. Since we know the true values from the mock stellar catalog particles, here we explore the recovery ability ofon measuring the changes of this bimodality at different projected R and |z|.Different from <cit.> where they compared integrated values of UGC 10738 with individual stellar values of the MW, our integrated-to-integrated value comparison is more direct and there is no systematic bias due to different methodologies. This direct comparison also provides an example for future studies on comparing an integrated version of the MWbimodality with MW-like edge-on galaxies from MUSE observations (e.g., GECKOS survey ).The results are shown in Fig. <ref> where we separate different locations in the same way as <cit.>.For each panel, the total mass fraction given by true (dotted lines) and(solid lines) is normalized to 1, respectively.Blue lines are mass fraction distributions for =0.0 dex while red lines represent =0.4 dex.The overall trends for -rich and -poor sequences fromare consistent with the true values for most of the panels, which indicates both sequences are well recovered by , as also shown by correlation coefficients. However, there are some discrepancies such as in the inner regions (R<5 kpc) and thin disk (|z|<0.5 kpc), whereshow smoother metallicity distributions in both the blue and red lines compared to true values.In addition, the mass fraction is underestimated at -1<<0.0 dex and overestimated at the most metal-rich regions, which is also seen in metallicity distributions in Fig. <ref>. Because we do not see the tail at the most metal-rich regions in the true values for bothsequences, this difference is more likely due to the degeneracy of metal-rich populations; these spectra are very similar which means thatwill obtain more uncertain results.When comparing differentsequences, the metallicity positions with the highest mass fraction are identical in bothand true values. This differs from our understanding of - relation of the Milky Way. 
It is more likely due to the limited number ofandbins in MILES templates that cause the differences of metallicity distribution in differentbins to be indistinguishable.Similar findings are also mentioned by <cit.> when they analyzed the -bimodality of UGC 10738. Therefore, we emphasize that moreandbins in the spectral templates are necessary to obtain more detailed distributions of this bimodality.It can help to make it more feasible to identify the effect of different radial migration and kinematic heating efficiency from the relative fractions of these two sequences when compared with other IFS observations. § DISCUSSION§.§ Systematic Offsets in the Kinematics Recovery In Fig. <ref>, we show the deviations between the kinematics map fromand the true values.To better illustrate the offsets of each moment, we plot their residuals as a function of true velocity dispersion σ_true in Fig. <ref>, in which the y-axis value is calculated byfitted value subtracting the true value.Each data point represents one Voronoi bin.The blue dotted lines represent the instrumental dispersion () in MUSE spectral resolution. We also plot the zero-line in red to guide the eyes. The first row demonstrates the results in Fig. <ref>, and it clearly shows the systematic offsets for each moment: Δ increases with σ_true, andis overestimated for most of the Voronoi bins; as for h_3 and h_4, the residuals have a slight positive slope with .From Fig. <ref>, we find that several Voronoi bins have large absolute h_3 and h_4 true values (e.g., central region and the thick disk).These higher-order moments are not well constrained when SNR is relatively low <cit.>. Therefore,with keyword will penalize them towards lower absolute values during the fitting to make the LOSVD towards a Gaussian.However, using different penalizations can result in different kinematic measurements (see full analysis using SAMI galaxies by ). In our case, when h3 and h4 are larger than 0.15/0.2 the Gauss-Hermite approximation no longer works well, so differences are expected.Therefore, we turn off the penalization of these higher-order moments and re-measure the kinematics, and show the results in the second row of Fig. <ref>.A spatially distributed map can also be seen in Fig. <ref> of the Appendix <ref>. After turning off penalization, both h_3 and h_4 fromare visually closer to the true values in Fig. <ref>.However, the second row of Fig. <ref> still shows similar trends ofand , and h_3 and h_4 are very scattered, which indicates penalization is not the cause of the systematic offsets. Another possible reason causing the kinematic inconsistency could be the low spectral resolution of the MUSE instrument.As shown in the first row of Fig. <ref>,values for most of the Voronoi bins are smaller than the MUSE instrumental dispersion ∼62 .In this case, the recovery of kinematics will be uncertain. This has been pointed out in Fig. 3 of <cit.>, and the only way to avoid it is to increase the instrumental spectral resolution. The explanation is when  < , duringfitting, the broadening of one sharp spectral feature is less than the distance to its nearby wavelength pixels In this case, the nearby wavelength pixels have minor changes and this brings difficulties forto measurecorrectly, and h_3 and h_4 will go towards zero because there are also not enough pixels to identify the skewness. 
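A toy example of this undersampling argument, with entirely synthetic numbers, is sketched below: a sharp line is broadened by Gaussian LOSVDs of different σ on a grid sampled at roughly the MUSE velocity scale, and the change in the profile stays small whenever σ is well below the pixel sampling, which is what makes the fit poorly constrained there.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

velscale = 55.0  # km/s per pixel, roughly the MUSE sampling
pix = np.arange(400)
# one sharp absorption line on a flat continuum (widths in pixels, synthetic)
spectrum = 1.0 - 0.5 * np.exp(-0.5 * ((pix - 200) / 1.5) ** 2)

for sigma in (30.0, 60.0, 120.0):  # km/s
    broadened = gaussian_filter1d(spectrum, sigma / velscale)
    change = np.max(np.abs(broadened - spectrum))
    print(f"sigma = {sigma:5.1f} km/s -> max change in the line profile: {change:.3f}")
```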
To remove the effect of low spectral resolution, we tested the kinematics recovery again by using PEGASE-HR templates <cit.> generated by Kroupa IMF and PADOVA 1994 isochrones, which has a higher spectral resolution (FWHM=0.55) than MUSE (FWHM∼2.65).The instrumental velocity scale of PEGASE-HR is ∼15 , which is smaller than the minimum σ_true of our Voronoi bins.We repeat all the procedures in Section <ref> to generate new cubes using PEGASE-HR, and then apply them to the GIST pipeline to measure the kinematics. We keep the original PEGASE-HR spectral resolution throughout the whole process. The third row of Fig. <ref> shows the kinematics recovery using PEGASE-HR (also see the kinematics map in Fig. <ref>).Compared to the first row, the residuals of each panel are slightly better or clearer when using higher spectral resolution templates. However, all these four moments fromstill have the same systematic offsets to the true values.Most improvements are for h_3 and h_4, with less bias to zero values because thehigher spectral resolution helps identify skewness and kurtosis.We also show the results by turning off penalization in the fourth row (also see the kinematics map in Fig. <ref>), and find the h_3 and h_4 residuals are identical. Therefore, turning off penalization helps with low-spectral-resolution kinematics measurements, but does not affect higher-resolution spectra. The higher spectral resolution improves the kinematics recovery but does not fix the systematics offsets. Fundamentally, duringfitting process, SSP templates are loaded and convolved with a Gauss-Hermite function by FFT to match the observed spectrum <cit.>, then it returns one series of LOSVD moments which are best fitted to the observed spectrum of each Voronoi bin.This meansassumes all the templates having the same kinematics (, , h_3, h_4) in a line-of-sight, i.e., stellar populations with differentand ages are assumed to have the same LOSVD.But in real galaxies, due to the joint process of chemical enrichment and dynamical movements of stars, populations with differentand ages are different in LOSVDs.This effect is non-negligible for edge-on projected galaxies because metallicity and age gradients along the disk are the strongest.Therefore,should change withand age, which disagrees with the analysis used in most of the previous studies when employing .Because of the relation betweenand , stars with differentshould also have different .To investigate it in our mock stellar catalog, we select one Voronoi bin and split the included particles into three groups usingand age, respectively.We reserve this investigation onfor the future due to the limited number oftemplates. Then, we plot the LOSVDs of these groups of particles, weighted by each particle's total flux, and show them in the first row of Fig. 
<ref>.In the top-left panel, particles with -0.8<<-0.4 dex have the sharpest distribution, while particles with -0.2<<0.4 dex show a nearly normal distribution.In the top-right panel, particles with 12<age<14 Gyr have the broadest distribution, while particles with 0<age<8 Gyr have the narrowest and most peaked distribution.The red line and black line representfitted and the true LOSVD for this Voronoi bin, with kinematic parameters texted in the top right corner, respectively.Compared to the true LOSVD, the width offitted curve in red is close to the youngest populations (0<age<8 Gyr), which is reasonable because these populations dominate the light.However, the top right panel shows thattends to also care for the oldest populations by having a more skewed tail of their LOSVD ataround 150∼300 . Even though the youngest populations dominate the spectral light, the old populations have the most features along the whole fitted wavelength region. Therefore, the differences of the skewness in the region ofat 150∼300  indicate the effect caused by different spectral features having different LOSVDs. This figure strongly suggests there are limitations on current techniques to obtain unbiased kinematics due to the dependence ofon age and .We made a new experiment to verify this effect onresults. Firstly, we re-generate mock MUSE cubes following the same procedures as above, but not using each particle'sto shift the spectrum. In this way, we obtain non-kinematics mock data cubes. Next, for each Voronoi bin, we select all the particles included and obtain itshistogram, which is the true LOSVD as shown in grey in Fig. <ref>. Then, we directly use this histogram as the LOSVD kernel function and convolve it with all the spectra in this Voronoi bin to add kinematics. We do it for all the Voronoi bins and obtain new mock cubes, where spectral templates with differentand age have the same LOSVDs. Therefore, in this way, we eliminated the relation betweenand ages as mentioned above. Finally, we employto the new cubes (hereafter LOSVD-Convolved cubes) and measure the kinematics.The fifth row of Fig. <ref> shows the kinematics recovery results.It is clear that all the Gauss-Hermite moments are well recovered with all the data points aligned to the zero line.The red and black lines in the bottom row of Fig. <ref> also indicate the correction ofmeasurements. We also plot the kinematics maps in the same way as Fig. <ref> in Appendix (Fig. <ref>) to better demonstrate the consistency. We also tested the kinematics recovery on LOSVD-Convolved cubes generated using MILES templates, as shown in the sixth row of Fig. <ref> (see also Fig. <ref>).In this low spectral resolution, even though we make different populations have the same LOSVD for the spectra, there are still systematic offsets appearing in all the panels. We also demonstrate in the last row the results on LOSVD-Convolved cubes generated using PEGASE-HR templates, but in MUSE spectral resolution to remove the effects of different templates (see kinematics maps in Fig. <ref>). The appearance of offsets indicates the effect purely due to insufficient spectral resolution.In conclusion, all these results strongly suggest that potentially both the low spectral resolution and variant LOSVD contribute to the systematic offsets we see in residual panels of Fig. 
<ref>.In the future, to remove the systematic offsets of stellar kinematics measured by , one needs to use an instrument with higher spectral resolution than MUSE.This means it is necessary to also increase the exposure time to achieve the same SNR.As we knowchanges withand age in the line-of-sight of real galaxy observations, it might be necessary to allow different templates to have different LOSVDs during thespectrum fitting to obtain more accurate results.However, since kinematic and stellar populations measurements via full spectrum fitting have already been a highly degenerate problem. Allowing variable LOSVDs will increase the degeneracy by a factor of the number of templates. In this case, the algorithm is easy to obtain incorrect results. One possible way would be simply assuming a quantitative relation between LOSVD and metallicity and age at different locations of the galaxy and that this relation can be expressed by an analytical equation.This is equivalent to adding a prior to thefitting process to improve the accuracy of kinematics measurement without adding degeneracy. Even though this way has many limitations, it will be still useful for the analysis of Milky-Way-like galaxies, andcan help provide this prior for galaxies in different projections.Apart from the above analysis oninconsistencies with the true values, Fig. <ref> also shows the disagreements betweenhistogram (grey line) and the parametrically best-fitted true LOSVD (black line). Even though it does not affect our conclusions above, we indicate the need for non-parametric techniques (e.g., BAYES-LOSVD ) to measure kinematics rather than using Gauss-Hermite equation, especially for theregion of 150∼300 .§.§ Inconsistency of Mass Fraction Distributions Recovery In general spectral fitting, measuring mass fraction distributions of stellar populations is a highly degenerate problem, especially when one wants to obtain the star formation history of the Galaxy at different components. We follow the data analysis procedures in M21 to divide the template's age and metallicity bin size. However, most of the previous studies did not consider the bin size. Nevertheless, the mass fractions increasing from young to old and metal-poor to metal-rich populations found in Fig. <ref> is still seen in some components of the galaxies (e.g., ).In the mock and real data tests of <cit.>, the mass fraction distributions are also smoothed towards old and metal-rich regions (see their Fig. 5).The over-dense feature of young populations at around 2∼4 Gyr in our Fig. <ref> is also seen in some of the above-mentioned studies. In this section, we re-investigate all these tests more comprehensively using mock cubes ofand discuss the differences in the mass fraction distribution from , including whether or not to divide the templates' age and metallicity bin size, and try to address potential reasons causing the inconsistency to the true values. We first convert the age distribution x-axis of Fig. <ref> from linear to log-scale, because MILES templates are almost evenly scaled in , then the width of each step is close to be equal.Next, we re-plot the age distribution recovery in Fig. <ref> and calculate the correlation coefficients using the original age grids.The first row shows age distribution in mass fraction , which is the representation of the star formation history of each component. The thin disk, inner central, and global panels clearly show two overdensity regions, which are pointed out by two green boxes. 
The thick disk and upper central panels also have peaky features in these boxes, but not significant.The second row shows just the mass fraction, which is the direct output from . Opposite the first row, all the panels in the second row show relatively smooth distributions. Therefore, given the differences between the first and second rows are whether or not dividing bin size, we speculate the two overdensity regions are artifacts due to the bin size arrangements to the template grids. To verify our speculation, we plot in Fig. <ref> a zoom-in of the two overdensity regions pointed out in the top right panel of Fig. <ref>.The blue and orange lines are mass fraction () and mass fraction distributions, equivalent to the first and second row of Fig. <ref>, respectively. The mass fraction for each panel is multiplied by a factor for better visualization. We also plot the age bin size of MILES templates as the grey line and the scale in the right y-axis. From these two panels, we find the peaky features of mass fraction () (blue line) appear when the age bin size experiences a decrease from old to young ages, and the mass fraction distribution (orange line) is still relatively smooth. Therefore, we can confirm the peaky features of star formation history fromare due to the age bin size of templates. This also explains the better recovery ofdistributions (bottom row of Fig. <ref>) which is due to the almost linearly spacedgrids. Previous studies applied regularization <cit.> into obtain smooth mass fraction distributions.The regularization works as an extra term in the equation to be minimized during the optimization to damp high-frequency variations in the mass weights distribution along spectral templates grid (see details in Section 3.5 of ) and leads to smooth mass weights distributions. However, the source code ofindicates that this smoothness has an effect on mass weights rather than mass weights per bin size. This explains why mass fraction distributions are smooth in our results, and whether mass fraction () distributions are smooth depends on how the template grids are spaced.A further test by modifyingand taking into account the bin size during regularization is needed to verify in the future. From now on, we only use the mass fraction without dividing the bin size to justify the recovery consistency, because this is expected to be achieved by . Although the currentregularization algorithm does not consider the bin size of the template age grid and different regularization strategies (e.g., R values, order of regularization) can affect the mass fraction recovery (will be discussed in Section <ref>), we still see the inconsistency betweenand true mass fraction distributions of the oldest populations in the bottom row of Fig. <ref>.The main reason is that spectral templates in the oldest and metal-rich regions are very similar and indistinguishable at the current SNR level. Therefore, thealgorithm may get trapped into the local minimum when measuring mass fraction weights in these age- regions. We also confirm in the Appendix (Fig. <ref>) that using LOSVD-Convolved cubes does not improve the results significantly. One way to increase the quality of the mass fraction recovery in old and metal-rich population regions might be to increase the SNR of the spectrum.This can be achieved by requiring a higher target SNR during the process of Voronoi binning but with the cost of reducing the number of Voronoi bins, i.e., details in the parameter distributions.We show in Fig. 
<ref> the age anddistributions at different SNRs (shown in different colors), compared to the true values shown in black lines. This figure is obtained by running the GIST pipeline on the mock MUSE cube generated in Section <ref> with different target SNRs during Voronoi binning, and then the mass fraction distributions of all the Voronoi bins are added together to represent the global mass distributions of the galaxy, and finally integrated through age andaxes, respectively. To make it a fair comparison, we make sure that the spatial pixels used to generate Voronoi bins are the same.According to the correlation coefficients, this figure indicates that with the increase of Voronoi bin SNR, the global mass fraction distributions in both panels are more consistent with the true values.Specifically, the weights of metal-rich and old stellar populations are going lower with higher SNR. Therefore, Fig. <ref> indicates that increasing SNR can improve SFH recovery. However, we note that in reality, an integrated spectrum with SNR larger than 1000 is impractical. In addition, even with SNR at 200is pointless because in this case numerical issues, spectral library uncertainty, etc become the dominant factor.To study the effect of different templates, we re-plot age anddistributions of mock cubes generated by PEGASE-HR templates in Fig <ref>. PEGASE-HR templates are evenly spaced in .During thefitting, we use the same templates and fitting strategies (fitting wavelength region, regularization, etc.) as those in Section <ref> and also degrade to MUSE spectral resolution to remove the resolution differences (even though we find increasing spectral resolution does not improve the recovery significantly after comparing to Figure <ref> of a PEGASE-HR spectral resolution version).The Voronoi bin allocation is the same as mock cubes generated by MILES templates. According to the correlation coefficients, mass fraction distribution using PEGASE-HR in Fig. <ref> are more consistent than those using MILES in Fig. <ref> and Fig. <ref> for both age and .Moreover, the mass fraction -panels (second row) have almost no blobs than the results using MILES (second row of Fig. <ref>), and the oldest populations have a more modest increase. Therefore, in our test, PEGASE-HR performs better during the mass fraction recovery and could provide more consistent mass fraction distributions than MILES. We think this is likely due to the templates' perfectly even spacing in , which leads to smoother input and is preferred byduring regularization. However, the PEGASE-HR templates used in our tests are generated using PADOVA 1994 <cit.> isochrones but MILES are generated using BASTI isochrones <cit.>. Whether isochrones differences can affect mass fraction distribution recovery ability requires further investigations. Moreover, the current limitation of PEGASE-HR templates is they only have one bin in , we emphasize here again the need for multiple -grid templates. In the future, non-parametrically, it is worthwhile to test the mass fraction recovery ability using a linearly spaced template grid or modifyingto regularize the fitting in terms of mass weights per Gyr or dex. Parametrically, one can try to recover both chemical enrichment history and star formation history by taking into account chemical evolution theories, which will help remove counter-intuitive features in the mass fraction distributions. One approach is to use Bayesian spectral fitting codes, such as Prospector <cit.> or BAGPIPES <cit.>. 
Another example is the semi-analytic spectral fitting method from <cit.>.This method applies full spectrum fitting with the predicted best-fit spectrum from chemical evolution models, which is similar to adding a prior onduring the fitting. Fig. 6 of <cit.> showed that this method has better consistency on mass fraction recovery than , and the over-estimations in metal-rich, 2∼4 and 12∼14 Gyr regions are removed successfully. Therefore, it provides a way to measure chemical enrichment and SFH for future studies accurately. One important caveat is that their chemical evolution model is a close-box model, which does not apply to galaxies dominated by frequent passive merger events (e.g. NGC 7793 in ). In addition, this method does not take into account the relation of chemical abundances with velocity, which are important indicators for radial migration and kinematic heating. §.§ Effect of RegularizationIn this section, we focus on the effects of different regularization strategies on mass fraction distributions. There are two parameters controlling the regularization in , which areand , whereor R applies linear regularization to the weights during thefit andcontrols the order of regularization. <cit.> investigated mass fraction differences of mock data with different order of regularization (see their Fig. 5 and Fig. C2) and concluded that the choice of order of regularization does not affect the results over a significant level. <cit.> tested the mass fraction distributions recovery of the second- and third-order of regularization and compared them to the true values using stellar particles from an EAGLE simulated galaxy. They found results from third-order regularization are more consistent with the true values. <cit.> compared the mass fraction recovery consistency of non-regularizedfit, an average of 100fits to Montecarlo realizations, and single regularizedfit (see their Fig. 1), and concluded that regularized results are comparable to that of the average of multiple realizations.The selection of R value was briefly discussed in Section <ref>.Here we re-investigate this question in detail by applying differentandstrategies on our mock cubes generated in Section <ref>.The results are shown in Fig. <ref>, where we plot the age distributions on the top panels anddistributions on the bottom panels for different components of the galaxy. We also plot the global mass distributions in the last column. We apply five different strategies during the fitting: the first three cases use first-order regularization with R=5, R=10, R=20, and R=R_max (30∼100), respectively; the last uses R=5 and second-order regularization. Black lines are the true values. When comparing results with the same order of regularization but different R, we find results with larger R are smoother, and their weights at old populations are relatively smaller than results with smaller R. However, in thedistribution panels, weights are going more toward metal-rich populations and the distributions become less consistent with the true values for all the components. For results with the same R but different orders of regularization, we see results with  =1 show better consistency with the true values than those with  =2. Specifically, results with  =2 have more mass fractions in the oldest populations.In conclusion, Fig. 
<ref> indicates that for this mock cube, results using the first order of regularization are better than results using the second order of regularization; increasing R can help obtain smoother results but the weights will go towards metal-rich regions. Given we do not have the codes to perform third-order regularization, it is hard to say which order is the best. Intuitively, it seems higher-order regularization tends to have more mass fraction in the old populations. Therefore, we think the third-order regularization is preferred for EAGLE simulated galaxy test <cit.> because the particles' mass weights from the EAGLE simulations are naturally dominated by old and metal-rich populations, then the third-order regularization tends to fit better in this region.Ourmocked catalog has no particle in the metal-rich and oldest regions, so the first-order regularization is better. However, all the above discussions need further verification and we will test the third-order results in the future.For the real galaxy fitting results of <cit.> and M21, they all used second-order regularization and showed an overdensity of mass weights in the oldest, metal-rich populations for different components of the galaxies, which makes it difficult to discuss differences in the formation epoch between these components, and it is not possible to tell if the overdensity is true or instead fromartifacts. §.§ Caveats when using In this section, we give some caveats when using thecode. Since this code is using the mock stellar catalog fromwhich embedded the GCE model of S21.The success of this model is that it can reproduce - bimodality which is quantitatively consistent with the APOGEE observations of the MW in <cit.>. In reality, a more complex process is likely to exist as non-asymmetric perturbations such as spiral arms, bars, interlopers, etc. The current model does not have these features but can be added in the future; In addition, S21 suggests thebimodality is a natural result of radial migration and kinematic heating of stars, which is opposite to some clarifications by simulations which indicated that a merger is needed to contribute to the formation of bimodality (e.g., ), and this model lacks stars in the Galactic halo. Therefore, this code is not applicable for comparing the merger features or the halo distributions; This model also lacks descriptions of parameter distributions in the bulge, especially the nuclear disk in the MW center (e.g., ) and chemodynamics of the B/P bulge (e.g.. ), which could be improved in the future. § FUTURE STUDIESIn this study, we did not apply extinction on the mock data cube because this is the work purely on testing the kinematic and stellar populations recovery via full-spectrum fitting. In real edge-on galaxies, extinction can have an essential effect because it can obscure nearly 50% of the total light from stars in the thin disk, making SNR in this region lower.With thecode, we can test the kinematics and stellar population recovery of mock cubes with extinction. The extinction model inis assumed as a double exponential distribution mainly in the thin disk, which can be integrated through the line-of-sight, and each particle will have avalue. Then, we can add the extinction effect to the spectrum by using a specific reddening curve (e.g., ), which is also the strategy thatapplies to estimate extinction. 
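A minimal sketch of this dimming step is shown below; the default curve is only a crude power-law placeholder for whichever published reddening law is eventually adopted, and the function names are our own.

```python
import numpy as np

def apply_extinction(flux, wave, a_v, k_lambda=None):
    """Dim a spectrum by A_lambda magnitudes: F_obs = F * 10^(-0.4 * A_lambda).

    k_lambda(wave) should return A_lambda / A_V for the chosen reddening curve;
    the default below is a rough power-law stand-in, not a published law.
    """
    if k_lambda is None:
        k_lambda = lambda w: (w / 5500.0) ** -1.3  # placeholder optical curve
    a_lambda = a_v * k_lambda(np.asarray(wave))
    return flux * 10.0 ** (-0.4 * a_lambda)
```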
<cit.> has already tested the effect of extinction on full-spectrum fitting results using one stellar population template, we can re-investigate it with a realisticmodel and also study how different extinction models can affect the results. In addition, several non-asymmetric features should be added to the chemical evolution model like the spiral arms, bars, mergers, and halo particles to mimic a more realistic MW. It is also necessary to add gas emissions to . There are several improvements can be made to the choice of SSP models. The current version ofhas embedded MILES and PEGASE-HR templates. In the future, we will add other empirical models such as BC03 <cit.>, X-Shooter <cit.>, and theoretical models like STARBURST99 <cit.> and BPASS <cit.>, which can satisfy different science goals since different models have different advantages. Some interesting tests can also be executed, e.g., using one SSP model to generate mock cubes and fit them with another model, which can help study the effect of SSP template uncertainties on full-spectrum fitting results. In addition, even though we showed in Section <ref> that the -rich and -poor populations can be well identified using twobins, it is still essential to add more bins to obtain detaileddistributions. The Milky Way results in <cit.> have shown that stars in the thin and thick disk havevalues about 0.0 and 0.2 dex, respectively, and there are some stars with negative , which is also shown in thecatalog.However, it is challenging to use current MILES α-variable templates to identify these twosequences because most of the weights are in the bin of  =0, and it is also difficult to assign a spectrum to particles with negative . Morebins (e.g., sMILES ) will help on the study of - distributions for external galaxies at different components. According to Section <ref>, given that spectral resolution is important in stellar kinematics measurements, it is also necessary to increase both the instruments' and stellar population models' spectral resolution. Current IFS observations such as MUSE perform well on kinematics measurements of the galaxies' central region. However, to go deeper and obtain distributions of the outer regions, especially the outer thin disk, which normally has lower dispersion, one might need to use instruments and SSP templates with a higher spectral resolution to obtain more accurate kinematics maps.Other than increasing spectral resolution. As discussed in Section <ref>, a prior can be added to thefitting process which assumes the relation between LOSVD and metallicity and age at different locations of the galaxy can be expressed by a function. Then it can allow templates to have different LOSVDs and the degeneracy is not increased.The other way is to add the evolution of stellar kinematics to the semi-analytic model like the one in <cit.> and make it derive the chemical evolution histories along with kinematic processes directly by fitting with the integrated spectra, where the fitting process is constrained by several model parameters and more physical. We will test the feasibility of these methods in the future.As for mass fraction distributions, Section <ref> demonstrated how the bin size can artificially create peaky features in the star formation history. In the future, it is worthwhile to test fitting results using linearly spaced template grids or modifying thealgorithm on regularizing the fitting and considering the bin size in age and metallicity. 
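One simple post-processing version of this idea, assuming the bin edges of the template grid are known, is sketched below; regularizing directly on such a bin-size-normalized grid inside the fit would require changes to the fitting code itself.

```python
import numpy as np

def fraction_per_bin_size(frac2d, age_edges_gyr, mh_edges_dex):
    """Convert a mass-fraction grid into mass fraction per Gyr and per dex.

    frac2d has shape (n_age, n_mh); the edge arrays define the template bins,
    so uneven spacing (as in the MILES age grid) is divided out explicitly.
    """
    d_age = np.diff(age_edges_gyr)  # Gyr per age bin
    d_mh = np.diff(mh_edges_dex)    # dex per metallicity bin
    return frac2d / (d_age[:, None] * d_mh[None, :])
```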
Parametrical methods like <cit.> can also be used to test recovery abilities. Moreover, in this work, we inputted a smooth SFH for the tests. Whether the mass fraction distribution inconsistencies we found also apply to merger-dominated galaxies with multiple starburst phases can be investigated usinggenerated mock cubes of N-body/hydrodynamical simulations.The aim ofcode is to apply the knowledge we learned from the Milky Way to other Milky-Way-like galaxies by mocking the IFS observations as a bridge. Especially, we want to interpret physical processes such as radial migration, kinematic heating, and - distributions of other galaxies to answer the question of whether all the Milky Way analogs have similar processes to the MW, and if not, how much difference they can be. Therefore, when templates with morebins are available, we will perform a comparison of a mock Milky Way data cube fromwith a real galaxy observation from the GECKOS survey <cit.>, which will observe 35 edge-on Milky-Way-like galaxies, and see the feasibility of identifying these processes. § SUMMARY AND CONCLUSIONSIn this work, we present thecode which uses simple-stellar population models and the mock Milky Way catalog fromto generate a mock Milky Way data cube in the same data format as integral-field spectrograph (IFS) extra-galactic observations.We aim to eliminate the differences in analysis techniques between the Galactic and extra-galactic studies such as the use of individual stars vs. integrated stellar populations, the number density distributions of stars' parameters vs. mass- or light-weighted distributions, the results with or without projection effect, etc. The mock data cube can be put into the GIST pipeline to perform data analysis in the same way as extra-galactic observations and the results can be compared directly to external galaxies to study the similarities and differences between the Milky Way and its analogs. Therefore, this code is a bridge to link the Galactic and extra-galactic studies to understand the formation and evolution of disk galaxies.Thecode is flexible, allowing users to choose their preferred SSP templates, galaxy distance and inclination, spatial/spectral resolution, and field-of-view of the instrument. is also designed to mock current existing instrumental observations such as MUSE, SAMI, or MaNGA, as well as test the performance of future instrument designs. Moreover, it can also be applied to N-body/hydrodynamical simulations to generate mock observations.For the rest of the paper, we appliedongenerated mock cubes to test the ability ofto recover the stellar kinematics and stellar population parameters. The mock MUSE data cubes have a distance of 26.5 Mpc and inclination 86^∘, which is the same as NGC 5746 observations from M21.After comparing the true values calculated fromparticles' properties withresults, we found that there are systematic offsets between -recovered kinematic moments (, , h_3, h_4) and the true values. 
We confirm there are two reasons causing these offsets: * The velocity dispersionof most Voronoi bins are smaller than the instrument velocity dispersionin MUSE spectral resolution (FWHM=2.65).*changes with age andfor particles through the line-of-sight, but most previous studies assumed all stellar population templates have the same LOSVD during thefitting.By using the higher spectral resolution templates PEGASE-HR (FWHM=0.55) and applying the LOSVD-Convolved cubes which eliminate degeneracies ofwith age and , we can obtain consistent kinematic results with the true values.Therefore, we indicate the need to allow different templates to have different LOSVDs or assume a quantitative relation between LOSVD and metallicity and age at different locations of the galaxy, where the equation coefficients can be measured duringfitting.The latter method is equivalent to adding a prior to thefitting process to measure kinematics without adding degeneracy.We will perform tests on them and verify if kinematics recovery can be improved. In addition, our tests also indicate the need to use non-parametric methods such as BAYES-LOSVD <cit.> rather than Gauss-Hermite equation for more accurate measurements ofdistributions.In terms of stellar population parameters, we verified thatcan recover mass-weighted age, , andwith good consistency.Both the -rich and -poor populations can be identified with reasonable regularization values during the fitting. Using line-strength indices can also identify these structures. We found mass fraction distributions of stellar populations fromusing MILES templates on MILES-generated mock cubes show deviations compared to the true values, where mass weights normalized by the bin size are overestimated in regions of 2∼4 Gyr, 12∼14 Gyr, the most metal-rich regions, and underestimated in regions of 5∼10 Gyr. This could be due to many reasons including having low SNR that the flux error is larger than templates' similarities in the region of old and metal-rich populations, the uneven spacing of age grids in MILES templates, currentalgorithm in regularization. These findings can be a reference for future extra-galactic data analysis ofresults on real galaxy observations. When repeating the test by employingand PEGASE-HR on PEGASE-HR-generated mock cubes, the mass fraction distributions are better with larger correlation coefficients betweenand true results, but differences still exist. In addition, we found using first-order regularization can obtain better mass fraction distributions than the second-order.Our tests and conclusions are helpful in identifying the limitations of studying some galaxy evolution processes using current methods and provide direction for potential improvements in the future. Even thoughprovides a bridge to link the Galactic and extragalactic studies by transferring MW to mock IFS observations, there remains a need for future improvements to facilitate more accurate measurements of external galaxy properties such as -bimodality and to enable detailed comparisons of the MW with MW-like galaxies. These improvements include more accurate SSP models with higher spectral resolution and moregrids, more advanced spectral fitting algorithms, and instruments with deeper observations. 
Future tests usingare required including modifyingcodes to improve the recovery ability of kinematics and stellar population models, using templates with moregrids, and adding extinction, to achieve measurements of known parameters such as radial migration and kinematic heating efficiencies of the MW in external galaxies. § ACKNOWLEDGEMENTS We thank James Binney, Alina Boecker, Andy Casey, Scott Croom, Jesus Falcon-Barroso, Eric Emsellem, Jianhui Lian, Richard McDermid, Anil Seth, Yuan-Sen Ting, Glenn Van de Ven, Gail Zasowski, Ling Zhu and Zhuyun Zhuang for useful discussions.ZW acknowledges the HPC service at The University of Sydney for providing HPC resources that have contributed to the research results reported in this paper.ZW is supported byAustralian Research Council Centre of Excellence for All Sky Astrophysics in Three Dimensions (ASTRO-3D) through project number CE170100013.MRH acknowledges support from ARC DP grant DP160103747 and ASTRO-3D. SS is funded by ASTRO-3D Research Fellowship and JBH’s Laureate Fellowship from the Australian Research Council. JBH is supported by an ARC Australian Laureate Fellowship (FL140100278) and ASTRO-3D. JvdS acknowledges the support of an Australian Research Council Discovery Early Career Research Award (project number DE200100461) funded by the Australian Government.This research has also made use of Astropy[<http://www.astropy.org>], a community-developed core Python package for Astronomy <cit.>, numpy <cit.>, scipy <cit.>, matplotlib <cit.> and SpectRes <cit.>.§ DATA AVAILABILITYsource code is publicly available via <https://github.com/purmortal/galcraft>.All therecovery test data and figures can be shared with reasonable requests. mnras§ MODIFICATIONS OF THE GIST PIPELINEWe provide a list of modifications on the GIST pipeline <cit.> to improve the flexibility of this software: We add more templates such as the original and interpolated PEGASE-HR <cit.> to the software and allow the option to oversample the spectra when degrading to lower observation spectral resolution, which is for the same reason as we do for thecode in Section <ref>; We also added the option to select SSP templates with a certain age andrange in some special cases. When measuring stellar kinematics and stellar population properties, we add the options to choose , , and use real spectra noise during the fitting and normalizing the integrated spectrum and noise by the median of the spectrum, which will be important to perform the iteration for estimating R_max; For regularization, we add an option to estimate R_max following the procedures in <cit.> and save the stellar population results with R=R_max.We also allow further degrading to lower spectral resolution during the fitting when measuring SFH, which is mainly for the reduction procedures in M21 where they wanted to compare the mass-weighted stellar population parameters fromwith the SPP-equivalent parameters from line-strength indices in the same spectral resolution; We add the option to change the penalization valueduringfitting and measure mass weights uncertainties using bootstrapping following procedures in <cit.>.§ MORE CHEMODYNAMIC PARAMETERS RECOVERY RESULTS
========================================================================

Automatically generated reports from medical images promise to improve the workflow of radiologists. Existing methods consider an image-to-report modeling task by directly generating a fully-fledged report from an image. However, this conflates the content of the report (e.g., findings and their attributes) with its style (e.g., format and choice of words), which can lead to clinically inaccurate reports. To address this, we propose a two-step approach for radiology report generation. First, we extract the content from an image; then, we verbalize the extracted content into a report that matches the style of a specific radiologist. For this, we leverage RadGraph, a graph representation of reports, together with large language models (LLMs). In our quantitative evaluations, we find that our approach leads to beneficial performance. Our human evaluation with clinical raters highlights that the AI-generated reports are indistinguishably tailored to the style of individual radiologists despite leveraging only a few examples as context.

§ INTRODUCTION

Generating radiology reports from medical images is a crucial task in the field of medical imaging. Writing such reports is not only time-consuming and labor-intensive for human interpreters but also requires a high level of expertise <cit.>. Furthermore, these reports are often subject to inter-observer variability, potentially compromising the consistency and accuracy of the reports' findings. As a result, there is a growing interest in methods for automated radiology report generation, which can alleviate these issues and improve the overall efficiency of the diagnostic process.

Recent advances in large language models (LLMs) have shown great potential for generating high-quality text, with the ability to customize outputs based on user-specified instructions <cit.>. These models have been used to rephrase existing text, paving the way for new approaches to automated report generation in radiology. However, despite the promise of LLMs, their application to this task is not without challenges. One principal concern is their tendency to "hallucinate", i.e., to generate false information (even if plausible-sounding), which can be particularly problematic in high-stakes settings such as generating reports from medical images.

Previous attempts at automated report generation have largely focused on approaches that aim to produce fully-fledged reports directly from medical images (i.e., an image-to-report modeling task) <cit.>. However, this conflates the content of the report (i.e., the radiology entities and attributes that are described) with its style (i.e., the complement of radiology entities and attributes, or everything needed in terms of language, grammar, and structure to formulate a fully-fledged report on top of the content representation), which limits the flexibility and applicability of such models.

This problem is also reflected in the evaluation metrics employed by existing approaches. They frequently optimize for traditional natural language processing (NLP) metrics, such as BLEU, ROUGE, or METEOR, which measure the similarity of the generated report to a reference report based on lexical overlap. These metrics, while generally useful, may not correlate with the clinical correctness or usefulness of the report <cit.>.
Previous work has demonstrated that optimizing for clinical metrics that measure the content's relevance, such as the RadGraph scores <cit.>, is crucial in generating reports that accurately represent the image findings. The RadGraph score gauges similarity in extracted radiology entities and relations with a ground truth annotation. This emphasizes the report's clinical content as opposed to style or lexical overlap, and makes it an amenable measure of the report's usefulness and completeness in clinical practice. Here, we propose to generate radiology reports as a two-step procedure for disentangling report content and style. In the first step, a dedicated model is trained to extract pure content from the image. Specifically, it generates a structured representation (called RadGraph) of the entities and attributes that are present in the image. In the second step, a frozen LLM generates a stylized report from this structured representation. Given a few report examples as context, the LLM can on-the-fly adapt the report style closely to the style of a target radiologist or hospital template for whom the report should be drafted. This model stylization could offer several advantages for radiology workflows: flexibility, in generating reports targeted to their readership, such as ones with less jargon that are more accessible to patients; consistency, in ensuring clear communication between a radiologist and the referring physician, who may be accustomed to a particular style of reporting; and emphasis on preferred information, in highlighting findings most relevant to a specialist's scope of practice (e.g., follow-up on a patch of pneumonia, or the correct location of pacing lead in the heart).In the first step of our approach, we alter the supervision signal of an image-text model to a serialization of the clinical entities (as captured in RadGraph) and their attributes, rather than the full report text. This step ensures the content extraction model focuses only on generating the report's clinical content, measured by RadGraph score, rather than optimizing for traditional NLP metrics such as BLEU that may not correlate with the report's clinical relevance. By generating the clinical content first, we prioritize the report's clinical usefulness over its stylistic quality, which can be improved and even personalized in the second step.For the second step, we leverage GPT-3.5 (a generative LLM developed by OpenAI) to transform the predicted serialization of clinical entities and attributes from the image into a styled radiology report <cit.>. These models have shown great promise in generating high-quality text <cit.>, including summaries and paraphrases of existing text <cit.>, which we can use to inject a hospital-specific style, as well as to enhance readability and understandability. By separating the content generation and style injection steps, we can ensure the model optimizes for the relevant criteria for each step, resulting in a high-quality report.Our approach addresses the limitations of end-to-end image-to-report generation, as the LLM does not need to suggest facts about the image. Instead, it can focus on rephrasing the structured set of entities and attributes that were first derived from the image and infuse the desired style into the report. At prediction time, the LLM in-context learns a serialization-to-report mapping from a few examples of a target report writing style. 
Our method not only offers a novel solution to the challenges posed by previous approaches but also enhances the customization and adaptability of radiology reports generated by LLMs. This paper makes the following contributions: 0em * First, we develop a method for extracting structured content—that is, a serialized version of RadGraph—from radiology images. * Second, we propose a strategy for generating stylized reports from this extracted content by means of in-context learning from few style-specific example reports.* Third, our overall system (combining content extraction and style generation) achieves competitive performance at radiology report generation. * Fourth, in a human style evaluation clinical experts were not able to distinguish real reports from AI-generated ones that were adapted to the writing style of individual radiologists.§ RELATED WORKS§.§ Medical Report GenerationMedical report generation (MRG) has seen a recent insurgence in the field of medical AI. Early works <cit.> adhere to the methods of image captioning models <cit.>, leveraging deep CNNs to extract image features and RNNs to generate text descriptions in an encoder-decoder fashion. Meanwhile, emerging in several works was an auxiliary classification task to predict certain medical abnormalities, with the aim of more structured guidance for report generation <cit.>. Later, the use of the attention mechanism in MRG systems became increasingly prevalent <cit.>. To further bridge visual and linguistic modalities while incorporating medical domain knowledge, various kinds of knowledge graphs have been explored for use <cit.>. In our study, we use the classic encoder-decoder architecture, as it is a common denominator of many MRG approaches.Subsequent studies acknowledge the constraints of traditional natural language generation metrics when assessing medical reports, prompting a growing emphasis on ensuring clinical accuracy. RadGraph <cit.> is a dataset of entities and relations in full-text chest X-ray radiology reports based on a novel information extraction schema. <cit.> adopts the knowledge graph provided by RadGraph as general knowledge embedding. <cit.> employed a classification loss for medical concepts provided by RadGraph. <cit.> improves the factual completeness and correctness of generated radiology reports with a well-designed RadGraph reward. Most existing methods develop metrics around RadGraph and integrate that into the objective function; our approach, on the other hand, directly trains the model to generate a serialized (i.e., text) representation of RadGraph as we decouple the content and style generation and focus solely on the clinical correctness during the content generation stage. Additionally, serialized RadGraphs allow us to juxtapose dense content representations with stylized reports which enables style adaptation via in-context learning.§.§ Large Language ModelsRecent watersheds such as the Transformer architecture <cit.>, generative pre-training objectives <cit.>, and increased computing power have facilitated the training of large language models comprising billions of parameters <cit.>. These advances have significantly burgeoned model capability in tasks such as translation, summarization, and generating long text that closely resembles human language. In 2021, OpenAI announced GPT-3 <cit.>, a generative language model featuring an unprecedented 175 billion parameters. 
Their study introduces in-context learning, the ability of LLMs to learn to perform a task simply by being provided a few examples as context—without any parameter updates.LLMs have also demonstrated great potential in the medical domain. For instance, GPT-4 has been employed to post-hoc transform free-text radiology reports into structured reports <cit.>. As for LLM-based MRG systems, <cit.> utilized ChatGPT to generate medical reports based on features extracted by neural networks (e.g., disease classifier). However, their approach does not exploit in-context learning and thus has limited control over format and style.§ METHODS §.§ DatasetOur study makes use of the MIMIC-CXR <cit.> and RadGraph <cit.> datasets. MIMIC-CXR is a large dataset of 377 110 chest radiographs imaged at Beth Israel Deaconess Medical Center from 227 835 studies, with free-text medical reports. The RadGraph dataset is publicly available and comprises radiology text reports and corresponding knowledge graphs (Figure <ref>).To preprocess reports, we extract only Findings and Impressions, as other sections contain details that cannot be readily referenced from the image, such as patient demographics or lab results from different procedures. Findings refer to the direct observations from the image (e.g., opacity of lungs, catheter placement), while Impression summarizes the most urgent inferences and diagnostically relevant findings (e.g., presence of pneumonia).§.§ RadGraphAs notation, a RadGraph refers to any knowledge graph within the titular dataset. The nodes of a RadGraph are either anatomical (e.g, lungs, cardiomediastinal, carina) or observational entities (e.g., acute, abnormality). The edges are directed and heterogeneous, capturing three types of relations—modify, located at, suggestive of—between entities. Nodes and edges are automatically obtained via a named entity recognition and relation extraction model on MIMIC-CXR reports, employing the DYGIE++ framework from <cit.>. This embodies the Report → RadGraph preprocessing step, or content extraction in (Figure <ref>). We distinguish this from content generation (Section 3.4), which is the transformer-based prediction of serialized content based on the image.§.§ RadGraph SerializationWe serialize each RadGraph into a structured text representation (Figure <ref>), which serves as the supervision label for the content generation model (Section <ref>). This serialization acts as a dense content representation whose advantages over the free-text report include conciseness and the pruning of non-semantic content (e.g., style-filler words, radiologist-specific phrasing) from the report. The aim is to focus the model purely on generating the content backbone at this step, and defer the style injection to a later stage of the pipeline (Section <ref>).To exploit the graph structure of the input RadGraph, we first extract the weakly connected components (step (3) in Figure <ref>). These are the maximal subgraphs where any two nodes can be reached through a path of undirected links. Each component is thus a separate network of spatially and medically related entities, segmenting the chest X-ray's content into distinct regions of interest that can be serialized in parallel.For each component, we create a text span of all labelled entities. The keywords no and maybe are prepended to absent and uncertain entities, respectively. The entity ordering follows the syntax of the corresponding tokens in the report to ensure readability and lexical fidelity. 
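To make the serialization step concrete, the following is a minimal sketch of the component extraction and span construction just described. It is an illustration rather than the paper's released preprocessing code: the JSON field names (tokens, label, start_ix, relations), the label suffixes used to detect absent and uncertain entities, and the [SEP] delimiter are all assumptions.

```python
# Minimal sketch of the RadGraph serialization described above.
# Field names and the delimiter are assumptions about the RadGraph JSON schema.
import networkx as nx

ABSENT, UNCERTAIN = "DA", "U"  # assumed label suffixes for absent / uncertain entities

def serialize_radgraph(entities, delimiter=" [SEP] "):
    """entities: dict mapping entity id -> {"tokens", "label", "start_ix", "relations"}."""
    g = nx.Graph()
    g.add_nodes_from(entities)
    for eid, ent in entities.items():
        for _relation_type, target in ent.get("relations", []):
            g.add_edge(eid, target)  # undirected link, so components = weakly connected components

    spans = []
    for component in nx.connected_components(g):
        # Order entities by their position in the source report for readability.
        ordered = sorted(component, key=lambda eid: entities[eid]["start_ix"])
        words = []
        for eid in ordered:
            ent = entities[eid]
            if ent["label"].endswith(ABSENT):
                words.append("no")
            elif ent["label"].endswith(UNCERTAIN):
                words.append("maybe")
            words.append(ent["tokens"])
        spans.append(" ".join(words))
    return delimiter.join(spans)
```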
When the report contains both Findings and Impression sections (Figure <ref>), we analogously stratify the components based on their referenced location in the free-text report. This is crucial as the content representation should carefully distinguish between factual image information (Findings) and clinical inferences (Impressions), even if strongly supported. Within Findings and Impression, the components are concatenated and separated by delimiters. The two sections are combined into a single text, which is the report serialization and densely characterizes the full chest radiograph. In the case where the report is not bipartite (e.g., only Impressions), we unify the components under a singular section. §.§ Content Generation Model For the content generation model, we leverage the encoder-decoder architecture that has been widely employed in image-to-text systems. An image encoder takes chest X-ray images as the input and encodes them into a visual feature representation. In parallel, a text encoder reads clinical documents, such as doctor indications, and transforms the textual content into dense feature vectors. The visual and text embeddings are then added together and passed through a LayerNorm operation to form contextualized embeddings. The fused embeddings are then fed into the report decoder, which generates the serialized RadGraph word by word. The main architecture is adapted from <cit.>, but we simplified it by removing the classifier and the interpretation module to eliminate as many potential confounders as possible, since our goal is to evaluate the influence of the supervision signal. Image Encoder We adopt a DenseNet-121 <cit.> model pre-trained on the ImageNet dataset as the image encoder. For each input chest X-ray image I, it extracts a feature vector d ∈ R^e, where e is the embedding dimension. If an imaging study consists of more than one image, the feature is obtained via max-pooling across all feature vectors extracted from each image. d is then transformed into n low-dimensional disease representations D_img ∈ R^n × e. Text Encoder We use a Transformer encoder to extract features H={h_1, h_2, ..., h_l} from the clinical document text input with length l consisting of word embeddings {w_1, w_2, ..., w_l}, where w_i ∈ R^e is the vector representation of the i-th word in the text, e is the embedding dimension, and h_i ∈ R^e is the attended feature of the i-th word with respect to the other words in the input document. The features are then summarized into a set of n disease-related topics (such as pneumonia or atelectasis) to be queried from the document via learned queries Q={q_1, q_2, ..., q_n}, as proposed in <cit.>,

D_txt = Softmax(QH^T)H

where matrix Q ∈ R^n × e is constructed by vertically stacking {q_1, q_2, ..., q_n}, where each vector q_i is initialized with random values and subsequently refined through the attention process, and H ∈ R^l × e is formed by stacking {h_1, h_2, ..., h_l}. Fused Embedding We obtain the final, contextualized embedding D ∈ R^n × e by entangling the visual embedding D_img and text embedding D_txt,

D = LayerNorm(D_img + D_txt)

where D will be the input for the report decoder. Report Decoder We use the Transformer as the backbone of our report decoder to generate long, robust text. A total of 12 decoder components are stacked together, where each component consists of a masked multi-head self-attention component followed by a feed-forward layer.
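The image-text fusion just described can be sketched compactly in PyTorch. This is a simplified illustration rather than the exact implementation: the layer sizes, number of disease topics, and pooling details are assumptions, and the marked lines correspond to the attention and LayerNorm fusion equations above.

```python
# Simplified sketch of the image-text fusion described above (dimensions, pooling
# details, and module names are illustrative assumptions, not the exact implementation).
import torch
import torch.nn as nn
import torchvision

class FusedEncoder(nn.Module):
    def __init__(self, n_topics=20, embed_dim=512, text_layers=3, n_heads=8):
        super().__init__()
        densenet = torchvision.models.densenet121(weights="IMAGENET1K_V1")
        self.cnn = densenet.features                        # convolutional feature extractor
        self.img_proj = nn.Linear(1024, embed_dim)           # DenseNet-121 features -> R^e
        self.img_to_topics = nn.Linear(embed_dim, n_topics * embed_dim)
        layer = nn.TransformerEncoderLayer(embed_dim, n_heads, batch_first=True)
        self.text_encoder = nn.TransformerEncoder(layer, text_layers)
        self.queries = nn.Parameter(torch.randn(n_topics, embed_dim))  # Q: disease topics
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, images, word_embeddings):
        # images: (batch, n_images, 3, H, W); word_embeddings: (batch, l, e)
        b, n, c, h, w = images.shape
        feats = self.cnn(images.view(b * n, c, h, w)).mean(dim=(2, 3))  # pool spatial dims
        d = self.img_proj(feats.view(b, n, -1)).max(dim=1).values       # max-pool over images
        d_img = self.img_to_topics(d).view(b, -1, d.size(-1))           # D_img: (b, n_topics, e)

        h_txt = self.text_encoder(word_embeddings)                      # H: (b, l, e)
        attn = torch.softmax(self.queries @ h_txt.transpose(1, 2), dim=-1)
        d_txt = attn @ h_txt                                            # D_txt = Softmax(QH^T)H

        return self.norm(d_img + d_txt)                                 # D = LayerNorm(D_img + D_txt)
```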
The report generation objective (Equation 3) is defined as the cross-entropy loss between predicted words p and ground truth y. Here, we denote p_ij as the confidence of selecting the j-th word of vocabulary V in the i-th position in the generated text, and y_ij as a binary indicator of whether the j-th word appears in the i-th position of the ground truth. L = - 1/l∑^l_i=1∑^v_j=1y_ijlog(p_ij)§.§ Style Generation Step We describe the process of prompting a pre-trained LLM to generate reports from the serialization. This enables adapting the generation to a specific style by supplying the LLM with relevant in-context examples. Each example is a pair: serialization s_i from the RadGraph, and corresponding ground truth report r_i under the desired style. We use the gpt-3.5-turbo model from OpenAI, a dialogue-based LLM that accepts a sequence of messages as input, rather than a singular text prompt. This is useful for inserting style pairs seamlessly; we relay them as a back-and-forth conversation between user and assistant roles, where a user role supplies serialization s_i and an assistant responds with target report r_i for the language model to learn in-context. The chain of K examples {(s_i, r_i)}_i=1^Kis prefaced by a system role message indicating the LLM should act as the report-generating assistant, establishing its specific task within the radiology-based dialogue. The remaining prompt is structured as s_1:r_1,s_2:r_2,⋯,s_K:r_K,ŝ_eval:. At the end, the model is given an evaluation serialization ŝ_eval predicted from the chest X-ray image using our content generation model, cuing the LLM to generate the corresponding report prediction r̂_eval next. Note that in the zero-shot case, the prompt is just ŝ_eval:, with no preceding context examples. §.§ Evaluation Metrics We present a comprehensive quantitative evaluation of our approach with commonly-used metrics concerning both language fluency and clinical accuracy. For each metric, we display the mean x across the n test reports, as well as the 95% confidence interval (x± 1.96 ·σ/√(n)). Natural Language Generation Metrics (NLG) As for classical NLG metrics, we compute BLEU-2 and BERT scores. However, these metrics have relevant limitations due to focusing on lexical similarity or general (non-clinical) semantics, respectively, thereby lacking in the assessment of clinical similarity (for more details, see Section <ref>).Clinical Accuracy Metrics CheXbert vector similarity extends beyond BERT by utilizing CheXbert, a model trained specifically on datasets comprising chest X-rays. It computes the cosine similarity between the indicator vectors of 14 pathologies that the CheXbert labeler extracts from machine-generated and human-generated radiology reports. It is designed to evaluate radiology-specific information but its evaluation is limited to 14 pathologies. To address this limitation, we also adopt RadGraph F1 <cit.> that calculates the overlap in clinical entities and relations extracted by RadGraph from both machine-generated and human-generated reports.In this study, we lay emphasis on clinical metrics because we aim to generate reports in different styles while keeping accurate clinical information instead of reports that lexically match the ground truth in the dataset. §.§ Experimental SetupWe conduct four experiments to scrutinize each individual step and overall performance of our proposed strategy for report generation. 
This includes:0em * Image to Serialization: We evaluate content generation model in terms of comparing the content of the generated serialization with the ground truth report.* Serialization to Report: Conditioning on a strong image-to-serialization model, we evaluate LLM-based style injection by using the ground-truth RadGraphs as input to the LLM.* End-to-end report generation: We evaluate the pipeline end-to-end, feeding the image and clinical context through the content generation model and the generated serialization through the LLM.* Human style evaluation: To evaluate the style quality, we let physicians rate sets of 4 radiology reports where 3 were written by the same radiologists and 1 was AI-generated by our method following the style of the radiologist. The goal for the physicians is to detect the AI-generated report and to justify their choice.Baseline The baseline model is adapted from <cit.>, sharing the same architecture as our image-to-serialization model (Section <ref>), but its supervision target is the full report (Findings and Impressions) instead of serialization. The training involves an identical parameter set as above. Additional experimental details (training, hyperparameter search, and infrastructure) are provided in Section <ref>. § RESULTS §.§ Image to Serialization We train two content generation models, one with the full report as the supervision target, and the other with the serialized RadGraph. In table <ref>, we present the RadGraph F1 evaluation result on the MIMIC-CXR test set, comparing the outputs from both models against the ground truth full report.The comparison is suitable as RadGraph F1 is agnostic of general lexical similarity, measuring only overlap in radiology entities and relations extracted from the text. We find the model trained on the serialized RadGraph outperforms the model trained on the full report. This verifies our assumption that switching the supervision signal to the serialization would help focus the model on generating clinical content with greater accuracy and saliency. §.§ Serialization to ReportFor selecting examples, we draw randomly from the train split. This avoids patient overlap that could unfairly benefit performance on test cases. We compare the LLM-generated reports against the ground truth reports. Results are provided (Table <ref>) for varying numbers of in-context examples. We observe strong performance across all metrics, including a 0.722 RadGraph F1 mean in the zero-shot regime. Furthermore, lexical overlap metrics such as BLEU and BERT score saw noticeable improvement with more examples (20.2% and 15.7% increases, respectively, from 0 to 10 examples). This aligns with the aim of in-context learning to improve the style fidelity of generated reports.§.§ End-to-End Report GenerationWe evaluate the end-to-end performance from chest X-ray to report by concatenating the content generation step with the style generation step. The results are presented in Table <ref>. We observe that our two-step model surpasses the baseline (direct image-to-report model) in clinical accuracy metrics (CheXbert similarity, RadGraph F1, RadCliQ), even in the zero-shot style generation case. This illustrates greater accuracy in extracting radiology content from the chest X-ray, the central focus of separating content generation from style injection. Notably, RadCliQ is a composite metric found by <cit.> to best correlate with quality judgement of human radiologists. 
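For concreteness, the end-to-end pipeline evaluated in this subsection simply chains the two steps. The sketch below is schematic; every argument is a placeholder (the trained content generation model, a prompt builder following the appendix template, and an LLM completion call), not the exact interface used.

```python
# Schematic sketch of the end-to-end pipeline (all names are placeholders).
def generate_report_end_to_end(content_model, build_prompt, llm_complete,
                               images, indication, style_pairs):
    # Step 1: content generation - predict a serialized RadGraph from the
    # chest X-ray image(s) and the clinical indication text.
    serialization = content_model.generate(images, indication)
    # Step 2: style generation - the frozen LLM rewrites the serialization into a
    # report, conditioned on K (serialization, report) pairs in the target style.
    messages = build_prompt(style_pairs, serialization)
    return llm_complete(messages)
```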
The primary contribution of more examples is improving lexical and non-clinical NLG metrics (BLEU, BERT Score), similar to our findings in the Serialization → Report step. However, these scores remain slightly lower than those of the baseline. A potential explanation is that during training, the baseline is directly supervised with full reports, while only the extracted content is available to our content generation model, with the report itself synthesized by an external, pre-trained LLM. §.§ Human Style Evaluation Four board-certified radiologists were instructed to write chest X-ray reports in their usual style from 40 randomly selected chest X-ray images from the MIMIC-CXR test set (Figure <ref>). As several chest X-rays in the MIMIC-CXR dataset were within normal limits or interval follow-ups on intensive care patients, duplicate or near-duplicate chest X-ray reports were removed upon manual inspection. The style generation step described in Section <ref> was then used to produce AI-generated chest X-ray reports in the style of each of the four radiologists. From these radiologist-generated and AI-generated chest X-ray reports, we created 23 sets of four chest X-ray reports for style evaluation by physician evaluators (i.e., the target audience for written radiology reports). In each set, three reports (corresponding to three different chest X-rays) from the same radiologist, and one AI-generated report (from a fourth chest X-ray) in the style of that radiologist, were presented in random order. Three physician evaluators were asked to identify the AI-generated report out of the four reports and indicate whether report content, language, or report structure contributed to their choice (for more details, refer to Section <ref>). A one-sided Z-test was performed for the proportion of AI-generated reports correctly identified by the three evaluators, with a null hypothesis of 25%, corresponding to random chance, and an alternative hypothesis of >25%, corresponding to AI-generated reports being identified at a rate greater than that of random chance. Evaluators A, B, C correctly identified 5 out of 23 (21.7%), 5 out of 23 (21.7%), and 4 out of 23 (17.4%) AI-generated chest X-ray reports, respectively, for a mean accuracy of 20.3%. The one-sided Z-test produced p-values of 0.648, 0.648, and 0.832 for Evaluators A, B, and C respectively, and 0.835 when pooling all evaluators together. One potential explanation for why evaluators identified <25% of AI-generated reports (worse than random) is variance in radiologist style. For example, our radiologists would sometimes alternate between using parentheses to highlight the chest X-ray view (e.g., “Chest X-ray (PA and lateral views):”) and not using any parentheses (e.g., “Chest X-ray PA and lateral views:”). Human evaluators may use particular heuristics to call a report AI-generated when faced with similar-appearing reports. However, if these heuristics are not reflective of the truth, this may result in accuracy less than random. § DISCUSSION We presented a novel approach for radiology report generation that disentangles the report's content from its style. Our experiments showed that our method offers several advantages over the prevailing paradigm of direct image-to-report modeling. First, training a content extraction model to predict a serialized RadGraph representation from the input image helps focus the model on the clinically relevant content, which is reflected in improved performance metrics (Table <ref>).
Second, when concatenating the content generation step with the style injection step, we observe favourable performance compared to the direct image-to-report baseline (Table <ref>). Third, by in-context learning the radiologist-specific mapping from serialized RadGraph to report, our method enables the generation of high quality reports that are tailored to the individual radiologist to the degree of indiscernability using just a few example reports.To ultimately determine the clinical utility of our method, deployment studies will be an exciting venue for future work. Another promising direction would be to expand our two-step report generation paradigm to other modalities such as radiology mammograms and magnetic resonance imaging (MRI).§ LIMITATIONSA limitation of our approach is that it relies on the accuracy and effectiveness of the initially extracted RadGraph representation, as this serves as a key supervision signal in our model. Notably, RadGraphs are extracted at inference time using an automated model rather than manual expert labelling. The model achieves high performance in entity and relation extraction <cit.> but is susceptible to error, particularly with report inputs that contain rarer medical entities or ambiguous observations.Furthermore, due to our employed LLM being served by a third party (OpenAI), reproducing our results comes at the financial costs of using Azure OpenAI's service. Another consequence of relying on this service is that we cannot guarantee the deterministically exact reproduction of our findings, as the served LLM models may change and potentially degrade over time—for instance, if models are replaced by distilled versions thereof.§ ETHICS CONSIDERATIONSA principal ethical consideration is the de-identified, credentialed medical data we worked with. In particular, responsible usage policy of the MIMIC-CXR dataset <cit.> prohibits sharing access to third parties. This disqualifies the use of ChatGPT or large language model APIs for prompting models to generate radiology reports from our content representations. However, cloud-based services are allowed, including the Azure OpenAI platform, which we use for deploying and prompting the gpt-3.5-turbo model. A stipulation is the monetary service costs, which are counted at a rate per one thousand tokens. These can pose financial barriers to equitable access to high-end language models (which are typically more expensive), as well as usage at scale. Furthermore, as discussed in the Introduction section, the use of large language models is accompanied by their risks of “hallucination” and generating false or misleading content. These risks can be amplified in the critical setting of medical report generation. To mitigate them, we prompt the LLM not to synthesize medical content, but rather to rephrase existing content into readable, stylized prose. We assist the model through providing content serialization to report pairs as in-context examples. This is our style injection step, which we intentfully separate from the content generation step to reduce the opportunity for LLM hallucination when generating the full report prediction.§ ACKNOWLEDGEMENTSThis project was supported by AWS promotional credits. 
acl_natbib § APPENDIX §.§ Further experimental detailsIn the following, we provide additional details about our experimental setup including information about the model training and used infrastructures.Training The image-to-serialization model is trained and evaluated on the official train/test split of the MIMIC-CXR (v2.0.0) dataset with 213 501 and 2 799 chest X-ray reports, respectively. The ground-truth RadGraph serialization associated with each study is taken from the RadGraph dataset. The model is trained for 25 epochs, with a batch size of 16, a learning rate of 0.0001, an embedding size of 512, a weight decay of 0.001, a dropout rate of 0.1, and 12 transformer blocks in the decoder. Hyperparameter Search We optimized our hyperparameters for the content generation model through grid search with the help of WandB. Figure <ref> visualizes the performance of the model with a different set of hyperparameters. We search the learning rate from [0.001, 0.0001], embedding dimension from [256, 512], number of transformer decoder components from [6, 12], and weight decay rate from [0.001, 0.0001, 0].InfrastructureThe image-to-serialization model was trained on an AWS g5.2xlarge instance with one NVIDIA A10G Tensor Core GPU. It was trained for 25 epochs or roughly 50 hours. Evaluation was dispatched on two NVIDIA RTX A4000 GPUs with 16 GB of memory. We use the Azure OpenAI service to access the gpt-3.5-turbo language model, with a cloud-based deployment. §.§ Serialization to Report ResultsTable <ref> shows the quantitative results of our Serialization to report generation task as described in Section <ref>. §.§ Human Style EvaluationHere, we provide further results on the human style evaluation of the generated reports. To reiterate the approach, three physicians were tasked to identify an AI-generated report within a set of four radiology reports: three were written by the same radiologist and the fourth one (appearing in random order) was generated using our approach using in-context examples from the same radiologist.Figure <ref> shows the cumulative count of explanations stratified by clinical evaluator (rows), as well as by whether the selection was correct or incorrect (columns). Language (e.g., word choice, grammar, and/or writing style) was the primary heuristic that evaluators used to decide whether a report was human-generated or AI-generated, followed by content (e.g., missing important details or including extraneous details), and structure (e.g., different use of numbering or section headings).§.§ LLM PromptingHere, we illustrate a template of our dialogue-based prompt for the LLM to generate a report prediction, with 2-shot learning to adapt to a report style. For notation, <serialization i> and <report i> are placeholders for the i-th serialization and corresponding ground truth report, respectively, and <eval serialization> is the serialization predicted from an image in the evaluation set. * System: You are a helpful assistant that generates chest x-ray reports from key words.* User: Generate a chest x-ray report from the following key words: <serialization 1>* Assistant: <report 1>* User: Generate a chest x-ray report from the following key words: <serialization 2>* Assistant: <report 2>* User: Generate a chest x-ray report from the following key words: <eval serialization>The LLM will proceed to generate its report prediction from the evaluation serialization, which we compare against the corresponding ground truth report in the test set. 
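The template above can be assembled programmatically. The snippet below is an illustrative sketch rather than the exact code used in the study: the message strings follow the template verbatim, but the Azure deployment name is hypothetical and the API call is shown with the legacy openai SDK interface.

```python
# Illustrative assembly of the dialogue prompt template above.
import openai

SYSTEM_MSG = "You are a helpful assistant that generates chest x-ray reports from key words."
USER_TEMPLATE = "Generate a chest x-ray report from the following key words: {}"

def build_prompt(style_pairs, eval_serialization):
    """style_pairs: K (serialization, report) examples in the target radiologist's style."""
    messages = [{"role": "system", "content": SYSTEM_MSG}]
    for serialization, report in style_pairs:           # K in-context style examples
        messages.append({"role": "user", "content": USER_TEMPLATE.format(serialization)})
        messages.append({"role": "assistant", "content": report})
    messages.append({"role": "user", "content": USER_TEMPLATE.format(eval_serialization)})
    return messages

# Hypothetical call against an Azure OpenAI deployment of gpt-3.5-turbo:
# response = openai.ChatCompletion.create(engine="gpt-35-turbo",
#                                         messages=build_prompt(style_pairs, eval_serialization))
# predicted_report = response["choices"][0]["message"]["content"]
```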
§.§ Natural Language Generation MetricsBLEU-2 is widely used in machine translation tasks; it measures the similarity between a candidate translation and one or more reference translations by comparing their bigram overlaps. Although BLEU-2 is a fast and reliable metric, it possesses a few limitations, e.g., it does not take into account synonymous words and the proper use of grammar. We thus also adopt the BERT score, a recently proposed metric for assessing the quality of machine-generated text. It takes into account the semantic similarity between the generated text and reference text by calculating the contextual embeddings of both texts using BERT and measuring their cosine similarity. Nevertheless, BERT is a general-purpose metric and not at all optimized for capturing clinical semantics and radiological findings. This is why the included clinical accuracy metrics are most salient to our quantitative evaluation of radiology report generation.
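As a concrete illustration, both metrics have standard open-source implementations. The snippet below shows one way to compute them for a single generated report; the package choices and the toy sentences are ours, not necessarily those used in the study.

```python
# Toy example of computing BLEU-2 and BERTScore for one generated report.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from bert_score import score as bert_score

reference = "No focal consolidation. Heart size is normal."
candidate = "Heart size is normal and there is no focal consolidation."

# BLEU-2: equal weights on unigram and bigram precision (purely lexical overlap).
bleu2 = sentence_bleu([reference.split()], candidate.split(),
                      weights=(0.5, 0.5),
                      smoothing_function=SmoothingFunction().method1)

# BERTScore: cosine similarity of contextual embeddings (general, not clinical, semantics).
precision, recall, f1 = bert_score([candidate], [reference], lang="en")

print(f"BLEU-2 = {bleu2:.3f}, BERTScore F1 = {f1.item():.3f}")
```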
http://arxiv.org/abs/2310.17811v2
{ "authors": [ "Benjamin Yan", "Ruochen Liu", "David E. Kuo", "Subathra Adithan", "Eduardo Pontes Reis", "Stephen Kwak", "Vasantha Kumar Venugopal", "Chloe P. O'Connell", "Agustina Saenz", "Pranav Rajpurkar", "Michael Moor" ], "categories": [ "cs.AI", "cs.CL" ], "primary_category": "cs.AI", "published": "20231026230638", "title": "Style-Aware Radiology Report Generation with RadGraph and Few-Shot Prompting" }
Understanding Shape and Centroid Deviations in 39 Strong Lensing Galaxy Clusters in Various Dynamical States Raven [email protected] Matthew B. Bayliss1 Keren Sharon2 Guillaume Mahler3 Michael D. Gladders4 Håkon Dahle5 Michael K. Florian6 Jane R. Rigby7 Michael McDonald8 Lauren Elicker1 M. Riley Owens1January 14, 2024 =========================================================================================================================================================================================================================== Rapid advancements in artificial intelligence (AI) have sparked growing concerns among experts, policymakers, and world leaders regarding the potential for increasingly advanced AI systems to pose existential risks. This paper reviews the evidence for existential risks from AI via misalignment, where AI systems develop goals misaligned with human values, and power-seeking, where misaligned AIs actively seek power. The review examines empirical findings, conceptual arguments and expert opinion relating to specification gaming, goal misgeneralization, and power-seeking. The current state of the evidence is found to be concerning but inconclusive regarding the existence of extreme forms of misaligned power-seeking. Strong empirical evidence of specification gaming combined with strong conceptual evidence for power-seeking make it difficult to dismiss the possibility of existential risk from misaligned power-seeking. On the other hand, to date there are no public empirical examples of misaligned power-seeking in AI systems, and so arguments that future systems will pose an existential risk remain somewhat speculative. Given the current state of the evidence, it is hard to be extremely confident either that misaligned power-seeking poses a large existential risk, or that it poses no existential risk. The fact that we cannot confidently rule out existential risk from AI via misaligned power-seeking is cause for serious concern.§ EXECUTIVE SUMMARYConcerns that artificial intelligence could pose an existential risk are growing.This report reviews the evidence for existential risk from AI, focusing on arguments that future AI systems will pose an existential risk through misalignment and power-seeking:* Misalignment: Some capable AI systems will develop goals which are misaligned with human goals. * Specification gaming: Some capable AI systems will learn designer-specified goals which diverge from intended goals in unforeseen ways.* Goal misgeneralization: Some capable AI systems will develop goals which are perfectly correlated with intended goals in training, but diverge once the systems are deployed. 
* Power-seeking: Some capable, misaligned AI systems will seek power in order to achieve their goals.Our findings are based on a review of relevant literature, a series of interviews with AI researchers working on existential risk from AI <cit.>, and a https://wiki.aiimpacts.org/arguments_for_ai_risk/is_ai_an_existential_threat_to_humanity/database_of_empirical_evidence_about_ai_risknew database of empirical evidence for some claims about existential risk from AI <cit.>.We find that the current state of the evidence for existential risk from misaligned power-seeking is concerning but inconclusive.* There is strong empirical evidence of specification gaming and related phenomena, both in AI systems and other contexts, but it remains unclear whether specification gaming will be sufficiently extreme to pose an existential risk.* For goal misgeneralization, the evidence is more speculative. Examples of goal misgeneralization to date are sparse, open to interpretation, and not in themselves harmful. It’s unclear whether the evidence for goal misgeneralization is weak because it is not in fact a phenomenon which will affect AI systems, or because it will only affect AI systems once they are more goal-directed than at present.* There is also limited empirical evidence of power-seeking, but there are strong conceptual arguments and formal proofs which justify a stronger expectation that power-seeking will arise in some AI systems.Given the current state of the evidence, it is hard to be very confident either that misaligned power-seeking poses a large existential risk, or that it poses no existential risk.That we cannot confidently rule out existential risk from AI via misaligned power-seeking is cause for serious concern.§ INTRODUCTION Many claim that artificial intelligence could pose an existential risk - that AI could lead to human extinction, or to a catastrophe which destroys humanity’s potential.[Ord defines an existential catastrophe as “the destruction of humanity’s long-term potential” <cit.>.]Individual researchers have been making this claim for the last decade <cit.>. More recently, the number of voices raising concerns about existential risk from AI has grown. In May 2023, hundreds of experts signed an open letter stating that “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war” <cit.>. Politicians have also begun to speak about the need to manage existential risk. For example, the UK’s Science, Innovation and Technology Committee has identified “the existential challenge” of AI as “a major threat to human life” as one of twelve areas for policymakers to address <cit.>.The argument that AI could pose an existential risk has been well made elsewhere <cit.>. The increasing prominence of the argument that AI could pose an existential risk, combined with the growing evidence base for some aspects of this argument, make now a good time to review the strength of the evidence for existential risk from AI. §.§ Scope There are several different pathways to existential risk from AI.The 2023 UK AI Safety Summit focuses on two of these pathways:[Some scholars have also pointed out a third pathway to existential risk from AI, via multi-agent interactions. See <cit.>, and the https://acsresearch.org/researchAlignment of Complex Systems Research Group.] * “Misuse risks,[See <cit.> for an introduction to misuse risks, which they term ‘Malicious use”.] 
for example where a bad actor is aided by new AI capabilities in biological or cyber-attacks, development of dangerous technologies, or critical system interference”* “Loss of control risks that could emerge from advanced systems that we would seek to be aligned with our values and intentions” <cit.> A particular class of loss of control risks is risks from misaligned power-seeking <cit.>. The basic argument for existential risk from misaligned power-seeking is that:[See [sec:A]Appendix A for a discussion of the more detailed argument given in <cit.>.]* (Preconditions) In the not-too-distant future, some AI systems will be sufficiently capable to pose an existential risk.* (Misalignment) Some capable AI systems will develop goals which are misaligned with human goals.* (Power-seeking) Some capable, misaligned AI systems will seek power in order to achieve their goals.* (Existential consequences) This misaligned power-seeking will lead to human disempowerment, which will constitute an existential catastrophe. This report reviews the evidence for existential risk from future AI systems via misalignment and power-seeking.The following table breaks down the argument for existential risk from misaligned power-seeking further, and highlights the areas which are in the scope of this report. [sec:B]Appendix B gives a shallow review of the evidence for some further claims about existential risk from AI which are outside of the scope of this report. §.§ Methodology This report is based on: * A review of the relevant literature on misaligned power-seeking * A series of interviews with AI researchers working on existential risk from AIWe interviewed six AI researchers about the strength of the evidence for existential risk from AI. Summaries and recordings of some of the interviews can be found https://wiki.aiimpacts.org/arguments_for_ai_risk/is_ai_an_existential_threat_to_humanity/interviews_on_the_strength_of_the_evidence_for_ai_risk_claimshere.Note that the sample size is small, and we did not interview AI researchers who are skeptical of existential risk from AI.[ We didn’t have the resources to interview a representative sample, and decided that we would get the most relevant information from speaking with researchers who work on AI existential risk and so are familiar with the evidence.] * A new database of empirical evidence for some claims about existential risk from AIThe full database can be accessed https://wiki.aiimpacts.org/arguments_for_ai_risk/is_ai_an_existential_threat_to_humanity/database_of_empirical_evidence_about_ai_riskhere. It covers empirical evidence only, and includes evidence relating to specification gaming, goal misgeneralization and power-seeking (as well as deceptive alignment, self-improvement, and other claims relating to existential risk from AI).The database draws significantly from existing databases on https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtml?urp=gmail_link gxids=7628specification gaming <cit.> and https://docs.google.com/spreadsheets/d/e/2PACX-1vTo3RkXUAigb25nP7gjpcHriR6XdzA_L5loOcVFj_u7cRAZghWrYKH2L2nU4TA_Vr9KzBX5Bjpz9G_l/pubhtmlgoal misgeneralization <cit.>.§ A REVIEW OF THE EVIDENCE FOR EXISTENTIAL RISK FROM MISALIGNED POWER-SEEKINGMost of the AI existential risk researchers we interviewed regarded the evidence for misaligned power-seeking as at least somewhat speculative or uncertain. 
[ “The main best objection I get from really smart people on this is that most of the evidence is of a weaker or more speculative form than what we are used to using to evaluate policies, at least really expensive policies like the ones AI doomers are advocating. They basically say, if I believed you based on these sorts of arguments, I would also have to believe lots of other people saying crazy sounding things. And I think they’re right that this is actually a weaker form of evidence that’s easier to spoof.” [36:07]<cit.>“I think that evidence for goal-directedness and correspondingly power-seeking is weaker. There’s kind of a cluster of arguments that are based on systems being goal-directed, both real goal misgeneralization and intentional power-seeking, and so on. And that's something that we're more uncertain about… deceptive alignment is also part of that cluster because that also relies on the system developing more goal-directedness.” [56:25] <cit.>“The arguments about misalignment risk are definitely more uncertain in that they are doing more extrapolation. Both arguments are doing extrapolation. I think the misalignment stuff is sometimes doing a bit more of a difficult extrapolation, because it’s extrapolating these generalization properties which is just notoriously hard to do. I think that means that the case is just much more uncertain, but the case that the stakes are big is very good.” [47:16] <cit.> ] Below, we review the evidence for misaligned power-seeking, including both conceptual and empirical evidence. §.§ The strength of the empirical evidenceIn general, the empirical evidence is weaker than the conceptual arguments for these claims about existential risk from AI. This is discussed in the relevant sections, but there are also some general points to make about the relative weakness of empirical evidence for misaligned power-seeking. Firstly, there are other properties of AI systems which might prove to be preconditions of misaligned power-seeking, but which current systems have not yet attained. It is plausible that systems will only display misaligned power-seeking at higher levels of general capabilities for example,[“The story of you train an AI to fetch a coffee and then it realizes that the only way it can do that is to take over the world is a story about misgeneralization. And it's happening at a very high level of abstraction. You're using this incredibly intelligent system which is reasoning at a very high level about things and it's making the error at that high level... And I think the state of the evidence is… we've never observed a misgeneralization failure at such a high level of abstraction, but that's what we would expect because we don't have AIs that can even reason at that kind of level of abstraction.” [28:36] <cit.>] or that misaligned power-seeking requires a higher level of goal-directedness than current systems have.[“What I'm expecting is happening here is that current systems are not goal-directed enough to show real power-seeking. 
And so the power-seeking threat model becomes more reliant on these kind of extrapolations of when there are systems which are more capable, they'll probably be at least somewhat more goal-directed and then once we have goal-directedness, we can more convincingly argue that power-seeking is going to be a thing because we have theory and so on, but there's a lot of uncertainty about it because we don't know how much systems will become more goal-directed.” [54:35] <cit.>]Secondly, several of the AI researchers we interviewed clarified that the empirical evidence so far forms only a small or very small part of their reasons for concern about misaligned power-seeking, with more weight placed on conceptual arguments.[ “[Hadshar] Empirical details about capabilities that AI systems have now don’t sound very important to your world view. [Researcher] Exactly.” [30:08] <cit.>“I think that theoretical or conceptual arguments do have a lot of weight. Maybe I would put that at 60% and empirical examples at 40%, but I'm pulling this out of the air a little bit.” [24:00] <cit.>] §.§ The evidence for misalignment In this report, we consider two routes to capable AI systems developing goals which are misaligned with human goals: * Specification gaming,[ "Specification gaming is a behavior that satisfies the literal specification of an objective without achieving the intended outcome." <cit.>. Specification gaming is related to proxy gaming <cit.>, side effects <cit.>, reward gaming <cit.>, reward hacking <cit.>, reward misspecification <cit.>, and Goodhart’s law <cit.>.] where some capable AI systems learn designer-specified goals which diverge from intended goals in unforeseen ways.* Goal misgeneralization,["Goal misgeneralization is a specific form of robustness failure for learning algorithms in which the learned program competently pursues an undesired goal that leads to good performance in training situations but bad performance in novel test situations." <cit.>. Goal misgeneralization is related to goal drift <cit.> and distributional shift <cit.>. ] where some capable AI systems develop goals which are perfectly correlated with intended goals in training, but diverge once the systems are deployed. §.§.§ The evidence for specification gaming One route to AI systems developing misaligned goals is specification gaming, where AI systems learn the goals which they are given, but these goals are misspecified and come apart from intended goals."Specification gaming is a behavior that satisfies the literal specification of an objective without achieving the intended outcome." <cit.> If sufficiently powerful AI systems were to be deployed in high-stakes settings, then the difference between the literal specification and the intended outcome could become extreme, leading to catastrophic outcomes <cit.>.Specification gaming is a well-established phenomenon, both in general and in the context of AI systems. In non-AI contexts, there are numerous examples of variants of specification gaming,[ For discussions about a cluster of related concepts including Goodhart’s Law and proxy failure, see <cit.>.] in economics <cit.>, education <cit.>, healthcare <cit.> and other areas.[ See Table 1 in <cit.> for a collection of examples.] 
It is clear that at least in human and social systems, such dynamics are widespread.In the context of AI systems, there are both theoretical demonstrations of specification gaming given certain model assumptions <cit.>, and many empirical examples of specification gaming in AI systems, both in toy environments and in deployment <cit.>.[The https://docs.google.com/spreadsheets/d/e/2PACX-1vRPiprOaC3HsCf5Tuum8bRfzYUiKLRqJmbOoC-32JorNdfyTiRRsR7Ea5eWtvsWzuxo8bjOxCG84dAg/pubhtmldatabase linked to from this post contains over 70 examples of specification gaming. See also <cit.>.“One form of the problem has also been studied in the context of feedback loops in machine learning systems (particularly ad placement), based on counterfactual learning and contextual bandits. The proliferation of reward hacking instances across so many different domains suggests that reward hacking may be a deep and general problem, and one that we believe is likely to become more common as agents and environments increase in complexity.” <cit.>. “Reward hacking—where RL agents exploit gaps in misspecified reward functions—has been widely observed” <cit.>. ]For example, OpenAI trained an agent to play the game CoastRunners. The agent was rewarded for hitting targets along the course of a boat race. But instead of racing to the finish line, the agent discovered a loophole where it could race in a circle, repeatedly crashing and setting itself on fire, to earn maximum points <cit.>.While a majority of clear examples of specification gaming in AI systems arise in toy environments like CoastRunners <cit.>, there are already some examples of deployed AI systems engaging in specification gaming, and of this behavior leading to harm, particularly in the areas of bias and misinformation. For example, a healthcare screening system deployed in 2019 was trained to predict health care costs. As less is spent on Black patients’ care because of unequal access to healthcare, the algorithm rated Black patients as less sick than White patients even where Black patients had more underlying chronic illnesses <cit.>. Falsehoods generated by large language models can also be viewed as the result of specification gaming, though here the case is less clear. Language models trained to accurately predict the next token frequently generate false content <cit.>, but as one of our interviewees pointed out, it is a matter of judgment whether this is best interpreted as specification gaming or as a simple capability failure.[“With some of the language model examples, I think you can ask the question, is this really specification gaming, or is it capability failure, or something like that? I think sometimes there's a bit of a judgment call there.” [29:45] <cit.>]The evidence is strong that AI systems will be subject to specification gaming to some degree. It remains unclear whether specification gaming will be sufficiently serious to pose an existential risk. In order to cause large-scale harms, misspecified goals would need to be subtle enough that systems were still deployed in high-stakes settings, but diverge extremely from intended goals in deployment. To date, no examples of specification gaming in AI systems have been catastrophic, so there is no direct evidence of this degree of harm from specification gaming.There are some tentative signs that specification might become a more serious problem as models become more capable. 
In initial experiments, larger language models and language models with more RLHF are more prone to sycophantic answers, and to expressing a desire to seek power and avoid shutdown <cit.>. Insofar as these behaviors are indeed caused by specification gaming,[That is, the systems are following the specified goal of generating text which receives high positive feedback from humans, but this comes apart from the goal of generating helpful, honest and harmless text. See also <cit.>.] this is cause for concern. Another study has found that when goals are misspecified, more capable RL agents will diverge more from intended goals than less capable agents, suggesting that specification gaming may worsen as capabilities improve. The same study also found that the divergence between intended and misspecified goals was sometimes very sudden, which might make it hard to anticipate and prevent such problems arising in deployment <cit.>. Overall, the evidence for specification gaming is strong, though it remains unclear whether the scale of the problem will be sufficient to pose an existential risk. §.§.§ The evidence for goal misgeneralization Another route to AI systems developing misaligned goals is goal misgeneralization, where systems develop goals which are perfectly correlated with intended goals in training, but diverge once the systems are deployed. "Goal misgeneralization is a specific form of robustness failure for learning algorithms in which the learned program competently pursues an undesired goal that leads to good performance in training situations but bad performance in novel test situations." <cit.> The underlying mechanism behind goal misgeneralization is distributional shift, where there are systematic differences between the training distribution and the test distribution. Distributional shift is a very widely documented phenomenon in AI systems <cit.>, and out-of-distribution robustness remains unsolved <cit.>. This provides a reason to expect goal misgeneralization to arise. However, the empirical evidence for goal misgeneralization is currently weak, in spite of the prevalence of distributional shift. There are examples of goal misgeneralization in AI systems <cit.>. However, these examples do not conclusively show that goal misgeneralization will arise in a harmful way. Firstly, all of the examples of goal misgeneralization we have found take place in demonstrations, rather than in deployed systems. Sometimes these demonstrations involve very obvious and crude differences between the training data and the test data. For instance, <cit.> train an agent exclusively on mazes where the cheese is always in the upper right-hand corner, and show in testing that the agent learns to navigate to the upper right rather than to the cheese. This shows that goal misgeneralization can occur when the training data is very different from the test data, but it doesn't provide evidence for goal misgeneralization in more realistic settings. We have not found any evidence of real-world harm from goal misgeneralization so far. Secondly, it is currently not possible to demonstrate conclusively that examples of goal misgeneralization actually involve systems learning a goal which is correlated with the intended goal in training but not in deployment. It is only possible to observe the behavior of the system in question, not its inner workings, so we cannot know what goal (if any) a system has learned.
Examples to date only conclusively show behavioral or functional goal misgeneralization.[ “I think right now the examples we have are more like behavioral goal misgeneralization where you just have different behaviors that are all the same in training but then they become decoupled in the new setting but we don't know how the behavior is going to generalize. We call it goal misgeneralization maybe more as a shorthand. The behavior has different ways of generalizing that are kind of coherent. We can present it as the system learned the wrong goal, but we can't actually say that it has learned a goal. Maybe it’s just following the wrong heuristic or something. I think the current examples are a demonstration of the more obvious kind of effect where the training data doesn't distinguish between all the ways that the behavior could generalize.” [37:11] <cit.> ]Furthermore, it’s often hard to distinguish goal misgeneralization from capability misgeneralization, where the system’s capabilities also fail to generalize.[“I think it's a less well understood phenomenon… it can be hard to distinguish capability misgeneralization from goal misgeneralization.” [33:16] <cit.> ] In the abstract, goal misgeneralization is distinct from capability misgeneralization: “a system’s capabilities generalize but its goal does not generalize as desired. When this happens, the system competently pursues the wrong goal.” <cit.> But in real-world settings, the wrong goal may often lead to capability failure. A system which learns to competently predict that tumors with rulers are malignant based on its training data will fail to competently predict actual malignancy when tested on more diverse data <cit.>. Insofar as goal misgeneralization comes with capability misgeneralization, AI systems which learn very misgeneralized goals are unlikely to be deployed.There are several possible explanations of the weakness of evidence on goal misgeneralization so far.Goal misgeneralization might require a level of goal-directedness which current systems don’t yet have,[“Specifying something as goal misgeneralization also requires some assumption that the system is goal-directed to some degree and that can also be debatable.” [33:16] <cit.>] or an ability to reason at higher levels of abstraction.[“The story of you train an AI to fetch a coffee and then it realizes that the only way it can do that is to take over the world is a story about misgeneralization. And it's happening at a very high level of abstraction. You're using this incredibly intelligent system which is reasoning at a very high level about things and it's making the error at that high level... And I think the state of the evidence is… we've never observed a misgeneralization failure at such a high level of abstraction, but that's what we would expect because we don't have AIs that can even reason at that kind of level of abstraction.” [28:36] <cit.>] Reliably identifying goal misgeneralization might also require more advanced interpretability techniques.[“The mechanism is a lot less well understood. 
I think to really properly diagnose goal misgeneralization we would need better interpretability tools.” [36:30] <cit.>] Alternatively, the distinction between behavioral and ‘actual’ goal misgeneralization may be misplaced: if sufficiently capable systems engage in behaviors which look like goal misgeneralization, then functionally they are misaligned whether or not their internal representations match our description of goal misgeneralization.So there are some reasons to expect the current evidence of goal misgeneralization to be weak, even if the phenomenon eventually arises strongly. Nevertheless, so far the evidence for goal misgeneralization remains reasonably speculative.[“I think [the evidence for goal misgeneralization] is not as strong [as for specification gaming].” [33:16]<cit.>“These generalization failures at new levels of abstraction are notoriously hard to predict. You have to try and intuit what an extremely large scale neural net will learn from the training data and in which ways it will generalize… I’m relatively persuaded that misgeneralization will continue to happen at higher levels of abstraction, but whether that actually is well described by some of the typical power-seeking stories I’m much less confident and it’s definitely going to be a judgment call.” [28:36] <cit.> ] §.§ The evidence for power-seeking The presence of misaligned goals in and of itself need not pose an existential risk. But if AI systems with misaligned goals successfully and systematically seek power, the result could be existential.In <cit.>, power-seeking is defined as “active efforts by an AI system to gain and maintain power in ways that designers didn’t intend, arising from problems with that system’s objectives." Carlsmith loosely defines power as “the type of thing that helps a wide variety of agents pursue a wide variety of objectives in a given environment.” <cit.> We can take Bostrom’s categories of instrumental goals as illustrative of this “type of thing”:* Self-preservation* Goal-content integrity[“An agent is more likely to act in the future to maximize the realization of its present final goals if it still has those goals in the future. This gives the agent a present instrumental reason to prevent alterations of its final goals.” <cit.>]* Cognitive enhancement* Technological perfection[“An agent may often have instrumental reasons to seek better technology, which at its simplest means seeking more efficient ways of transforming some given set of inputs into valued outputs.” <cit.>]* Resource acquisition <cit.> The conceptual argument that some AI systems will seek power seems strong.[“I think some of the other theoretical arguments like instrumental convergence also generally seems like a very clear argument, and we can observe some of these effects in human systems and corporations and so on.” [25:23] <cit.>] Bostrom’s instrumental convergence thesis is simple and intuitively plausible: “as long as they possess a sufficient level of intelligence, agents having any of a wide range of final goals will pursue similar intermediary goals because they have instrumental reasons to do so.” <cit.>There are formal proofs that the instrumental convergence thesis holds for various kinds of AI systems. <cit.> prove that “most reward functions make it optimal to seek power by keeping a range of options available” in the context of Markov decision processes. 
<cit.> extend this result to a class of sub-optimal policies, showing that “many decision-making functions are retargetable, and that retargetability is sufficient to cause power-seeking tendencies”. <cit.> further show that agents which learn a goal are likely to engage in power-seeking.The formal and theoretical case for power-seeking in sufficiently capable and goal-directed AI systems is therefore relatively strong.However, the empirical evidence of power-seeking in AI systems is currently weak. There are some demonstrations of RL agents engaging in power-seeking behaviors in toy environments (for example, <cit.>), but no convincing examples of AI systems in the real world seeking power in this way to date.[“I don’t think there’s really empirical evidence [for power-seeking]... To me it’s very uncertain.” [28:36] <cit.>]<cit.> show language models giving “answers that indicate a willingness to pursue potentially dangerous subgoals: resource acquisition, optionality preservation, goal preservation, powerseeking, and more." But indicating willingness is not the same as actually engaging in power-seeking behaviors. Language models might express power-seeking desires merely because their training data contains similar text, and not because they will ever directly seek power.Sycophancy, where language models agree with their users regardless of the accuracy of the statements, could be taken as an example of power-seeking behavior. But as with the results of <cit.>, sycophancy is likely to be simply an imitation of the training data, rather than an intentional behavior.[“Looking at current systems, sycophancy can be considered as a form of power-seeking. Although I think that's also maybe debatable. It's building more influence with the user by agreeing with their views, but it's probably more of a heuristic that is just somehow reinforced than intentional power-seeking.” [49:35] <cit.> ]If the theoretical arguments for power-seeking are strong, why is the empirical evidence to date weak?As with goal misgeneralization, one plausible explanation is that power-seeking behavior depends on a level of goal-directedness or capability in general which current models don’t yet have.[“What I'm expecting is happening here is that current systems are not goal-directed enough to show real power-seeking. And so the power-seeking threat model becomes more reliant on these kind of extrapolations of when there are systems which are more capable, they'll probably be at least somewhat more goal-directed and then once we have goal-directedness, we can more convincingly argue that power-seeking is going to be a thing because we have theory and so on, but there's a lot of uncertainty about it because we don't know how much systems will become more goal-directed.” [54:35] <cit.>]Overall, with strong conceptual arguments but no public empirical evidence, it seems plausible but unproven that some AI systems will seek power.§ CONCLUSION: THE CURRENT STRENGTH OF THE EVIDENCE FOR EXISTENTIAL RISK FROM MISALIGNED POWER-SEEKING The current state of the evidence for existential risk from misaligned power-seeking is concerning but inconclusive.There is strong empirical evidence of specification gaming and related phenomena, both in AI systems and other contexts. 
We can be reasonably confident therefore that specification gaming will arise to some extent in future AI systems, but it remains unclear whether specification gaming will be sufficiently extreme to pose an existential risk.

For goal misgeneralization, the evidence is more speculative. Distributional shift, which is a prerequisite of goal misgeneralization, is a well-documented phenomenon, but the examples of goal misgeneralization to date are sparse, open to interpretation, and not in themselves harmful. It's unclear whether there is weak evidence for goal misgeneralization because it is not in fact a phenomenon which will affect AI systems to a harmful degree, or because it will only affect AI systems once they are more goal-directed than at present.

There is also limited empirical evidence of power-seeking, but there are strong conceptual arguments and formal proofs which justify a stronger expectation that power-seeking will arise in some AI systems.

Strong empirical evidence of specification gaming combined with strong conceptual arguments for power-seeking make it difficult to dismiss the possibility of existential risk from misaligned power-seeking. On the other hand, we are not aware of any empirical examples of misaligned power-seeking in AI systems, and so arguments that future systems will pose an existential risk must remain somewhat speculative.

Given the current state of the evidence, it is hard to be extremely confident either that misaligned power-seeking poses a large existential risk, or that it poses no existential risk. That we cannot confidently rule out existential risk from AI via misaligned power-seeking is cause for serious concern.

§ ACKNOWLEDGEMENTS

Thanks to Katja Grace and Harlan Stewart in particular; to Michael Aird, Adam Bales, Rick Korzekwa, Fazl Barez, Sam Clark, Max Dalton, and many others for various levels of feedback and support; and to all the researchers we interviewed.

§ REFERENCES

§ APPENDIX A: CARLSMITH'S ARGUMENT FOR EXISTENTIAL RISK VIA POWER-SEEKING AI

The following table maps between the premises of <cit.>'s argument, and the claims used in this report (see Table 1). Claims within the scope of this report are bolded. Note that the claims used in this report are not identical to Carlsmith's premises, though they are closely related.

Carlsmith | Claims used in this report
By 2070: | (Preconditions: Timelines) The relevant AI systems will be developed in the not-too-distant future.
1. It will become possible and financially feasible to build AI systems with the following properties: |
Advanced capability: they outperform the best humans on some set of tasks which when performed at advanced levels grant significant power in today's world (tasks like scientific research, business/military/political strategy, engineering, and persuasion/manipulation). | (Preconditions: Capabilities) Some AI systems will be highly capable, in the sense that they are able to perform many important tasks at or above human level.
Agentic planning: they make and execute plans, in pursuit of objectives, on the basis of models of the world. | (Preconditions: Goal-directedness) Some AI systems will be goal-directed, in that they pursue goals consistently over long time periods.
Strategic awareness: the models they use in making plans represent with reasonable accuracy the causal upshot of gaining and maintaining power over humans and the real-world environment. (Call these "APS"—Advanced, Planning, Strategically aware—systems.) | (Preconditions: Situational awareness) Some AI systems will be aware that they are AI systems, and whether they are in training or deployment.
2. There will be strong incentives to build and deploy APS systems. |
3. It will be much harder to build APS systems that would not seek to gain and maintain power in unintended ways (because of problems with their objectives) on any of the inputs they'd encounter if deployed, than to build APS systems that would do this, but which are at least superficially attractive to deploy anyway. | (Misalignment) Some capable AI systems will develop goals which are misaligned with human goals. (Misalignment: Specification gaming) Some capable AI systems will learn designer-specified goals which diverge from intended goals in unforeseen ways. (Misalignment: Goal misgeneralization) Some capable AI systems will develop goals which are perfectly correlated with intended goals in training, but diverge once the systems are deployed.
4. Some deployed APS systems will be exposed to inputs where they seek power in unintended and high-impact ways (say, collectively causing >$1 trillion dollars of damage), because of problems with their objectives. | (Power-seeking) Some capable, misaligned AI systems will seek power in order to achieve their goals.
5. Some of this power-seeking will scale (in aggregate) to the point of permanently disempowering all of humanity. | (Existential consequences: Disempowerment) This misaligned power-seeking will lead to permanent human disempowerment.
6. This disempowerment will constitute an existential catastrophe. | (Existential consequences: Existential catastrophe) Permanent human disempowerment will constitute an existential catastrophe.

§ APPENDIX B: SOME EVIDENCE FOR OTHER CLAIMS ABOUT EXISTENTIAL RISK FROM AI

We systematically reviewed the evidence for claims about misalignment and power-seeking. However, in the course of our research and interviews, we came across some evidence for other relevant claims. This appendix contains some of the evidence for goal-directedness, situational awareness, and deceptive alignment. It should not be treated as a comprehensive review of the state of the evidence on these topics.

§.§ Some evidence for goal-directedness

Roughly, goal-directedness refers to a property of AI systems to persistently pursue a goal. [In <cit.>, goal-directedness is referred to as "agentic planning", where AI systems "make and execute plans, in pursuit of objectives, on the basis of models of the world."] Goal-directedness has not been well-defined so far, and so reviewing the evidence for goal-directedness is hampered by unclarity about the concept. [ "Right now it's really hard to distinguish between real goal-directedness and learned heuristics… I think part of the problem with goal-directedness is we don't really understand the phenomenon that well." [44:00] <cit.>]

That said, it seems plausible that goal-directedness is a direct precondition for goal misgeneralization and for power-seeking,["Specifying something as goal misgeneralization also requires some assumption that the system is goal-directed to some degree and that can also be debatable." [33:16] <cit.> "What I'm expecting is happening here is that current systems are not goal-directed enough to show real power-seeking.
And so the power-seeking threat model becomes more reliant on these kind of extrapolations of when there are systems which are more capable, they'll probably be at least somewhat more goal-directed and then once we have goal-directedness, we can more convincingly argue that power-seeking is going to be a thing because we have theory and so on, but there's a lot of uncertainty about it because we don't know how much systems will become more goal-directed.” [54:35] <cit.>] so it is an important claim to assess.Coherence theorems offer one kind of conceptual evidence for goal-directedness, but the extent to which they apply to future AI systems is contested <cit.>.[“Some of the theoretical arguments make the case that goal-directedness is an attractor. I think that's something that's more debatable, less clear to me. There have been various discussions on LessWrong and elsewhere about to what extent do coherence arguments imply goal-directedness. And I think the jury is still out on that one.” [42:36] <cit.>]There is limited empirical evidence of goal-directedness in systems so far.[“I think the evidence so far at least for language models, there isn't really convincing evidence of goal-directedness.” [44:00] <cit.>] One of the researchers we interviewed noted that language models may be particularly unsuited to goal-directedness.[ “It’s also possible goal-directedness is kind of hard. And especially, maybe language models are just a kind of system where goal-directedness comes less naturally than other systems like reinforcement learning systems or even with humans or whatever.” [40:26] <cit.>]However, individual researchers we interviewed believe that:* To the extent that language models can simulate humans, they will have the ability to simulate goal-directedness.[ “I think generally the kind of risk scenarios that we are most worried about would involve the system acting intentionally and deliberately towards some objectives but I would expect that intent and goal-directedness comes in degrees and if we see examples of increasing degrees of that then I think that does constitute evidence of that being possible. Although it’s not clear whether it will go all the way to really deliberate systems, but I think especially to the extent that these systems can simulate humans… they have the ability to simulate deliberate intentional action and planning because that's something that humans can do.” [20:20] <cit.>]* There is a clear trend towards systems acting more autonomously.[ “We are already capable of getting AI systems to do simple things relatively autonomously. I don’t think it’s a threshold where now it’s autonomous, now it’s not… I think it’s a spectrum and it’s just very clearly ramping up. We already have things that have a little autonomy but not very much. I think it's just a pretty straightforward trend at this point.” [24:39] <cit.>]One researcher we interviewed highlighted goal-directedness as one of their key uncertainties about existential risk from AI.[ “I think we might see more goal-directed systems which produce clearer examples of internal goal misgeneralization, but also I wouldn't be that surprised if we don't see that. I think that's one of the big uncertainties I have about level of risk. 
How much can we expect goal-directedness to emerge?” [40:26] <cit.>] §.§ Some evidence for situational awareness “A model is situationally aware if it's aware that it's a model and can recognize whether it's currently in testing or deployment.” <cit.>This is important to arguments about existential risk from AI as situational awareness is plausibly a precondition for successful misaligned power-seeking: a model may need to understand its own situation at a sophisticated level in order to make plans which successfully disempower humans. In particular, situational awareness seems like a precondition for deceptive alignment.There is some empirical work demonstrating situational awareness in large language models, but the results are inconclusive <cit.>. <cit.> find that language models can perform out-of-context reasoning tasks, but only with particular training set ups and data augmentation. <cit.> run various experiments to test awareness, and find that “the models we evaluate are not aware of at least some basic details regarding themselves or their training procedures.” On the other hand, <cit.> use the same questions as <cit.> but find that their model answers 85% accurately.
http://arxiv.org/abs/2310.18244v1
{ "authors": [ "Rose Hadshar" ], "categories": [ "cs.CY", "cs.AI" ], "primary_category": "cs.CY", "published": "20231027162945", "title": "A Review of the Evidence for Existential Risk from AI via Misaligned Power-Seeking" }
http://arxiv.org/abs/2310.18213v1
{ "authors": [ "Fabricio Toscano", "Diego G. Bussandri", "Gustavo M. Bosyk", "Ana P. Majtey", "Mariela Portesi" ], "categories": [ "quant-ph" ], "primary_category": "quant-ph", "published": "20231027154112", "title": "Optimal quantum teleportation protocols for fixed average fidelity" }
Marvin Morgan [email protected] 0000-0003-4022-6234]Marvin Morgan Department of Astronomy, The University of Texas at Austin, Austin, TX 78712, USA0000-0003-2649-2288]Brendan P. Bowler Department of Astronomy, The University of Texas at Austin, Austin, TX 78712, USA0000-0001-6532-6755]Quang H. Tran Department of Astronomy, The University of Texas at Austin, Austin, TX 78712, USA0000-0003-0967-2893]Erik Petigura Department of Physics and Astronomy, University of California Los Angeles, Los Angeles, CA 90095, USA0000-0001-5909-4433]Vighnesh Nagpal Department of Astronomy, University of California, Berkeley, CA 94720, USA0000-0002-3199-2888]Sarah Blunt Department of Astronomy, California Institute of Technology, Pasadena, CA, USA Transiting giant planets provide a natural opportunity to examine stellar obliquities, which offer clues about the origin and dynamical histories of close-in planets. Hot Jupiters orbiting Sun-like stars show a tendency for obliquity alignment, which suggests that obliquities are rarely excited or that tidal realignment is common.However, the stellar obliquity distribution is less clear for giant planets at wider separations where realignment mechanisms are not expected to operate. In this work, we uniformly derive line-of-sight inclinations for 47 cool stars (T_eff < 6200 K) harboring transiting hot and warm giant planets by combining rotation periods, stellar radii, and v sin i measurements. Among the systems that show signs of spin-orbit misalignment in our sample, three are identified as being misaligned here for the first time. Of particular interest are Kepler-1654, one of the longest-period (1047 d; 2.0 AU) giant planets in a misaligned system, and Kepler-30, a multi-planet misaligned system. By comparing the reconstructed underlying inclination distributions, we find that the inferred minimum misalignment distributions of hot Jupiters spanning a/R_* = 3–20 (≈ 0.01–0.1 AU) and warm Jupiters spanning a/R_* = 20–400 (≈ 0.1–1.9 AU) are in good agreement. With 90% confidence, at least 24^+9_-7% of warm Jupiters and 14^+7_-5% of hot Jupiters around cool stars are misaligned by at least 10^∘. Most stars harboring warm Jupiters are therefore consistent with spin-orbit alignment. The similarity of hot and warm Jupiter misalignment rates suggests that either the occasional misalignments are primordial and originate in misaligned disks, or the same underlying processes that create misaligned hot Jupiters also lead to misaligned warm Jupiters.§ INTRODUCTIONOur Solar System contains two gas giants and two ice giants on coplanar and near-circular orbits at distances beyond 5 AU. The discovery of 51 Peg b, a 0.5 M_Jup planet on a 4.5-day orbit (), and several other early planetary detections (; ; ), planted the first seeds of a blossoming new field of astronomy. These discoveries overhauled the established understanding of planetary formation, migration, and orbital architectures. It is now clear from planet period and eccentricity distributions that substantial orbital evolution is common, and perhaps even ubiquitous among giant planets (). The giant planets in our Solar System reside outside of the “water ice line," the location in a protoplanetary disk where water condenses into solid ice. The solar ice line is currently located in the asteroid belt at ∼2–3 AU (). Beyond this region, rapid planetesimal and core growth is facilitated, which results in more efficient assembly of giant planets (). 
Long-baseline radial velocity (RV) surveys have found that giant planets are prevalent at orbital distances of 1–10 AU compared to orbits interior or exterior of this range (; ). Direct imaging surveys have also found similar results that show giant planets are less abundant at wider orbital distances (; ; ).Although the occurrence rate of giant planets appears to peak beyond the location of the water ice line, there remains a significant population of gas giants at closer separations. The origin and evolution of these Jovianplanets within ∼ 2 AU of Sun-like stars has proven to be challenging to observationally constrain. Several mechanisms have been proposed to explain the presence of giant planets interior to the water ice line. Early interactions with the protoplanetary disk can result in inward migration (;;). Some gas giants at close separations may have formed in situ if favorable conditions are met (). Kozai-Lidov (KL) oscillations with an outer companion represent another viable mechanism for giant planets to migrate inward when coupled with high-eccentricity tidal damping (; ; ; ). When eccentricities are excited and periastron distances shrink during these oscillations, tidal friction can dissipate orbital energy and circularize the planet's orbit, breaking the KL cycles and freezing the planet’s orbital parameters (; ; ).Planet-planet scattering can also trigger high eccentricity migration through a secular or chaotic exchange of angular momentum between planets (; ;;).High-eccentricity tidal migration driven by KL oscillations or planet-planet scattering is a leading pathway to produce hot Jupiters (; ). However, this process can only occur if the planet passes close enough to its host star to gravitationally interact with the stellar envelope. Warm Jupiters, situated beyond ∼ 0.1 AU, are too far from their host star to raise dissipative tides. <cit.> and <cit.> place an upper limit on KL oscillations as a viable migration mechanism for most hot and warm Jupiters due to a lack of observed highly-eccentric proto-hot Jupiters by Kepler. Proto-hot Jupiters orbiting bright, more metal-rich, nearby stars observed by TESS, such as TOI-3362 b, may not be as uncommon ().The relative alignment of the stellar rotation axis and the planetary orbital plane can provide complementary insight into inward giant planet migration processes of warm Jupiters. A variety of mechanisms can misalign these rotational and orbital angular momentum vectors during or after the era of giant planet formation. Torques induced from binary companions and primordial disk structures can cause the misalignment of the stellar spin axis. <cit.> found that primordial misalignments might be produced by stellar flybys that occur during the epoch of planet formation. In this scenario, stellar companions can induce torques on protoplanetary disks, which can give rise to spin-orbit misalignments with any planets that eventually form (; ; ). <cit.> found that broken and misaligned disks are capable of torquing the spin axis of their host star. Interactions between stellar magnetic fields and circumstellar disks may also be able to generate a broad distribution of spin-orbit angles ().To date, most obliquity measurements have been constrained from the Rossiter–McLaughlin (RM) effect which measures the sky-projected spin–orbit angle between a star’s equatorial plane and a transiting planet's orbital plane, λ (; ). 
HD 209458 b was the first exoplanet for which this phenomenon was reported (), laying the foundation for over 100 additional measurements (). Most RM measurements have been obtained for hot Jupiters as they have frequent transits, large RM-induced RV amplitudes, and favorable geometric transit probabilities. RM measurements of hot Jupiters have revealed that misalignments are common around hot stars but less frequent around cool stars below the Kraft break (T_eff < 6200 K), which might be a result of tidal realignment (; ; ; ).In contrast, few RM measurements have been obtained for long-period transiting warm Jupiters due to a combination of their infrequent transits, small geometric transit probabilities, and long transit durations.[HIP 41378 d, a Neptune-sized transiting exoplanet with an orbital period of 278 days, is the longest-period planet of any size with an RM measurement ().] The longest-period giant planet for which the RM effect has been measured is HD 80606 b, a transiting warm Jupiter with an orbital period of 111.44 days and a/R_* = 94.64 (; ). [TOI-1859 b (a/R_* = 53.7) is the second-longest-period giant planet for which the RM effect was measured ().] Moving to larger orbital distances provides unique constraints on migration channels as it removes the possibility for tidal circularization, realignment, and synchronization and thus probes alternative migration and misalignment mechanisms.<cit.> found evidence that in single-star systems, warm Jupiters may be preferentially more aligned than hot Jupiters. They attribute this to differences in the formation and migration of hot and warm Jupiters. However, <cit.> found hints of an opposite trend, where hot Jupiters are mostly consistent with alignment while warm Jupiters in their sample have significant misalignments. More recent studies have usedstarspot-induced amplitudes to identify a correlation between increased misalignment with orbital separation moving outward to 50-day orbital periods (; ). If RM measurements are not available, a lower limit on the true obliquity, ψ, can be determined using the inclination of the host star in combination with the inclination of a transiting planet(; ; ). In this work, we investigate the minimum inferred stellar obliquities of stars hosting transiting giant planets beyond 0.1 AU. Minimum misalignment distributions are inferred from homogeneous and self-consistent measurements of i_*, the line-of-sight stellar spin inclination. Together with knowledge of the transiting planet's orbital geometry, this provides information about ψ. Here, we explore a simple question: are warm Jupiter host stars misaligned at similar ratesas hot Jupiter host stars? Establishing whether these two populations are similar or distinct can provide valuable clues about giant planet inward migration timescales and mechanisms.This paper is organized as follows. In Section <ref> we discuss our target selection criteria and describe how we construct our hot and warm Jupiter samples. In Section <ref> we describe our process of measuring individual and population-level stellar inclination distributions. In Section <ref> we present our results and discuss interpretations of the hot and warm Jupiter stellar obliquity distributions. Next, we describe individual misaligned systems in Section <ref>. Finally, we summarize our conclusions in Section <ref>.§ TARGET SELECTION AND ROTATION PERIODS Our sample of warm Jupiters originates from the NASA Exoplanet Archive (), as of August 2022. 
We selected transiting planets with either a measured minimum mass of m_p sin i = 0.3–13 M_Jup, or a radius R_p > 8 R_⊕, ensuring that low-mass brown dwarfs and sub-Jovian sized planets are excluded. We then isolated systems with scaled orbital distances of a/R_* > 20 to probe giant planets with semi-major axes ≳ 0.1 AU. For a Sun-like star with a field age of several Gyr, a separation of a/R_* > 20 corresponds to a realignment timescale greater than the system age and a circularization timescale of > 1 Gyr (; ; ; ; ). This cut in a/R_* reflects where warm Jupiters are expected to be largely undisturbed by tidal forces, in comparison to hot Jupiters which may have experienced more dynamically violent tidal migration and spin-orbit realignment ().[Note that a threshold of a/R_* = 20 can include giant planets with orbital periods under 10 days depending on the mass and radius of the host star.] This resulted in 110 transiting warm Jupiters orbiting 104 host stars. These systems represent our initial sample to measure stellar inclinations and minimum stellar obliquities, which is only possible for a subset of stars with rotation periods and projected rotational velocities.When available, we analyze TESS, Kepler, and K2 light curves of warm Jupiter host stars in our sample to uniformly determine rotation periods.To avoid confusion between pulsations from γ Dor variables and rotation periods from spot-driven modulations, only stars with spectral types of F5 or later are considered here. We use the<cit.> software package to search for and download all available 30-minute Pre-search Data Conditioning Simple Aperture Photometry <cit.>, 30-minute K2 extracted light curves <cit.>, and 2-minute cadence TESS Science Processing Operations Center (SPOC) PDCSAP <cit.> light curves for each target from the Mikulski Archive for Space Telescopes (MAST) data archive.[http://archive.stsci.edu/kepler/data_search/search.php http://archive.stsci.edu/kepler/data_search/search.php, https://archive.stsci.edu/k2/data_search/search.phphttps://archive.stsci.edu/k2/data_search/search.php https://archive.stsci.edu/missions-and-data/tesshttps://archive.stsci.edu/missions-and-data/tess/] Individual Kepler quarters, K2 campaigns, and TESS sectors are then normalized and stitched together to create the final light curves (see the Notes section of Table <ref> for details).[Some of the data presented in this paper were obtained from MAST at the Space Telescope Science Institute and can be accessed via [10.17909/xk84-vr57]https://doi.org/10.17909/xk84-vr57.] Finally, flares, transits, and other outliers are removed by running a high-pass Savitzky-Golay filter <cit.> through the light curve and selecting data points lying within three sigma of the photometric average.For each normalized light curve, we compute a Generalized Lomb-Scargle periodogram <cit.> over the frequency range 0.0005-100.0 d^-1 (0.01-2000 days) to search for any rotational modulation. Periods and uncertainties were measured by fitting a Gaussian to the highest periodogram peak. In the case where there is a large envelope resulting from fringe patterns in the GLS periodogram, the Gaussian was fit to the total curve in order to reflect that spread. Targets with a single strong peak, whose phase-folded light curves showed clear periodicity, and amplitudes of both the periodogram and rotational modulation were large are considered to have reliable period measurements. 
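To make the processing steps above concrete, the following is a minimal sketch of this kind of rotation-period measurement in Python. It assumes the community lightkurve, scipy, and astropy packages, which may differ from the exact tools cited in the text, and the target name, search options, filter window, and frequency grid are illustrative choices rather than the settings actually used for the sample.

```python
import numpy as np
import lightkurve as lk
from scipy.signal import savgol_filter
from astropy.timeseries import LombScargle

# Download and stitch all available Kepler long-cadence PDCSAP light curves
# for one illustrative host star (stitch() normalizes each quarter before joining).
search = lk.search_lightcurve("Kepler-63", author="Kepler", cadence="long")
lc = search.download_all().stitch().remove_nans()

# Smooth the flux with a Savitzky-Golay filter and keep only points within
# 3 sigma of the smoothed curve, as a stand-in for flare/transit/outlier removal.
flux = lc.flux.value
smooth = savgol_filter(flux, window_length=101, polyorder=2)
keep = np.abs(flux - smooth) < 3.0 * np.nanstd(flux - smooth)
time, flux = lc.time.value[keep], flux[keep]

# Floating-mean (generalized) Lomb-Scargle periodogram over 0.0005-100 cycles/day;
# the rotation-period estimate is the location of the strongest peak.
freq = np.linspace(0.0005, 100.0, 100_000)   # cycles per day
power = LombScargle(time, flux).power(freq)
prot = 1.0 / freq[np.argmax(power)]
print(f"Candidate rotation period: {prot:.2f} d")
```

In practice the peak would also be vetted by phase-folding the light curve and by fitting a Gaussian to the periodogram peak to estimate the period uncertainty, as described above.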
There are two host stars, K2-290 and WASP-84, with rotation period measurements adopted from <cit.> and <cit.>, respectively, which satisfy our initial selection cuts but did not show clear periodic brightness variations in their TESS or Kepler light curves. We adopt the published rotation periods for these two systems.

We then retrieve v sin i and stellar radii measurements from the literature. Published projected rotational velocities are compiled, and a weighted mean of available measurements is adopted following the procedure in <cit.>. The spectra of K2-281, K2-77, Kepler-486, Kepler-52, and Kepler-1654 were obtained with the HIRES spectrometer () on the 10-m Keck-I Telescope between 2012 and 2018. The spectra were observed as part of several reconnaissance efforts to characterize Kepler and K2 planet-hosting stars by the California Planet Search () described in <cit.> and <cit.>. Spectral S/N ranged from 22-45 per reduced pixel on blaze at 5500 Å. We used code described in <cit.> to determine v sin i. At this S/N, the code returns v sin i measurements with uncertainties of 1 km s^-1 when v sin i is larger than 2 km s^-1. When v sin i is lower, the results are upper limits with v sin i < 2 km s^-1.

All hot Jupiters in our sample, including the parameters for their host stars, are obtained from <cit.> and have measured minimum masses of m_p sin i = 0.3–13 M_Jup and a/R_* < 20. We further filter the sample based on binary architecture. Close binaries with P-type circumbinary planets (planets orbiting around more than one host star) are removed, as migration channels may differ in these dynamically complex systems. Altogether, this yielded samples of 36 transiting hot Jupiters and 24 transiting warm Jupiters with measured rotation periods, v sin i values, and radius estimates for their host stars.

To generate a consistent comparison between the hot and warm Jupiter samples, we have made an additional cut to focus on cool stars with T_eff < 6200 K (see Section <ref>). This effective temperature corresponds to the Kraft break, a gradual transition between stars that experience Sun-like spin-down and stars that experience little to no angular momentum loss (). <cit.> discovered that hot stars with thin outer convective zones cannot support magnetized winds, while cool stars with T_eff ≲ 6200 K experience substantial angular momentum loss due to the presence of large convection zones and strong winds. A Gaia color-magnitude diagram of our full sample of host stars can be seen in Figure <ref>. The final number of hot and warm Jupiters orbiting cool host stars with P_rot, v sin i, and R_* constraints is 25 and 22, respectively, as shown in Figure <ref>.

§ RESULTS

§.§ Stellar Inclinations

Our approach to infer stellar inclinations follows the Bayesian framework from <cit.>. They considered the relationship between the stellar equatorial velocity, 2π R_*/P_rot, and the projected rotational velocity, v sin i, while properly accounting for the correlation between these parameters. <cit.> derived analytical expressions for the stellar inclination posterior P(i_*) assuming uniform priors on v sin i, R_*, and P_rot; an isotropic (sin i_*) prior on stellar inclination; and a moderately precise constraint on the stellar rotation period (σ_P_rot/P_rot ≲ 20%):

p(i_* | P_rot, R_*, v sin i_*) ∝ sin i_* × exp[ -(v sin i_* - (2π R_*/P_rot) sin i_*)^2 / (2(σ_v sin i_*^2 + σ_v_eq^2 sin^2 i_*)) ] / √(σ_v sin i_*^2 + σ_v_eq^2 sin^2 i_*),

where

σ_v_eq = (2π R_*/P_rot) √((σ_R_*/R_*)^2 + (σ_P_rot/P_rot)^2).
Here σ_P_rot, σ_v sin i, and σ_R_* are the uncertainties on the rotation period, projected rotational velocity, and stellar radius. Differential rotation will cause starspots located at mid-latitudes to travel faster than the equatorial velocity. This can bias rotation periods inferred from light curves (e.g., ). To account for these potential systematic errors, we inflate the nominal rotation period uncertainty from our light curve periodogram analysis following <cit.>. Assuming a Sun-like pole-to-equator absolute shear of 0.07 rad day^-1, this typically increases the rotation period uncertainty by a factor of ≈3 (with a range of 1–70).

Line-of-sight stellar inclination posteriors are determined in this fashion for hot and warm Jupiter host stars in our sample using new and compiled v sin i values, rotation periods, and radius estimates (Table <ref>; Table <ref>). In one instance, Kepler-1654, the host star is a slow rotator and the v sin i value is only constrained to <2 km s^-1. In this case we use Equation A17 from <cit.>, which accounts for rotational broadening as an upper limit:

p(i_* | P_rot, R_*, v sin i_*) ∝ sin i_* × [ erf( (l - (2π R_*/P_rot) sin i_*) / (√2 σ_v_eq sin i_*) ) + erf( √2 π R_* / (σ_v_eq P_rot) ) ],

where l is the upper limit on the projected rotational velocity and erf is the error function.

Results for all 61 host stars are shown in Figures <ref>–<ref>, and summary statistics for each distribution can be found in Table <ref>. There are a wide variety of constraints; in some cases there is only a small departure from the sin i_* isotropic prior, while in other cases inclinations are constrained to within a few degrees. Overall these results are in good agreement with previous stellar inclination measurements. For instance, <cit.> found i_* = 90^+0_-11 for CoRoT-2 and 73^+12_-6 for WASP-62, while we derive i_* = 90^+0.1_-8 and 73^+11_-6, respectively.

It is immediately evident from these posterior distributions which systems host misaligned planets. Any distribution that departs from i_* = 90 implies a minimum misalignment of at least that difference because transiting planets have orbital inclinations of ≈90. We note, however, that there could be misaligned systems in this sample that do not have host stars with inclinations that depart from 90, because the true obliquity angle also depends on the polar position angle of the star and the longitude of ascending node of the planet's orbit. Many systems stand out as being significantly misaligned. Some of these are previously known, such as HAT-P-20 () and Kepler-63 (), while several are newly identified in this work, including Kepler-539 b, with an orbital period of 125 days (a/R_* = 94.61; ), and Kepler-1654 b, which orbits at 1047 days (a/R_* = 370.3; ).

For this study we have adopted the following classification for aligned and misaligned systems. Host stars that are misaligned by more than 10^∘ (i.e., have i_* < 80^∘) with 90% confidence, and whose maximum a posteriori probability (MAP) inclination lies below 80^∘, are classified as misaligned. Hosts that satisfy the same criteria at 80% confidence are likely misaligned. Following this framework, 7 out of 25 hot Jupiters around cool stars are either misaligned or likely misaligned. For the warm Jupiter sample, 6 out of 22 stars are misaligned or likely misaligned. We find with 90% confidence that the probability for any particular warm Jupiter host star to be misaligned by at least 10^∘ is 24^+9_-7%. We also find with 90% confidence that the probability for any particular hot Jupiter host star to be misaligned by at least 10^∘ is 14^+7_-5%.
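As an illustration of how the two expressions above translate into numerical posteriors, here is a small sketch that evaluates them on an inclination grid. The input values are invented for illustration and are not the measured parameters of any star in the tables; the grid, units, and normalization choices are likewise arbitrary.

```python
import numpy as np
from scipy.special import erf

R_SUN_KM = 695_700.0   # solar radius in km
DAY_S = 86_400.0       # seconds per day

def veq_and_sigma(prot, prot_err, rstar, rstar_err):
    """Equatorial velocity 2*pi*R_*/P_rot and its uncertainty, in km/s."""
    veq = 2.0 * np.pi * rstar * R_SUN_KM / (prot * DAY_S)
    return veq, veq * np.hypot(rstar_err / rstar, prot_err / prot)

def incl_posterior(i_rad, prot, prot_err, rstar, rstar_err, vsini, vsini_err):
    """Posterior for a measured v sin i (first equation above)."""
    veq, sig_veq = veq_and_sigma(prot, prot_err, rstar, rstar_err)
    s = np.sin(i_rad)
    var = vsini_err**2 + (sig_veq * s) ** 2
    post = s * np.exp(-((vsini - veq * s) ** 2) / (2.0 * var)) / np.sqrt(var)
    return post / np.trapz(post, i_rad)

def incl_posterior_upper(i_rad, prot, prot_err, rstar, rstar_err, vsini_limit):
    """Posterior when only an upper limit l on v sin i is available (second equation)."""
    veq, sig_veq = veq_and_sigma(prot, prot_err, rstar, rstar_err)
    s = np.sin(i_rad)
    post = s * (erf((vsini_limit - veq * s) / (np.sqrt(2.0) * sig_veq * s))
                + erf(veq / (np.sqrt(2.0) * sig_veq)))
    return post / np.trapz(post, i_rad)

# v sin i only constrains sin(i_*), so i_* and 180 deg - i_* are degenerate;
# as in the text, the grid is restricted to 0-90 deg.
i_grid = np.linspace(1e-3, np.pi / 2.0, 2000)

# Illustrative star: P_rot = 12 +/- 1 d, R_* = 0.95 +/- 0.04 R_sun, v sin i = 2.5 +/- 1.0 km/s
p = incl_posterior(i_grid, 12.0, 1.0, 0.95, 0.04, 2.5, 1.0)
i_map = np.degrees(i_grid[np.argmax(p)])
low = i_grid < np.radians(80.0)
prob_mis = np.trapz(p[low], i_grid[low])
print(f"MAP i_* = {i_map:.1f} deg; P(misaligned by > 10 deg) = {prob_mis:.2f}")

# Upper-limit case, e.g. a slow rotator with v sin i < 2 km/s
p_ul = incl_posterior_upper(i_grid, 30.0, 3.0, 1.0, 0.05, 2.0)
```

The last lines of the first example mirror the classification used above: a star would be flagged as misaligned if the integrated probability of i_* < 80^∘ exceeded 0.9.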
These results are summarized in Table <ref>.

§.§ Hierarchical Bayesian Analysis

Hierarchical Bayesian modeling (HBM) offers a natural framework to simultaneously infer parameters of individual systems and hyperparameters governing the underlying behavior of a population. In this study we follow the sampling approach outlined in <cit.>. We infer the underlying distribution of i_* values for our sample of hot and warm Jupiter hosts assuming a flexible population-level parametric model. The hot and warm Jupiter samples are separately analyzed using an open-source Python package for fitting population-level distributions to sets of individual system distributions (). Our adopted underlying model is the Beta distribution, a family of continuous probability distributions constrained on the interval [0,1] with two free parameters α and β:

B(x) = Γ(α+β)/(Γ(α) Γ(β)) x^(α-1) (1-x)^(β-1).

For this analysis, each minimum obliquity distribution is first re-mapped onto a new variable θ' = θ/90 so as to span a range of 0–1 rather than 0–90. In the framework of HBM, α and β become hyperparameters whose posterior distributions are constrained using the affine-invariant Markov chain Monte Carlo sampler ().[We also re-parameterized the Beta distribution following <cit.>, where α = μκ and β = (1-μ)κ, and reproduce similar results with a uniform hyperprior spanning 0 to 1 on μ and a log-normal hyperprior on κ centered on 0 with a standard deviation of 3. Results for hot and warm Jupiter hosts are consistent, reinforcing the similarity of the two populations.] To test the impact of our choice of hyperpriors on the α and β posteriors, we carry out two fits using different hyperpriors on each parameter: a truncated Gaussian with μ = 0.69 and σ = 1.0,

p(x) = (1/σ)(1/√(2π)) exp[-(1/2)((x-μ)/σ)^2] / (1 - (1/2)(1 + erf(-μ/(σ√2)))),

and a log-uniform distribution ranging from 0.01 to 100 (),

p(x) ∝ 1/x.

A burn-in fraction of 50% is adopted and 50 walkers are run for 5 × 10^4 steps. The best-fit posterior values are reported in Table <ref>. An overview of stellar inclination posteriors for the hot and warm Jupiter samples can be found in Figure <ref>.
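The hierarchical fit itself can be sketched in a few lines, here using emcee as one example of an affine-invariant ensemble sampler. The snippet below is only a schematic stand-in for the population-inference package used in the text: the synthetic per-star draws, the Monte Carlo likelihood estimate, the log-uniform hyperprior, and the shortened sampler settings are all illustrative assumptions.

```python
import numpy as np
import emcee
from scipy.stats import beta as beta_dist

rng = np.random.default_rng(42)

# Stand-in for the per-star posteriors: i_* draws in degrees for each host.
# In practice these would be samples drawn from the posteriors computed above.
star_draws = [np.clip(rng.normal(75.0, 12.0, 500), 0.5, 89.5) for _ in range(22)]

def log_prob(theta, draws_list):
    a, b = theta
    if not (0.01 < a < 100.0 and 0.01 < b < 100.0):
        return -np.inf
    logl = 0.0
    for draws in draws_list:
        x = draws / 90.0                              # re-map i_* onto [0, 1]
        # Monte Carlo estimate of each star's likelihood under Beta(a, b); draws taken
        # from the sin(i_*)-prior posteriors above are re-weighted by that interim prior.
        interim = (np.pi / 2.0) * np.sin(x * np.pi / 2.0)
        like = np.mean(beta_dist.pdf(x, a, b) / interim)
        if not np.isfinite(like) or like <= 0.0:
            return -np.inf
        logl += np.log(like)
    return logl - np.log(a) - np.log(b)               # log-uniform hyperprior

nwalkers, ndim, nsteps = 50, 2, 2000
p0 = rng.uniform(0.5, 3.0, size=(nwalkers, ndim))
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(star_draws,))
sampler.run_mcmc(p0, nsteps)
alpha, beta = sampler.get_chain(discard=nsteps // 2, flat=True).T
print(f"alpha = {np.median(alpha):.2f}, beta = {np.median(beta):.2f}")
```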
Adopted Projected Rotational Velocities:

Name | v sin i (km s^-1) | Reference
Kepler-9 | 1.1±1.0 | <cit.>
Kepler-9 | 2.2±0.5 | <cit.>
Kepler-9 | 2.3 | <cit.>
Kepler-9 | 2.0±0.4 | Adopted
Kepler-51 | 5.4±0.6 | <cit.>
Kepler-51 | 5.5±1.0 | <cit.>
Kepler-51 | 6.83 | <cit.>
Kepler-51 | 5.4±0.5 | Adopted
Kepler-289 | 5.5±0.5 | <cit.>
Kepler-289 | 5.8±1.0 | <cit.>
Kepler-289 | 5.2 | <cit.>
Kepler-289 | 5.6±0.4 | Adopted
Kepler-447 | 7.3±0.5 | <cit.>
Kepler-447 | 6.9±1.0 | <cit.>
Kepler-447 | 7.47 | <cit.>
Kepler-447 | 7.2±0.4 | Adopted
Kepler-539 | 3.5±0.5 | <cit.>
Kepler-539 | 3.0±1.0 | <cit.>
Kepler-539 | 2.8±0.5 | <cit.>
Kepler-539 | 6.54 | <cit.>
Kepler-539 | 3.1±0.3 | Adopted
Kepler-1654 | 0.3 | <cit.>
Kepler-1654 | < 2.0 | <cit.>
Kepler-1654 | < 2.0 | This work
Kepler-1654 | < 2.0 | Adopted
V1298 Tau | 24.10±1.4 | <cit.>
V1298 Tau | 24.87±0.19 | <cit.>
V1298 Tau | 24.8±0.2 | Adopted
K2-77 | 4 | <cit.>
K2-77 | 2.9±1.0 | This work
K2-77 | 2.9±1.0 | Adopted
K2-139 | 2.8±0.6 | <cit.>
K2-139 | 1.7 | <cit.>
K2-139 | 2.8±0.6 | Adopted
K2-281 | 3±1 | This work
K2-281 | 3±1 | Adopted
TOI-4562 | 17±0.5 | <cit.>
TOI-4562 | 15.7±0.5 | <cit.>
TOI-4562 | 16.5±0.56 | <cit.>
TOI-4562 | 16.4±0.3 | Adopted
TOI-1227 | 16.65±0.24 | <cit.>
TOI-1227 | 16.65±0.24 | Adopted
Kepler-27 | 2.4±1.0 | <cit.>
Kepler-27 | 0.6±5.0 | <cit.>
Kepler-27 | 2.76±1.53 | <cit.>
Kepler-27 | 3.0±1.0 | <cit.>
Kepler-27 | 2.7±0.6 | Adopted
Kepler-28 | 3.8±1.0 | <cit.>
Kepler-28 | 5.5 | <cit.>
Kepler-28 | 3.8±1.0 | Adopted
Kepler-30 | 2.3±1.0 | <cit.>
Kepler-30 | 1.94±0.22 | <cit.>
Kepler-30 | 2.2±1.0 | <cit.>
Kepler-30 | 2.0±0.2 | Adopted
Kepler-470 | 20.9±1.0 | <cit.>
Kepler-470 | 20.5 | <cit.>
Kepler-470 | 20.9±1.0 | Adopted
Kepler-486 | 2.2±1.0 | This work
Kepler-486 | 2.2±1.0 | Adopted
Kepler-52 | 3±1 | This work
Kepler-52 | 3±1 | Adopted
Kepler-63 | 5.8±0.5 | <cit.>
Kepler-63 | 4.8±1.0 | <cit.>
Kepler-63 | 3.8±0.5 | <cit.>
Kepler-63 | 14±3 | <cit.>
Kepler-63 | 0±3 | <cit.>
Kepler-63 | 5.43 | <cit.>
Kepler-63 | 7.2 | <cit.>
Kepler-63 | 5.9 | <cit.>
Kepler-63 | 5.0±0.3 | Adopted
Kepler-75 | 3.1±1.0 | <cit.>
Kepler-75 | 3.2±1.0 | <cit.>
Kepler-75 | 3.2±0.7 | Adopted
Kepler-468 | 3.9±1.0 | <cit.>
Kepler-468 | 3.9±1.0 | Adopted
K2-329 | 1.9±0.5 | <cit.>
K2-329 | 1.9±0.5 | Adopted

Beta Distribution Model Posteriors from MCMC Fitting:

Sample | Hyperprior | α | β
Hot Jupiter | Gaussian | 2.40^+0.67_-0.59 | 0.59^+0.24_-0.16
Warm Jupiter | Gaussian | 2.04^+0.63_-0.54 | 0.56^+0.23_-0.16
Hot Jupiter | Log-uniform | 5.14^+4.04_-2.07 | 0.99^+1.09_-0.42
Warm Jupiter | Log-uniform | 2.74^+1.43_-0.98 | 0.63^+0.39_-0.21

§ DISCUSSION

Only a few previous studies have attempted to compare the spin-orbit patterns of warm Jupiters to those of hot Jupiters using fully or partially constrained obliquity angles. <cit.> examined the obliquities of short-period giant planet host stars and used a threshold of a/R_* > 10 to distinguish hot and warm Jupiters. In their sample, three of the four systems below ∼6200 K, with large scaled orbital distances beyond a/R_* > 10, are significantly misaligned. From this they speculated that warm Jupiters may have higher rates of misalignment compared to hot Jupiters. In this scenario, the difference between preferentially aligned hot Jupiter orientations and the broader warm Jupiter stellar obliquity distribution was attributed to tidal interactions at close separations. <cit.> also compared cool stars hosting hot and warm Jupiters in a consistent fashion using a cutoff of a/R_* > 11 to separate the two populations of planets. They found that all 12 of their warm Jupiter host stars are aligned and concluded that warm Jupiter hosts are more aligned than their hot Jupiter counterparts at the 3.3σ level. This suggests marginally significant evidence for a difference. This finding is counter to the trends hinted at in <cit.>.
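For a concrete sense of what the fitted hyperparameters in the table above imply, the short sketch below evaluates the corresponding Beta densities for the two samples (Gaussian-hyperprior values). The 80^∘ threshold simply echoes the 10^∘ misalignment criterion; the resulting smooth-population fractions are illustrative and are not the per-star probabilities quoted earlier.

```python
import numpy as np
from scipy.stats import beta as beta_dist

# Median hyperparameters from the table above (Gaussian hyperprior case)
populations = {"hot Jupiters": (2.40, 0.59), "warm Jupiters": (2.04, 0.56)}

x = np.linspace(1e-4, 1.0 - 1e-4, 1000)   # x = i_*/90
for name, (a, b) in populations.items():
    pdf = beta_dist.pdf(x, a, b)
    frac = beta_dist.cdf(80.0 / 90.0, a, b)   # fraction of the fitted population with i_* < 80 deg
    print(f"{name}: density peaks near i_* = {90.0 * x[np.argmax(pdf)]:.0f} deg; "
          f"fraction below 80 deg = {frac:.2f}")
```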
Our analysis in this study comprises 22 warm Jupiters with (minimum) obliquity constraints beyond a/R_* > 20, making it over 5 times larger than previous samples from <cit.> and <cit.> at the same scaled distance. We find that warm Jupiter obliquities fall between these previous results: hot and warm Jupiter host stars show a modest fraction (14–24%) of misalignments below the Kraft break, and the underlying minimum obliquity distributions are identical, at least to within the precision available given existing sample sizes. As seen in Figure <ref>, there is no discernible distinction in the underlying parent distributions of i_* values. Furthermore, the log-uniform and truncated Gaussian hyperpriors produce underlying i_* distributions that are similar between both populations of host stars, indicating that the posterior shapes are being driven by the data and not the hyperpriors. This is also reflected quantitatively in the consistent constraints on the Beta distribution model parameters α and β (Table <ref>). This suggests that warm Jupiters are occasionally misaligned at similar rates as hot Jupiters. Warm Jupiters do not appear to have more excited obliquities () or more aligned obliquities () compared to close-in giant planets.

Tidal torques have traditionally been invoked to damp stellar obliquities and realign planets at close separations (; ; ), but the similar minimum obliquity distributions within and beyond 0.1 AU call into question whether tides are in fact impacting the population-level obliquities of hot Jupiters. Tidal models have successfully shown that obliquity damping is possible for planets with orbital periods ≲ 3 days; however, it is less efficient at larger scaled distances (). The small fraction of misalignments observed among host stars below the Kraft break could instead be interpreted as cool stars being set by a primordial misaligned disk distribution—mostly aligned, but with occasional misalignment. Misalignments may also arise if broken or misaligned disks torque the spin axis of the host star (). This would imply that the hot and warm Jupiters around these cool stars either formed from coplanar planet-planet scattering or disk migration. <cit.> found the timescale for spin-orbit alignment is comparable to the orbit decay time for hot Jupiters. Therefore planets that are observed to be aligned most likely formed coplanar and did not experience consequential tidal interactions. Tidal realignment would therefore not need to be invoked, as the host stars are already primordially aligned or misaligned. If tides are not dominating the obliquity distribution of hot Jupiters, the different obliquity distributions for low- and high-mass stars could instead reflect differences in the distribution of primordial misaligned disks, or some other mechanism that is preferentially exciting hot Jupiter inclinations around hot stars. For instance, high-mass stars harbor more giant planets (; ; ), which could increase the chances of scattering events and excite mutual inclinations. The observation that this occurs near the Kraft break could be coincidental. Indeed, <cit.> recently found that stellar mass, as opposed to stellar effective temperature, is a better predictor of stellar obliquity. Alternatively, hot Jupiter host stars may have formed with broad obliquities and over time became tidally realigned as a result of interactions between the planet and the convective envelopes of cool stars, while hot stars retained a range of misalignments.
In this scenario, which follows the traditional interpretation of hot Jupiter obliquity patterns, warm Jupiters around cool stars are predominantly formed aligned and migrate through mechanisms that do not excite inclinations, like coplanar planet-planet scattering or disk migration. However, for this to hold true, we would have to be observing the realignment process of hot Jupiters at a special time where it happens to be consistent with the warm Jupiter host star obliquity distribution. In the past, the hot Jupiter distribution would have been much broader, and in the future (several Gyr from now) it would be narrower.

In summary, we conclude that the consistency of alignment between both hot and warm Jupiter host stars indicates that either tidal realignment is not shaping the hot Jupiter obliquity distribution, or we are observing the hot Jupiter realignment process at a time that happens to match the obliquity distribution of warm Jupiters.

It is also interesting to consider the broader obliquity distribution of cold Jupiters at wider separations. Little is known about spin-orbit angles for giant planets beyond ∼2 AU. However, <cit.> report that stars hosting directly imaged planets within 20 AU mostly show angular momentum alignment, in contrast to more massive brown dwarf companions. The trend of low obliquities for warm Jupiters may therefore extend to wide separations, although the sample of imaged planets with obliquity constraints remains quite limited ().

Future studies and additional observations are needed to distinguish which migration channels are dominating the hot and warm Jupiter populations. To further assess primordial misalignments and planet-host star interactions, observations of hot Jupiters around young stars, warm Jupiters around hot stars, and the primordial distribution of protoplanetary disk orientations would be helpful. Although each of these tests would be informative, they possess their own significant observational challenges. For instance, there are few hot Jupiters known around young stars, and it is difficult to measure obliquities of hot stars harbouring warm Jupiters. We did not include RM measurements in this analysis, but measurements of the projected angle between the orbital and stellar spin axes, λ, combined with the stellar inclination i_*, will be valuable for characterizing the full obliquities of these systems (; ; ; ).

§.§ Potential Biases

Misalignment Probabilities:

Sample | Misalignment threshold (≥ Δi) | > 80% probability | ≥ 90% probability
Hot Jupiter | 5^∘ | 23/25, 89^+3_-6% | 12/25, 50^+9_-9%
Warm Jupiter | 5^∘ | 20/22, 89^+5_-7% | 6/22, 29^+10_-8%
Hot Jupiter | 10^∘ | 9/25, 37^+10_-8% | 3/25, 14^+7_-5%
Warm Jupiter | 10^∘ | 6/22, 29^+10_-8% | 5/22, 24^+9_-7%
Hot Jupiter | 20^∘ | 2/25, 10^+6_-3% | 2/25, 10^+6_-3%
Warm Jupiter | 20^∘ | 5/22, 24^+9_-7% | 2/22, 12^+7_-4%

Here we outline potential biases in this analysis. These could in principle impact our results, either in an absolute sense (such as by biasing measurements) or in a relative sense (for instance, when comparing hot and warm Jupiter distributions). We argue that while there are several ways to individually bias i_* values or i_* distributions, it is unlikely that these impact the relative comparison of hot and warm Jupiter obliquities—a key result from this study.

§.§.§ i_* Analysis Bias

Our results rely on the homogeneous and self-consistent analysis of i_* measurements.
These measurements provide meaningful constraints on misalignments, but without sky-projected obliquities (through RM measurements, for instance), the true obliquity cannot be fully determined. There are several factors that could bias stellar inclinations including overestimated v sin i values for slow rotators, miscalculation of rotation periods due to spots at non-equitorial latitudes, and over (or under) estimates of R_* from evolutionary models or SED fitting. However, when comparing the distributions of these parameters for hot and warm Jupiters, there are no indications of strong differences that might impact one sample over the other.§.§.§ Age BiasThe formation of hot Jupiters from disk migration or high-eccentricity migration can occur over a broad range of timescales from a disk lifetime to a Hubble time. However, once hot Jupiters have migrated, tidal realignment is generally expected to operate on long timescales of several Gyr for planets with orbital periods greater than a few days (; ).Age is therefore an important parameter to consider between our hot and warm Jupiter samples.If tidal torques shape the hot Jupiter obliquity distribution, then a young hot Jupiter sample might show a broader i_* distribution while an older population would be preferentially aligned. This could impact the interpretation of our comparison of the reconstructed stellar inclination distributions. In Figure <ref>, we find that the warm Jupiter population is on average younger than the hot Jupiter population. The hot Jupiter sample spans all ages while the warm Jupiters only have ages up to 6 Gyr. One explanation for the younger warm Jupiter sample is that we only include systems with readily retrievable light curve rotation periods. This biases our warm Jupiter population to younger systems because rotation periods are shorter, starspot covering fractions are larger, and the light curve amplitudes are higher. To assess the impact of the broader hot Jupiter ages, we ran additional Hierarchical Bayesian statistical tests to more fairly compare the hot and warm Jupiter populations by selecting systems with effective temperatures less than 6200 K and younger than 6 Gyr (the full range of the warm Jupiter sample). For the hot Jupiter sample < 6 Gyr, we find α and β values of [1.84^+0.7_-0.60, 0.76^+0.46_-0.31] and[4.06^+6.37_-2.25, 1.47^+3.38_-0.91] for Gaussian and log-uniform hyperpriors, respectively. The warm Jupiter sample < 6 Gyr, yields α and β values of [2.04^+0.63_-0.54, 0.57^+0.23_-0.16] and [2.74^+1.43_-0.97, 0.63^+0.38_-0.21] for Gaussian and log-uniform priors, respectively. There is no significant difference in the underlying frequency of host star alignment or reconstructed i_* distributions, as seen in Figure <ref>. We conclude that the broader age distributions of hot Jupiters does not appear to impact the results or interpretation from this work. §.§.§ Orbital Distance BiasIt is also possible that our choice for scaled orbital distance to separate the hot and warm Jupiter samples could impact the results, especially given the modest sizes of both samples. To test this, we account for orbital distance as a potential bias by separating the hot and warm Jupiter populations with an a/R_* cut at 10. This value is closer to the distinction of the two Jovian populations in <cit.> and <cit.>. Additional HBM statistical tests are run with an a/R_* cut of 10 and an effective temperature cut at 6200 K to isolate cool host stars. 
For the hot Jupiter sample (a/R_* < 10), we find α and β values of [2.36^+0.71_-0.63, 0.54^+0.24_-0.16] and [31.74^+39.87_-21.81, 1.55^+3.56_-0.87] for Gaussian and log-uniform priors, respectively. We note that with this particular test, the hot Jupiter sub-sample is more prior dependent than other tests we ran. The warm Jupiter sample (a/R_* > 10) produced α and β values of [2.05^+0.63_-0.54, 0.56^+0.23_-0.16] and [3.14^+1.41_-1.01, 0.61^+0.29_-0.18] for Gaussian and log-uniform priors, respectively. No distinct difference is evident when compared to our nominal threshold of a/R_* = 20, as seen in Figure <ref>. We conclude that our specific choice of a/R_* to define the hot and warm Jupiter samples does not appear to impact the results.

§.§.§ Small Sample Bias

<cit.> and <cit.> performed tests to assess how reliably an input underlying distribution could be reproduced using HBM with simulated measurements as a function of sample size and measurement uncertainty. Although their experiments were carried out for eccentricities, the results can equally apply to stellar inclinations. Several hyperpriors on the Beta distribution shape parameters were tested; <cit.> found that a truncated Gaussian hyperprior reliably recovered the characteristic shape of the input distribution for sample sizes as small as 5 and eccentricity uncertainties as large as 0.2, which corresponds to stellar inclination uncertainties of 18^∘. Samples of 20—similar to the sizes used in this study—were even more accurate and substantially improved the precision of the posterior distribution. This indicates that although the samples remain modest for the hot and warm Jupiter populations, the consistency of the recovered underlying distributions is expected to be robust.

§.§.§ Viewing Angle Bias

In order to infer true spin-orbit angles, the sky-projected obliquity, stellar inclination, and inclination of the planet's orbital plane must be known. The most straightforward way to fix one of these parameters is through an edge-on configuration with transiting planets. However, if i_* is known, the orientation of the spin axis relative to the orbital plane implies a minimum misalignment angle, and bounds ψ to lie between |Δi| ≈ |π/2 - i_*| (for a transiting planet) and ≈ π - |π/2 - i_*|, i.e., somewhere between 0 and 180. This means that true obliquities can, in principle, be very different than inferred minimum obliquities, even for large values of i_*.

Figure <ref> demonstrates how ψ values from the literature correlate with our measurements of i_* for systems in our sample where true obliquities are available. A lower i_* and a higher ψ measurement both indicate increased misalignment. All systems consistent with misalignment in our i_* analysis are also misaligned in ψ space. This demonstrates that our constraints on i_* (as well as P_rot, v sin i, and R_*) are reasonable as they do not fall below the 1:1 relation when compared with ψ. It also illustrates that while ψ can and does depart from i_*, a preferentially aligned distribution in λ, like that of the hot Jupiters around cool stars, also imprints a preferentially aligned distribution in i_*. In addition, in Figure <ref> we show that our equatorial velocities are consistent to within 2 sigma of our adopted v sin i measurements. This further reinforces the reliability of our P_rot, v sin i, and R_* measurements. Differentiating between coplanarity and the alignment of the star's rotational and planet's orbital axes could also play a role.
Differentiating between coplanarity and the alignment of the star's rotational axis with the planet's orbital axis could also play a role. For instance, the star and planet could both be coplanar but the planet could be orbiting retrograde. This could impact both our true reconstructed underlying i_* distributions and our relative comparison, if there is a significant difference between the rate of hot and warm Jupiters on retrograde orbits. One argument against this playing a significant role comes from RM measurements, where the fraction of retrograde orbits is small. Most hot Jupiters with RM measurements have been found to orbit prograde, although the rate of retrograde warm Jupiters is not yet established.

§ NOTES ON INDIVIDUAL SYSTEMS

With at least 90% confidence, we report 3 new misaligned transiting planets in this study based on the inferred inclination of the rotational axis of their host stars. These systems are misaligned by at least 10^∘ and have MAP values of i_* < 80^∘.

§.§ Kepler-1654

Kepler-1654 is a G-type star hosting a 0.8 R_Jup planet on a 1047-day orbit (2.0 AU). We report a line-of-sight stellar spin inclination of 29^+5_-13. Here we have adopted the upper limit of 2 km s^-1 from <cit.> to infer the posterior distribution of i_*, which is shown in Figure <ref>. This implies that the star's equatorial plane is significantly misaligned with the orbital plane of the planet by at least 61^+13_-5. To date, Kepler-1654 b is the longest-period giant planet found in a misaligned system. The misalignment may be a sign of an undetected planetary companion.

§.§ Kepler-539

Kepler-539, a solar-type G star hosting a giant planet with a minimum mass of 0.97 M_Jup and a period of 125 days (0.5 AU), was announced by <cit.>. We derive a stellar inclination of i_* = 51^+11_-8. The full posterior distribution is shown in Figure <ref>. After Kepler-1654 b, Kepler-539 b is the second-longest-period planet in a misaligned system currently known, with a misalignment between the orbital plane and stellar inclination of 39^+8_-11.

§.§ Kepler-30

Kepler-30 is a Sun-like star that hosts three transiting planets. Kepler-30 c is a giant planet in this multi-planet system with a minimum mass of 2 M_Jup and period of 60 days (0.3 AU). Using two different methods to measure λ, <cit.> report values of 4^+10_-10 and -1^+10_-10, suggesting alignment of the stellar spin axis with the orbital plane. We derive an i_* = 43^+15_-9, as can be seen in Figure <ref>. We combine the two independent measurements of λ from <cit.> as a weighted mean (λ = 1.5 ± 7.1) together with i_* and report a 3D obliquity ψ = 90^+34_-34. This differs from the conclusions drawn by <cit.> because although the projected stellar spin axis may be aligned with the orbital plane, the stellar inclination is misaligned, resulting in a high obliquity. This severe offset suggests the misalignment was present during the system's early stages when planets were forming in the disk, or some other mechanism has subsequently tilted the star's orientation after formation. The Kepler-30 system joins other misaligned multi-planet systems with ψ ≳ 40^∘, including K2-290, Kepler-56, and Kepler-129.
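For systems where λ, i_*, and the orbital inclination are all constrained, the three-dimensional obliquity follows from the usual spherical-trigonometry identity cos ψ = cos i_* cos i_orb + sin i_* sin i_orb cos λ. The Monte Carlo sketch below, with inputs loosely based on the Kepler-30 c numbers quoted above, shows how marginalizing over the i_* ↔ 180^∘ − i_* degeneracy (v sin i only constrains sin i_*) naturally pushes the combined ψ toward ≈ 90^∘ with a broad uncertainty. It is an illustration of the combination, not a reproduction of the published posterior; the asymmetric i_* posterior is crudely approximated by a symmetric Gaussian.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

# Illustrative inputs motivated by the Kepler-30 c discussion above
lam = np.radians(rng.normal(1.5, 7.1, N))        # weighted-mean lambda [deg -> rad]
i_star = np.radians(rng.normal(43.0, 12.0, N))   # line-of-sight stellar inclination
i_orb = np.radians(90.0)                         # edge-on orbit of a transiting planet

# v sin i only constrains sin(i_*), so i_* and 180 deg - i_* are equally likely;
# flip half of the samples to account for this degeneracy.
flip = rng.random(N) < 0.5
i_star = np.where(flip, np.pi - i_star, i_star)

# cos(psi) = cos(i_*) cos(i_orb) + sin(i_*) sin(i_orb) cos(lambda)
cos_psi = np.cos(i_star) * np.cos(i_orb) + np.sin(i_star) * np.sin(i_orb) * np.cos(lam)
psi = np.degrees(np.arccos(np.clip(cos_psi, -1.0, 1.0)))

lo, med, hi = np.percentile(psi, [16, 50, 84])
print(f"psi ~ {med:.0f} (+{hi - med:.0f}/-{med - lo:.0f}) deg")
```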
§ CONCLUSION

In this work, we presented line-of-sight inclinations for 48 cool stars harboring giant planets. We find Kepler-1654 b and Kepler-539 b to be two of the longest-period giant planets known in misaligned systems. In addition, Kepler-30 is a newly identified misaligned multi-planet system. By comparing the reconstructed underlying i_* distributions using hierarchical Bayesian modeling, we do not find a distinct difference between the inferred minimum misalignments of hot and warm Jupiter host stars. Below the Kraft break, we find with 90% confidence that 24^+9_-7% of warm Jupiters and 14^+7_-5% of hot Jupiters are misaligned by at least 10^∘ in i_*. There are two broad interpretations when considering this result together with the excited obliquity distribution of hot Jupiters around hot stars.

* In the first scenario, giant planets form and undergo inward coplanar migration in aligned disks. Tidal realignment of hot Jupiters is not damping the obliquity distribution around cool stars, and instead the obliquities of more massive stars are preferentially excited, perhaps because of increased scattering in the presence of a larger number of giant planets or a broader initial disk distribution. The transition near the Kraft break is coincidental, and the differences in obliquity distributions between cool and hot stars are best described by stellar mass.

* Alternatively, tidal realignment is operating, but the evolving hot Jupiter obliquity distribution (from broad to narrow) happens to match the warm Jupiter distribution right now at the typical ages of field stars.

Further observations of transiting planets around evolved stars, hot Jupiters around young stars, warm Jupiters around hot stars, and the distribution of protoplanetary disk orientations will be necessary to disentangle the primordial and post-formation misalignment hypotheses. Obliquity measurements will provide valuable clues into the relatively unknown dynamical histories of these two planet populations.

§ ACKNOWLEDGMENTS

We thank Rebekah Dawson, Eugene Chiang, and J.J. Zanazzi for insightful conversations. B.P.B. acknowledges support from the National Science Foundation grant AST-1909209, NASA Exoplanet Research Program grant 20-XRP20_2-0119, and the Alfred P. Sloan Foundation. This research has made use of the NASA Exoplanet Archive, which is operated by the California Institute of Technology, under contract with the National Aeronautics and Space Administration under the Exoplanet Exploration Program, and <cit.>, an online database with sources collected by the Centre de Données de Strasbourg (CDS). <cit.>, <cit.>, <cit.>, <cit.>, <cit.>, and <cit.>

Host star properties.
Name Light Curve P_rot σ_P,tot vsini_* σ_v sin i i_* σ_i_* a/R_* σ_a/R_*,tot R_* σ_R_*,tot T_eff Age Ref.Provenance (d) (d) (km s^-1) (km s^-1) (^∘) (^∘)(R_⊙) (R_⊙) (K)(Gyr)15cWarm Jupiters (a/R_* > 20) Kepler-9 Kepler Q1-17 16.82 0.062.0 0.4 46.01 ^+17.02_-12.38 31.3 1.1 0.96 0.02 5774^+60_-60 1.91^+1.09_-0.57 1 Kepler-27 Kepler Q1-17 14.77 0.06 2.7 0.6 90 ^+0.14_-27.28 30.12 0.6 0.761 ^+0.049_-0.046 5294^+68_-56 1.62^+0.98_-0.47 2,3 Kepler-28 Kepler Q1-17 17.97 0.08 3.8 1.0 90 ^+0.05_-25.26 25.75 0.80 0.649 ^+0.029_-0.0274690^+63_-50 2.24^+1.82_-0.772,3 Kepler-30 Kepler Q1-17 16.19 0.06 2.0 0.2 42.86 ^+15.49_-9.41 68.1 9.2 0.95 0.12 5452^+58_-68 1.58^+0.92_-0.42 4,5 Kepler-51 Kepler Q1-17 8.18 0.03 5.4 0.5 88.11 ^+1.8_-17.78 94.1 2.2 0.881 0.011 5670^+60_-60 0.5^+0.25_-0.25 6,7 Kepler-52 Kepler Q1-17 11.99 0.06 3.0 1.0 90 ^+0.09_-30.71 23.07 0.37 0.63 ^+0.01_-0.02 4242^+41_-353.55^+4.6_-1.95 8,9 Kepler-63 Kepler Q0-17 5.4 0.03 5.0 0.3 36.51 ^+3.52_-2.65 20.79 0.46 0.9 ^+0.027_-0.022 5576^+50_-50 0.21^+0.05_-0.05 10 Kepler-75Kepler Q1-17 19.2 0.08 3.20.7 90 ^+0.09_-22.92 20.49 0.75 0.88 0.04 5206^+40_-456.2^+3.5_-2.8 11,12 Kepler-289 Kepler Q1-17 8.74 0.11 5.6 0.4 76.49 ^+13.1_-6.84 108.6 1.1 1.0 0.02 5990^+38_-380.65^+0.44_-0.44 3,13 Kepler-447 Kepler6.47 0.037.20.4 61.77 ^+19.9_-10.22 20.41 ^+0.36_-0.19 1.05 0.19 5615^+60_-55 2.69^+2.02_-1.1614 Kepler-468 Kepler Q1-17 11.09 0.06 3.9 1.0 90 ^+0.09_-29.71 54.52 1.25 0.87 ^+0.02_-0.015498^+60_-68 2^+1.29_-0.73 9 Kepler-470 Kepler Q0-17 24.69 0.14 20.9 190^+0.05_-5.5824.04 1.082 1.66 ^+0.67_-0.3 6613^+197_-197 1.86^+0.68_-0.439 Kepler-486 Kepler 30.39 0.25 2.21.0 90 ^+0.05_-33.9 50.62 1.35 0.75 ^+0.02_-0.034926^+44_-45 4.47^+6.15_-2.669 Kepler-539 Kepler Q0-17 11.97 0.03 3.1 0.3 51.19 ^+11.12_-8.06 94.61 ^+7.53_-6.50 0.95 0.025820^+80_-80 2^+1.2_-0.63 15 Kepler-1654 Kepler Q0-17 16.95 0.08 < 2 0.0 29.35 ^+4.81_-13.18 370.3 ^+2.2_-4.7 1.18 0.03 5597^+95_-935^+1_-1 16 K2-77 K2 Campaign 4 20.56 2.15 2.9 1.090 ^+0.05_-29.67 23.2 ^+6.4_-2.3 0.76 0.03 5070^+50_-50 0.85^+0.45_-0.4517 K2-139 K2 Campaign 7 17.26 1.53 2.80.690 ^+0.09_-26.11 47.25 ^+0.73_-1.98 0.88 0.01 5370^+68_-68 1.8^+0.3_-0.318,19 K2-281 K2 Campaign 8 28.65 4.64 3.0 1.0 90 ^+0.05_-29.62 25.9 ^+0.69_-1.85 0.76 0.01 4812^+72_-72 ⋯ 19 K2-329 K2 Campaign 12 25.31 3.73 1.9 0.5 90 ^+0.09_-28.45 26.62 0.46 0.822 0.02 5282^+40_-391.8^+2.2_-1.3 20,21 TOI-1227 TESS 1.66 0.03 16.65 0.24 77.35 ^+11.21_-5.4 34.01 ^+0.97_-1.00 0.56 0.03 3072^+74_-74 0.01^+0.002_-0.002 22 TOI-4562 TESS 3.81 0.03 16.4 0.3 90 ^+0.09_-7.92 147.4 ^+1.44_-1.26 1.152 0.046 6096^+32_-32 0.3^+1_-1 23 V1298 Tau TESS2.97 0.06 24.8 0.2 90^+0.05_-8.5527 1.1 1.34 0.06 4970^+120_-120 0.02^+0.004_-0.004 24 K2-290 ⋯ 6.63 0.66 6.9 ^+0.5_-0.6 37.1 ^+8.24_-6.26 43.5 1.21.51 0.08 6302^+120_-1204^+1.6_-0.827,28 WASP-84 ⋯ 14.36 0.35 2.56 0.08 70.82 ^+13.37_-7.83 21.70 0.72 0.77 0.02 5280^+80_-802.1^+1.6_-1.6 25,26 15cHot Jupiters (a/R_* < 20) CoRoT-2 ⋯4.520.02 10.3 0.9 90 ^+0.09_-17.2 18.92 ^+2.15_-2.420.9 0.02 5598^+50_-502.66^+1.62_-1.62 29,30 CoRoT-18 ⋯ 5.53 0.33 8 1 90 ^+0.05_-22.65 7.01 ^+0.28_-0.38 0.88 0.03 5440^+100_-10010.69^+3.82_-3.8231 EPIC 246851721 ⋯ 1.14 0.06 74.92^+0.62_-0.60 90 ^+0.05_-11.489.59 0.23 1.62 0.04 6202^+50_-52 3.02^+0.44_-0.46 32 HAT-P-20 ⋯ 14.48 0.02 1.85 0.27 52.63 ^+16.43_-11.66 11.36 0.14 0.68 0.01 4595^+45_-45 6.7^+5.7_-3.8 33 HAT-P-22 ⋯28.7 0.41.65 0.26 64.79 ^+20.39_-10.63 8.45 0.4 1.06 0.05 5314^+50_-50 12.4^+2.4_-2.4 34 HAT-P-36 ⋯15.3 0.4 3.12 0.75 73.16 ^+16.75_-14.94.93 0.1 1.04 
0.025620^+40_-406.6^+2.9_-1.8 35 HATS-2 ⋯24.98 0.04 1.5 0.5 65.06 ^+24.08_-13.19 5.51 0.14 0.9 0.02 5227^+95_-95 9.7^+2.9_-2.9 29,36 HD 189733 ⋯ 11.95 0.02 3.25 0.0290 ^+0.05_-14.18 8.98 0.33 0.75 0.03 5050^+50_-50 6.2^+3.4_-3.4 37,38 HD 209458 ⋯10.65 0.75 4.8 0.2 60.78 ^+14.9_-8.37 8.78 0.15 1.16 0.01 6117^+50_-50 4^+1.2_-1.2 39,40 K2-29 ⋯10.76 0.22 3.7 0.568.66 ^+18.01_-9.05 10.54 0.14 0.86 0.01 5358^+38_-38 2.6^+1.2_-2.3541Kepler-17 ⋯ 12.09 0.24 4.7 190 ^+0.09_-24.36 5.7 ^+0.14_-0.41 0.98 ^+0.02_-0.055781^+85_-85 2.9^+1.5_-1.642 Qatar-1 ⋯23.7 0.12 1.7 0.3 90^+0.09_-26.34 6.25 0.160.8 0.02 4910^+100_-100 8.9^+3.7_-3.7 43,44 Qatar-2 ⋯18.5 1.9 2.8 0.5 90^+0.09_-19.94 6.53 0.1 0.7 0.01 4645^+50_-50 9.4^+3.2_-3.2 45,46,47 WASP-4 ⋯ 22.2 3.3 2.14^+0.38_-0.35 90^+0.05_-26.83 5.48 0.15 0.91 0.02 5540^+55_-55 7^+2.9_-2.9 29,48 WASP-5 ⋯16.2 0.4 3.2 0.3 71.5^+17.06_-7.48 5.42 0.22 1.09 0.045770^+65_-65 5.6^+2.2_-2.2 29,49WASP-6 ⋯23.8 0.15 1.6^+0.27_-0.17 63.08^+19.63_-10.49 10.3 0.4 0.86 0.03 5375^+65_-65 11^+3_-7 29,50,51 WASP-8 ⋯ 15.31 0.8 1.9 0.05 35.97^+4.87_-3.73 18 0.43 0.98 0.02 5690^+36_-36 4^+1_-1 52 WASP-19 ⋯12.13 2.14.4 0.9 90^+0.09_-28.18 3.45 0.07 1.02 0.01 5460^+90_-90 9.95^+2.49_-2.49 53,54 WASP-32 ⋯11.6 1 3.9^+0.4_-0.5 54.97^+19.23_-11.07 7.63 0.35 1.11 0.05 6100^+100_-100 2.22^+0.62_-0.7355,56 WASP-41 ⋯18.41 0.05 1.6 1.1 62.22^+27.37_-16.61 9.95 0.18 0.89 0.01 5546^+33_-33 9.8^+2.3_-3.9 29,57 WASP-43 ⋯15.6 0.4 2.26 0.54 90 ^+0.09_-27.78 4.92^+0.09_-0.1 0.67 0.01 4520^+120_-120 7^+7_-7 29,33 WASP-52 ⋯ 17.26 ^+0.51_-0.39 2.62 0.07 90 ^+0.05_-11.89 7.23 0.21 0.79 0.02 5000^+100_-100 10.7^+1.9_-1.9 58,59 WASP-69 ⋯ 23.07 0.16 2.2 0.4 90 ^+0.09_-21.66 11.97 0.44 0.81 0.03 4700^+50_-50 7^+7_-7 29,60WASP-85 ⋯ 13.08 0.26 3.41 0.89 86.53 ^+3.38_-27.82 8.97 0.32 0.94 0.02 5685^+65_-65 0.5^+0.3_-0.1 61WASP-94A ⋯ 10.48 1.64.2 0.533^+11.57_-7.47 7.3^+0.26_-0.22 1.62^+0.05_-0.046170^+80_-80 2.7^+0.6_-0.6 62 Kepler-8 ⋯ 7.13 0.14 8.9 1 57.85^+14.86_-10.58 6.98 0.18 1.5 0.04 6213^+150_-150 3.8^+1.5_-1.5 53 Kepler-448 ⋯ 1.29 0.03 66.43 ^+1.00_-0.95 90 ^+0.09_-15.76 19.92 1.88 1.63 0.15 6820^+120_-120 1.4^+0.5_-0.5 63WASP-7 ⋯ 3.68 1.23 14 244.57 ^+29.31_-11.16 9.08 0.56 1.48 0.09 6520^+70_-70 2.4^+1_-1 64 WASP-12 ⋯ 6.77 1.58 1.6 ^+0.8_-0.4 8.37 ^+5.14_-3.6 3.04 ^+0.11_-0.1 1.66^+0.05_-0.04 6313^+52_-52 2^+0.7_-2 53 WASP-33 ⋯ 0.52 0.05 86.63^+0.32_-0.3736.15^+4.69_-4.09 3.69^+0.05_-0.1 1.51^+0.02_-0.03 7430^+100_-100 0.1^+0.4_-0.09 65 WASP-62 ⋯ 6.65 0.13 9.3 0.272.85^+12.33_-5.99 9.53 0.391.28 0.05 6230^+80_-80 0.8^+0.6_-0.6 66 WASP-76 ⋯ 9.29 1.27 1.48 0.28 9.18^+2.57_-2.11 4.02 0.161.76 0.07 6329^+65_-65 1.82^+0.27_-0.27 67 WASP-121 ⋯ 3.38 0.4 13.56^+0.68_-0.69 39.12^+8.11_-6.3 3.8 0.11 1.44 0.03 6586^+59_-59 1.5^+1_-1 68 WASP-167 ⋯ 1.02 0.1 49.94 0.04 34.22^+4.72_-3.83 4.28 0.14 1.79 0.05 7043^+89_-68 1.54^+0.4_-0.4 69 XO-2 ⋯ 41.6 1.1 1.07 0.09 62.36^+19.4_-10.54 7.79 ^+0.36_-0.59 1 0.03 5332^+57_-57 7.8^+1.2_-1.370 XO-6 ⋯ 1.79 0.0648 362.04 ^+16.12_-9.59 8.08 1.03 1.93 0.18 6720^+100_-100 1.88^+0.9_-0.271 Kepler-447 was observed in Kepler Q0-7, Q9-11, Q13-15, Q17. Kepler-486 was observed in Kepler Q1-7, Q9-11, Q13-15, Q17. TOI-1227 was observed in TESS Sector 11, 12, 38. TOI-4562 was observed in TESS Sector 27-39. V1298 Tau was observed in TESS Sector 43, 44. The references column incorporates discovery, R_*, P_rot, Age, and T_eff references for all systems above. Values not found in the cited references are either taken from TEPCAT (), <cit.>, or <cit.>. 
(1) <cit.>; (2) <cit.>; (3) <cit.>; (4) <cit.>; (5) <cit.>; (6) <cit.>; (7) <cit.>; (8) <cit.>; (9) <cit.>; (10) <cit.>; (11) <cit.>; (12) <cit.>; (13) <cit.>; (14) <cit.>; (15) <cit.>; (16) <cit.>; (17) <cit.>; (18) <cit.>; (19) <cit.>; (20) <cit.>; (21) <cit.>; (22) <cit.>; (23) <cit.>; (24) <cit.>; (25) <cit.>; (26) <cit.>; (27) <cit.>; (28) <cit.>; (29) <cit.>; (30) <cit.>; (31) <cit.>; (32) <cit.>; (33) <cit.>; (34) <cit.>; (35) <cit.>; (36) <cit.>; (37) <cit.>; (38) <cit.>; (39) <cit.>; (40) <cit.>; (41) <cit.>; (42) <cit.>; (43) <cit.>; (44) <cit.>; (45) <cit.>; (46) <cit.>; (47) <cit.>; (48) <cit.>; (49) <cit.>; (50) <cit.>; (51) <cit.>; (52) <cit.>; (53) <cit.>; (54) <cit.>; (55) <cit.>; (56) <cit.>; (57) <cit.>; (58) <cit.>; (59) <cit.>; (60) <cit.>; (61) <cit.>; (62) <cit.>; (63) <cit.>; (64) <cit.>; (65) <cit.>; (66) <cit.>; (67) <cit.>; (68) <cit.>; (69) <cit.>; (70) <cit.>; (71) <cit.>aasjournal § LIGHT CURVE ANALYSIS Light curve analysis using Kepler, K2, TESS photometry, along with generalized Lomb-Scargle periodograms and phased light curves. § JOINT POSTERIOR DISTRIBUTIONS Here we show the joint posterior distributions between the hyperparameters α and β of our underlying Beta distribution.Figure <ref> illustrates how the joint and marginalized distributions are impacted by different hyperpriors between the hot and warm Jupiter host stars.
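As a rough way to quantify how similar two reconstructed underlying distributions are, one can compare the cumulative distributions implied by point estimates of the Beta shape parameters, whatever variable the Beta distribution is defined over in the hierarchical model. The snippet below does this for values of the order of those quoted for the age-matched (< 6 Gyr) samples with the Gaussian hyperprior; it ignores the substantial posterior uncertainties on α and β and is meant only as an illustration of the kind of comparison shown in the figure.

```python
import numpy as np
from scipy import stats

# Point estimates of the order of the <6 Gyr, Gaussian-hyperprior values
# quoted in the text (posterior uncertainties are ignored here).
hot = (1.84, 0.76)    # (alpha, beta) for the hot Jupiter hosts
warm = (2.04, 0.57)   # (alpha, beta) for the warm Jupiter hosts

x = np.linspace(1e-4, 1 - 1e-4, 2001)
cdf_hot = stats.beta.cdf(x, *hot)
cdf_warm = stats.beta.cdf(x, *warm)

# Maximum CDF difference (a Kolmogorov-Smirnov-like distance) between the two
# implied underlying distributions, whatever variable the Beta describes.
print(f"max |CDF_hot - CDF_warm| = {np.abs(cdf_hot - cdf_warm).max():.3f}")
```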
http://arxiv.org/abs/2310.18445v1
{ "authors": [ "Marvin Morgan", "Brendan P. Bowler", "Quang H. Tran", "Erik Petigura", "Vighnesh Nagpal", "Sarah Blunt" ], "categories": [ "astro-ph.EP" ], "primary_category": "astro-ph.EP", "published": "20231027194055", "title": "Signs of Similar Stellar Obliquity Distributions for Hot and Warm Jupiters Orbiting Cool Stars" }
EP VI, Center for Electronic Correlations and Magnetism, Institute of Physics, University of Augsburg, D-86159 Augsburg, Germany

EP VI, Center for Electronic Correlations and Magnetism, Institute of Physics, University of Augsburg, D-86159 Augsburg, Germany

The spin-reversal in dilute Li_2(Li_1-xFe_x)N with x < 1 % is dominated by resonant quantum tunneling of spatially well separated states. We report on the effect of finite couplings between those states that give rise to cooperative, simultaneous quantum tunneling of two spins. This phenomenon, known as spin-spin cross relaxation, effectively elucidates the fine-structure observed in isothermal magnetization loops, a previously unresolved aspect. Temperature- and field-dependent magnetization measurements were conducted over a range from T = 2 K to 300 K in applied fields of up to μ_0H = 7 T. Magnetic dipole fields are computed numerically. Our findings affirm the absence of stoichiometric defects in Li_2(Li_1-xFe_x)N and underscore its exemplary suitability as a model system for investigating spin-reversal processes at the microscopic level. This is attributed to its comparatively simple crystal structure, the availability of large single crystals, elevated characteristic energies, and well-defined energy levels.

Cooperative quantum tunneling of the magnetization in Fe-doped Li_3N

M. Fix and A. Jesche

January 14, 2024

§ INTRODUCTION

Controlling spin states in a solid at the atomic level is at the forefront of modern data storage and quantum computing. One of the most fascinating ways a spin can change its orientation is by quantum tunneling, which takes place not only on microscopic but also on mesoscopic scales <cit.>. This tunneling process allows for transitions between spin states that are separated by a large barrier although there is insufficient energy for a classical passage. Initially discovered in single molecule magnets (SMMs) <cit.>, this effect has since been intensively studied in the context of quantum computation <cit.>, quantum coherence <cit.>, and quantum entanglement <cit.>. Whereas most SMMs are built from clusters of magnetic ions, there are also systems based on the magnetic moment of isolated ions <cit.> that are referred to as single ion magnets (SIMs). Such mono-nuclear magnetic centers were also studied in a comparatively small number of purely inorganic materials <cit.>, for example Ho-doped LiYF_4 <cit.>, Cu-doped alkaline-earth phosphate apatites <cit.>, or Fe-doped Li_3N <cit.>. The latter shows quantum tunneling of the magnetization at comparatively high temperatures T > 10 K and extremely strong sensitivity to small applied fields <cit.>: H = 30 Oe applied parallel to the easy axis almost fully suppresses quantum tunneling and freezes the orientation of the magnetic moment either parallel or anti-parallel to the easy axis. On the other hand, H = 100 Oe applied perpendicular to the easy axis leads to strongly enhanced spin flip probability, a clear indication for resonant tunneling <cit.>.
The availability of large single crystals, high characteristic temperatures and energy scales, and structural simplicity make Li_2(Li_1-xFe_x)N an ideal model system to improve our understanding of quantum tunneling and related phenomena. X-ray spectroscopy results further reinforce this suitability, revealing that Li_2(Li_1-xFe_x)N is clean of defects and disorder with a random distribution of Fe in the Li_3N host matrix <cit.>. This is corroborated by Mössbauer spectroscopy, which revealed that quantum tunneling is still effective at high temperatures of T ∼ 70 K <cit.>. However, so far the structure of the isothermal magnetization curves (M(H)), which serve as one of the primary sources for analyzing quantum tunneling effects in SMMs and SIMs, was not well understood. Whereas the large step at H = 0 is accurately described by resonant tunneling of isolated Fe moments, there are additional, small but well-defined anomalies, for example at μ_0H = 0.13 and 0.4 T, that form a complex fine-structure (see below). A rough estimate yields a corresponding magnetic energy scale of 0.1-0.5 meV, which seems too large for hyperfine couplings but too small for any known electronic interaction <cit.>. Based on single-atom processes, the first step would require applied fields of several tens of Tesla in order to cause a crossing of energy levels. Here we show that almost all details of M(H) are well described by cooperative quantum tunneling (of pairs) of spins, which is a manifestation of spin-spin cross relaxation (SSCR) <cit.>. The process is based on a simultaneous transition of two spins under conservation of the total energy (see Fig. <ref>). A weak coupling due to dipolar and/or exchange interactions leads to a collective quantum process <cit.>. We shall discuss the presence of mesoscopic, entangled pairs of spins in Li_2(Li_1-xFe_x)N that extend over several unit cells of the host lattice.

§ EXPERIMENTAL

Single crystals of several millimeters along a side were grown from a lithium-rich flux <cit.>. Temperature-dependent and isothermal magnetization were measured using a Quantum Design Magnetic Property Measurement System (MPMS3) equipped with a 7 T magnet. The data obtained were corrected for the diamagnetic sample holder: the sample was sandwiched between two Torlon discs and fixed inside a straw. The diamagnetic contribution of the Li_3N host was subsequently subtracted using χ_M(Li^1+) = -8.8· 10^-12 m^3 mol^-1 <cit.> and χ_M(N^3-) = -1.63· 10^-10 m^3 mol^-1 <cit.>. Chemical analysis was performed using inductively coupled plasma optical emission spectroscopy (ICP-OES, Vista-MPX). To this end, the samples were dissolved in a mixture of hydrochloric acid and distilled water. For simplicity, the nominal Fe concentrations are used throughout the text. The measured values for x are as follows: 0.001 → 0.00130(10), 0.003 → 0.00291(18), 0.005 → 0.00533(32), 0.009 → 0.00886(53).

§ RESULTS

§.§ Temperature-dependent magnetization

In the following section, we are going to show that the measured magnetic susceptibility is consistent with the presence of non-equidistant doublet states, which is a prerequisite for the occurrence of SSCR. Figure <ref> shows the temperature-dependent magnetization M(T)· T of a single crystal of Li_2.999Fe_0.001N after field cooling in μ_0 H = 7 T. The H-field was applied parallel to the crystallographic c-axis.
M(T)· T is expressed in units of Bohr magneton per Fe (for clarity) and displays a temperature dependence over the whole investigated temperature range (2-300 K). This is typical for systems with a large magnetic anisotropy and reflects changes in the Boltzmann populations of the relevant energy levels with temperature <cit.>. Therefore, the temperature dependence of the magnetization was modeled taking into account explicitly the Boltzmann probabilities of the low-lying magnetic states. These can be best described as an effective J = 7/2 multiplet, split by spin-orbit coupling into four doublets with quantum numbers m_J = ± 7/2, ± 5/2, ± 3/2, ± 1/2 <cit.> (see Fig. <ref> for a schematic of the level scheme). Since each state carries the moment μ(m_J) = -g m_J μ_B, the resulting total magnetic moment of the system (projected along the applied field) is given by equation (<ref>):

μ(T) = ∑_i μ_i p_i = 1/Z ∑_{m_J = -J}^{J} g μ_B m_J exp[-(E_{m_J} + g m_J μ_B B)/(k_B T)],

where Z represents the partition function:

Z = ∑_i p_i = ∑_{m_J = -J}^{J} exp[-(E_{m_J} + g m_J μ_B B)/(k_B T)],

and E_{m_J} equals the energy of |m_J⟩ with respect to the ground state doublet |± 7/2⟩; g = 10/7 is the Landé factor obtained for J = 7/2, L = 2, and S = 3/2 <cit.>. In a first approach, equidistant energy levels separated by Δ E were introduced, in accordance with the almost equidistant levels calculated by quantum chemistry methods <cit.>. The free parameters in the fit are Δ E, the Fe concentration x, and a small, temperature-independent offset. Due to extremely slow relaxation for temperatures below T = 16 K <cit.>, the fit was restricted to the interval [20 K, 300 K]. The solid line in Fig. <ref> represents a fit of equation (<ref>) and shows a remarkable agreement with the measured data. An energy level separation of Δ E = 22.4(2) meV (261 K) was found. The Fe content x = 0.12(1) % obtained from the fit is in excellent agreement with the concentration determined via ICP-OES [x = 0.13(1) %]. The temperature-independent offset converged to a small value close to 0.4 % of the low-temperature magnetization. Note that the Landé factor of g = 10/7 is actually determined for a free ion. Nevertheless, when releasing g in the fit, it converged to a value that deviates by less than 10^-4 from the Landé factor. In accordance with small deviations of the calculated levels from equidistance <cit.>, the parameters a and b (see Fig. <ref> and the schematic inset in Fig. <ref>b,c) were introduced, representing additional energy shifts of the excited states m_J = ± 3/2 and m_J = ± 1/2, respectively. The parameters determined from the fit with a = b = 0 (see above) were fixed, and a and b were varied separately until the calculated curve showed significant deviations from the measured data (the error is dominated by the weighing error). The corresponding curves are shown in Fig. <ref>b,c, where a_-, b_- denote the lower and a_+, b_+ the upper limit of a and b, respectively. The intervals for a = [-1.3 meV, +1.9 meV] and b = [-2.6 meV, +3.9 meV] obtained in this way are slightly asymmetric, indicating a small increase of the level distance with increasing energy, in accordance with the calculation in Ref. <cit.>. As expected, the allowed interval for the shift of the higher excited level b is larger than for the shift a of the lower excited state, reflecting the reduced influence of higher energy levels in the studied temperature range.
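A minimal numerical sketch of equations (1) and (2) is given below: it evaluates the thermally averaged projected moment for the four-doublet level scheme with the fitted splitting Δ E and the free-ion Landé factor, with the shifts a and b set to zero. It is meant to illustrate the model behind the fit, not to reproduce the actual fitting code; the physical constants are standard values expressed in eV units.

```python
import numpy as np

mu_B = 5.7883818060e-5   # Bohr magneton [eV/T]
k_B = 8.617333262e-5     # Boltzmann constant [eV/K]

g = 10.0 / 7.0           # Lande factor for J = 7/2, L = 2, S = 3/2
dE = 22.4e-3             # doublet splitting from the fit [eV]
a, b = 0.0, 0.0          # optional shifts of the |±3/2> and |±1/2> doublets

m_J = np.arange(-3.5, 4.0, 1.0)                  # m_J = -7/2 ... +7/2
# zero-field energies: 0 for ±7/2, dE for ±5/2, 2dE+a for ±3/2, 3dE+b for ±1/2
E0 = {3.5: 0.0, 2.5: dE, 1.5: 2 * dE + a, 0.5: 3 * dE + b}
E_mJ = np.array([E0[abs(m)] for m in m_J])

def moment(T, B):
    """Thermal average of the projected moment, following Eqs. (1)-(2), in mu_B."""
    w = np.exp(-(E_mJ + g * m_J * mu_B * B) / (k_B * T))
    return np.sum(g * m_J * w) / np.sum(w)

for T in (2, 20, 100, 300):
    # magnitude of the projected moment at mu_0 H = 7 T
    print(f"T = {T:3d} K : |mu| = {abs(moment(T, 7.0)):.3f} mu_B per Fe")
```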
§.§ Isothermal magnetization

Figure <ref> shows isothermal magnetization curves of Li_2(Li_1-xFe_x)N at T = 2 K and a sweep rate of μ_0 dH/dt = 0.39 mT/s for samples with different Fe content x. Arrows indicate the direction of the field sweep. The H-field was applied parallel to the crystallographic c-axis. In order to obtain saturation, the samples were field cooled prior to the measurement in μ_0H = 7 T. For better comparison, the curves are normalized to their saturation magnetization at T = 2 K [M_sat = (5.0 ± 0.3) μ_B/Fe]. The isothermal magnetization curves show pronounced steps at several field values, with the largest change in magnetization at H = 0 (Fig. <ref>a,b). Whereas the step at H = 0 shows a decrease in intensity for higher Fe concentrations, all steps in finite fields increase with x. This is also clearly seen in the derivatives shown in Fig. <ref>c,d. Even without invoking the microscopic origin of this behavior, it indicates that the step at H = 0 is caused by isolated, non-interacting Fe atoms, whereas Fe-Fe interactions are at play at the smaller steps in finite fields. As shown in Fig. <ref>, the position of the steps in M(H) can be well explained by the occurrence of identical energy differences between the magnetic states. Instead of energy levels as depicted in Fig. <ref>, the differences between those levels are plotted in Fig. <ref>a as a function of H. Accordingly, the value of 22.4 meV at H = 0 corresponds to the energy splitting between ground state and first excited state, whereas 22.5 meV is the energy difference between first and second excited state, that is, Δ E + a with a = 0.1 meV. Note that this is well within the range of a = [-1.3 meV, +1.9 meV] obtained from the analysis of M(T). As a function of H, several crossings of energy level differences are observed in the field range up to μ_0H = 1 T. Those fit well with the peak positions found in the derivative of M(H), which is plotted in Fig. <ref>b. The first crossing at μ_0H = 0.13 T corresponds to simultaneous transitions | -7/2 ⟩→| +5/2 ⟩ and | -3/2 ⟩→| +5/2 ⟩. Another set of anomalies in M(H) is observed around H ≈ 4 T (not shown). Those are well described by similar crossings of energy differences assuming b = 2.8 meV, which is also well within the allowed range of b = [-2.6 meV, +3.9 meV]. The level scheme depicted in Fig. <ref>a has been calculated based on g = 10/7, level splittings of Δ E = 22.4 meV (plus the shifts a and b), and the values of m_J provided above (Fig. <ref>), which allow for a precise description of M(T). Nevertheless, it must be approached with caution, as the precise crossings are highly dependent on both the size of the magnetic moments and the exact zero-field splittings. More likely than not, the g-factors of the doublet states differ somewhat from the values given above. The value of a = 0.1 meV was chosen such that the preponderance of the data is well described. However, other combinations are also possible, including those that describe the peak in the derivative at μ_0H = 0.5 T. We refrain from presenting these in order to keep the number of correlated parameters low and because of further difficulties that are discussed in the following.
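The resonance condition behind the crossings in Fig. <ref>a can be explored with a few lines of code: every transition energy between Zeeman-shifted levels is linear in the applied field, so the fields at which two such energy differences coincide follow from a simple linear equation. The sketch below does this for the toy level scheme used above (single g-factor, b = 0). As stressed in the text, the actual crossing fields are sensitive to the doublet g-factors and the exact zero-field splittings, so the printed values illustrate the mechanism rather than reproduce the measured step positions.

```python
import numpy as np
from itertools import combinations

mu_B = 5.7883818060e-5           # Bohr magneton [eV/T]
g = 10.0 / 7.0
dE, a, b = 22.4e-3, 0.1e-3, 0.0  # zero-field splitting and shifts [eV]; b set to 0 here

E0 = {3.5: 0.0, 2.5: dE, 1.5: 2 * dE + a, 0.5: 3 * dE + b}
states = [m / 2.0 for m in range(-7, 8, 2)]           # m_J = -7/2 ... +7/2

# A transition |m1> -> |m2> has energy (E0[|m2|] - E0[|m1|]) + g*mu_B*(m2 - m1)*B,
# i.e. a straight line in B. Two transitions are resonant where their lines cross.
transitions = [(m1, m2) for m1 in states for m2 in states if m1 != m2]
crossings = {}
for t1, t2 in combinations(transitions, 2):
    slope = g * mu_B * ((t1[1] - t1[0]) - (t2[1] - t2[0]))
    offset = (E0[abs(t1[1])] - E0[abs(t1[0])]) - (E0[abs(t2[1])] - E0[abs(t2[0])])
    if slope != 0.0:
        Bc = -offset / slope
        if 0.01 < Bc < 1.0:                           # field window of interest [T]
            crossings.setdefault(round(Bc, 3), (t1, t2))

for Bc in sorted(crossings):
    t1, t2 = crossings[Bc]
    print(f"mu_0 H = {Bc:5.3f} T : {t1} resonant with {t2}")
```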
§ DISCUSSION

So far, the data was described in the framework of non-interacting spins. However, a finite coupling (by dipolar or exchange interactions) between the spins is necessary for SSCR to emerge, and suitable Hamiltonians have been set up and diagonalized for SMMs (see for example <cit.>). Here we focus on the experimental findings and refrain from an elaborate theoretical analysis for the following reasons: a) Significant mixing of the eigenstates of the free ion is expected. Even in the simplest picture, the crystal electric field mixes |± 7/2 ⟩ and |± 5/2 ⟩ states in the hexagonal symmetry of the Fe atom <cit.>. b) There is significant mixing between Fe 3d and 4s states <cit.>. c) In contrast to most of the SMMs, the zero-field splitting of Li_2(Li_1-xFe_x)N cannot be described by a uniaxial anisotropy constant that contributes with -D S_z^2 to the Hamiltonian. Instead, a more complex energy level scheme is present <cit.>. d) The simultaneous presence of a large crystal electric field and an unquenched orbital moment <cit.>. Those characteristic properties of Li_2(Li_1-xFe_x)N make it, in the authors' opinion, impossible to take advantage of established schemes to find and diagonalize an effective Hamiltonian. Nevertheless, a preliminary analysis suggests that the major difficulty of SSCR in such a single-ion picture, that is, a negligible occupation of energy levels more than 22 meV above the ground state at T = 2 K, is resolved in a more rigorous treatment. SSCR effects similar to the ones presented here were observed in SMMs, for example in complexes containing clusters of Mn <cit.>, Ni <cit.>, or Fe ions <cit.>. Furthermore, the effect was demonstrated in single crystals of LiYF_4 doped with Ho ions <cit.>. Typically in those reports, the vast majority of steps in M(H) is attributed to resonant tunneling of isolated spins, and SSCR transitions present smaller, additional anomalies. In stark contrast, there is only one single-ion transition in Li_2(Li_1-xFe_x)N but several associated with SSCR. The reason is the significantly larger anisotropy energy of the title compound, which causes the (avoided) level crossing to appear only at large applied fields of several tens of Tesla. An experimental verification seems challenging since the relaxation is slow compared to the pulses at high-magnetic-field facilities <cit.>. Accordingly, the SSCR transitions in Li_2(Li_1-xFe_x)N are more pronounced since, besides the zero-field transition, they provide the only spin-reversal mechanism for μ_0H < 10 T. Similar to our findings, the step size of SSCR transitions in M(H) was found to increase with increasing concentration of magnetic ions, as shown for the SMM [(Pc)_2Ho_0.02Y_0.98]^-TBA^+ when increasing the Ho concentration <cit.>. It remains to be seen whether the transverse field dependence of SSCR in Li_2(Li_1-xFe_x)N is smooth when compared to single atom transitions <cit.>. Finally, we are going to elaborate on the strength of the coupling between the Fe magnetic moments. Since a combinatorial analysis <cit.> is cumbersome for such low Fe concentrations, a numerical approach was chosen by placing Fe atoms on the lattice of Li_3N according to various Fe concentrations x using the NumPy library <cit.> for Python. A 40 × 40 × 40 lattice has been simulated over 1000 times. Since SSCR is a two-particle process between pairs of Fe atoms, we focus on the field created by the nearest neighbor (n.n.). Magnetic dipole fields B_dip were calculated for a fully polarized state with all magnetic moments (μ = 5 μ_B) pointing along the crystallographic c-axis. The results are summarized in Table <ref>. The first value represents the largest possible dipole field that is caused by an Fe atom placed on a nearest neighbor Fe site along the c-axis. In principle, even larger values are possible for chains of Fe atoms. However, those are extremely unlikely to form for x < 0.01.
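A stripped-down version of such a dipole-field estimate is sketched below. It places Fe atoms at random on a simple hexagonal grid with approximate Li_3N lattice constants (literature-style values assumed here, and a simple hexagonal arrangement of the Fe-substituted Li sites is assumed), computes the dipole field of the nearest Fe neighbor for moments of 5 μ_B aligned with the c-axis, and reports mean parallel and transverse components. It is a single small realization without periodic boundaries, so the numbers fluctuate and should not be read as the tabulated results, which average over more than 1000 realizations.

```python
import numpy as np

rng = np.random.default_rng(1)

# approximate hexagonal lattice constants of Li3N (assumed values, in meters)
a, c = 3.65e-10, 3.87e-10
mu = 5.0 * 9.274e-24        # 5 mu_B, all moments polarized along c [J/T]
mu0_4pi = 1.0e-7            # mu_0 / (4 pi)

L, x_Fe = 40, 0.001         # lattice size and Fe concentration

# random occupation of an L x L x L grid of candidate sites
occ = rng.random((L, L, L)) < x_Fe
i, j, k = np.argwhere(occ).T
# Cartesian positions on a simple hexagonal lattice
pos = np.stack([a * (i + 0.5 * j), a * (np.sqrt(3) / 2) * j, c * k], axis=1)

def dipole_field(r):
    """Field of a point dipole mu * z_hat at displacement r, in tesla."""
    d = np.linalg.norm(r)
    m = np.array([0.0, 0.0, mu])
    return mu0_4pi * (3.0 * r * (m @ r) / d**5 - m / d**3)

# field at each Fe site from its nearest Fe neighbour only (no periodic boundaries)
B_par, B_perp = [], []
for n in range(len(pos)):
    d = np.linalg.norm(pos - pos[n], axis=1)
    d[n] = np.inf
    B = dipole_field(pos[np.argmin(d)] - pos[n])
    B_par.append(B[2])
    B_perp.append(np.hypot(B[0], B[1]))

print(f"N_Fe = {len(pos)}")
print(f"<|B_par|>  = {np.mean(np.abs(B_par)) * 1e3:.3f} mT")
print(f"<B_perp>   = {np.mean(B_perp) * 1e3:.3f} mT")
```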
For Fe atoms sitting on the in-plane n.n. site, the largest possible, negative dipole field amounts to -99 mT. Due to the six in-plane n.n., the probability of being occupied is somewhat larger: for x = 0.9 % it amounts to 5.3 %, whereas it drastically reduces to 0.6 % for the smallest Fe concentration of x = 0.1 %. The average field caused by the nearest neighbor for x = 0.1 % is B_dip = 2.1 mT along the c-axis and B_dip = 1.7 mT perpendicular to the c-axis. The standard deviation is significantly larger than the average since B_dip(r) is highly non-linear and the average is dominated by a small number of particularly large values. Nevertheless, those results reveal that a significant number of Fe atoms are subject to magnetic dipolar fields in the range of 10 mT. Larger values for the average B_dip were obtained for x = 0.9 %, which roughly scale with the Fe concentration. Furthermore, the simulation allows us to draw conclusions on the single-atom transition at H ≈ 0. To this end, we calculated the ratio of Fe atoms that are subject to B_dip < 3 mT, which was found to be the threshold for quantum tunneling to appear <cit.>. The ratio decreases from 92(5) % for x = 0.1 % to 56(4) % for x = 0.9 %. This scales well with a decrease of the step size in M(H) by roughly 1/2 (Fig. <ref>c).

§ SUMMARY

This study investigates quantum tunneling of the magnetization in Li_2(Li_1-xFe_x)N, a model system due to the comparatively simple crystal structure, the availability of large single crystals, sharp energy levels, and high characteristic energy scales (with respect to anisotropy energy, relaxation rates, and crossover to the quantum tunneling regime). The temperature-dependent magnetization of dilute Li_2(Li_1-xFe_x)N can be satisfactorily described as a result of the magnetic moment of single, isolated Fe ions. Through detailed measurements of isothermal magnetization, the research uncovers complex spin transitions that deviate from conventional non-interacting spins. Instead, the driving force behind the observed magnetic anomalies is identified as cooperative quantum tunneling of spin pairs, known as spin-spin cross relaxation (SSCR). In particular, it is shown that the observed behavior is not caused by structural defects. We believe that this work represents a further important step in understanding the complex magnetic behavior of the structurally rather simple compound Li_2(Li_1-xFe_x)N and will improve our understanding of spin-reversal processes on a microscopic scale.

§ ACKNOWLEDGMENTS

We thank Alexander Herrnberger and Klaus Wiedenmann for technical support, and Andrea Moos for performing ICP-OES. Helpful comments provided by Thilo Kopp are gratefully acknowledged. This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - JE 748/1.

§ REFERENCES

W. Wernsdorfer, N. Aliaga-Alcalde, D. N. Hendrickson, and G. Christou, Exchange-biased quantum tunnelling in a supramolecular dimer of single-molecule magnets, Nature 416, 406 (2002).
B. Barbara, Quantum tunneling of the collective spins of single-molecule magnets: From early studies to quantum coherence, in Molecular Magnets, edited by J. Bartolomé, F. Luis, and J. F. Fernández (Springer Berlin Heidelberg, 2014), pp. 17-60.
L. Thomas, F. Lionti, R. Ballou, et al., Macroscopic quantum tunnelling of magnetization in a single crystal of nanomagnets, Nature 383, 145 (1996).
B. Barbara, W. Wernsdorfer, L. Sampaio, et al., Mesoscopic quantum tunneling of the magnetization, J. Magn. Magn. Mater. 140, 1825 (1995).
J. R. Friedman, M. P. Sarachik, J. Tejada, and R. Ziolo, Macroscopic Measurement of Resonant Magnetization Tunneling in High-Spin Molecules, Phys. Rev. Lett. 76, 3830 (1996).
M. N. Leuenberger and D. Loss, Quantum computing in molecular magnets, Nature 410, 789 (2001).
S. Bertaina, S. Gambarelli, A. Tkachuk, et al., Rare-earth solid-state qubits, Nature Nanotech. 2, 39 (2007).
A. Candini, G. Lorusso, F. Troiani, et al., Entanglement in supramolecular spin systems of two weakly coupled antiferromagnetic rings (purple-Cr_7Ni), Phys. Rev. Lett. 104, 037203 (2010).
N. Ishikawa, M. Sugita, T. Ishikawa, et al., Lanthanide Double-Decker Complexes Functioning as Magnets at the Single-Molecular Level, J. Am. Chem. Soc. 125, 8694 (2003).
M. S. Alam, V. Dremov, P. Müller, et al., STM/STS Observation of Polyoxoanions on HOPG Surfaces: the Wheel-Shaped [Cu_20Cl(OH)_24(H_2O)_12(P_8W_48O_184)]^25- and the Ball-Shaped [{Sn(CH_3)_2(H_2O)}_24{Sn(CH_3)_2}_12(A-PW_9O_34)_12]^36-, Inorg. Chem. 45, 2866 (2006).
M. A. AlDamen, J. M. Clemente-Juan, E. Coronado, et al., Mononuclear lanthanide single-molecule magnets based on polyoxometalates, J. Am. Chem. Soc. 130, 8874 (2008).
J. M. Zadrozny, D. J. Xiao, M. Atanasov, et al., Magnetic blocking in a linear iron(I) complex, Nat. Chem. 5, 57 (2013).
M. A. Zykin, P. E. Kazin, and M. Jansen, All-inorganic single-ion magnets in ceramic matrices, Chem. Eur. J., doi:10.1002/chem.201905290 (2020).
R. Giraud, W. Wernsdorfer, A. M. Tkachuk, et al., Nuclear Spin Driven Quantum Relaxation in LiY_0.998Ho_0.002F_4, Phys. Rev. Lett. 87, 057203 (2001).
P. E. Kazin, M. A. Zykin, W. Schnelle, et al., Rich diversity of single-ion magnet features in the linear OCu^IIIO^- ion confined in the hexagonal channels of alkaline-earth phosphate apatites, Chem. Commun. 50, 9325 (2014).
A. Jesche, R. W. McCallum, S. Thimmaiah, et al., Giant magnetic anisotropy and tunnelling of the magnetization in Li_2(Li_1-xFe_x)N, Nat. Commun. 5, 3333 (2014).
M. Fix, J. H. Atkinson, P. C. Canfield, et al., Extreme Field Sensitivity of Magnetic Tunneling in Fe-Doped Li_3N, Phys. Rev. Lett. 120, 147202 (2018).
M. S. Huzan, M. Fix, M. Aramini, et al., Single-ion magnetism in the extended solid-state: insights from X-ray absorption and emission spectroscopy, Chem. Sci. 11, 11801 (2020).
S. A. Bräuninger, A. Jesche, S. Kamusella, et al., Magnetic field tuning of low-energy spin dynamics in the single-atomic magnet Li_2(Li_1-xFe_x)N, Phys. Rev. B 102, 054426 (2020).
B. Barbara, Two bodies are better than one, Nature 421, 32 (2003).
L. Xu, Z. Zangeneh, R. Yadav, et al., Spin-reversal energy barriers of 305 K for Fe^2+ d^6 ions with linear ligand coordination, Nanoscale 9, 10596 (2017).
N. Bloembergen, S. Shapiro, P. S. Pershan, and J. O. Artman, Cross-relaxation in spin systems, Phys. Rev. 114, 445 (1959).
W. Wernsdorfer, S. Bhaduri, R. Tiron, et al., Spin-Spin Cross Relaxation in Single-Molecule Magnets, Phys. Rev. Lett. 89, 197201 (2002).
A. Jesche and P. C. Canfield, Single crystal growth from light, volatile and reactive materials using lithium and calcium flux, Philos. Mag. 94, 2372 (2014).
J. Banhart, H. Ebert, J. Voitländer, and H. Winter, Diamagnetic susceptibility of pure metals and binary alloys, J. Magn. Magn. Mater. 61, 221 (1986).
P. Höhn, S. Hoffmann, J. Hunger, et al., β-Ca_3N_2, a Metastable Nitride in the System Ca-N, Chem. Eur. J. 15, 3419 (2009).
J. H. Van Vleck, Quantum mechanics: The key to understanding magnetism, Science 201, 113 (1978).
J. M. Zadrozny, M. Atanasov, A. M. Bryan, et al., Slow magnetization dynamics in a series of two-coordinate iron(II) complexes, Chem. Sci. 4, 125 (2013).
E. Segal and W. E. Wallace, Rare-earth ions in a hexagonal field I, J. Solid State Chem. 2, 347 (1970).
P. Novák and F. R. Wagner, Electronic structure of lithium nitridoferrate: Effects of correlation and spin-orbit coupling, Phys. Rev. B 66, 184434 (2002).
M. S. Huzan, T. G. Burrow, M. Fix, et al., Quantifying the effect of 3d-4s mixing on linearly coordinated metal-ions by L_2,3-edge XAS and XMCD, submitted to J. Am. Chem. Soc.
W. Wernsdorfer, S. Bhaduri, A. Vinslava, and G. Christou, Landau-Zener tunneling in the presence of weak intermolecular interactions in a crystal of Mn_4 Single-Molecule Magnets, Phys. Rev. B 72, 214429 (2005).
W. Wernsdorfer, S. Bhaduri, R. Tiron, et al., Two-body tunnel transitions in a Mn_4 single-molecule magnet, J. Magn. Magn. Mater. 272-276, 1109 (2004).
C. J. Milios, M. Manoli, G. Rajaraman, et al., A Family of [Mn6] Complexes Featuring Tripodal Ligands, Inorg. Chem. 45, 6782 (2006).
E.-C. Yang, W. Wernsdorfer, L. N. Zakharov, et al., Fast Magnetization Tunneling in Tetranickel(II) Single-Molecule Magnets, Inorg. Chem. 45, 529 (2006).
S. Hameury, L. Kayser, R. Pattacini, et al., Synthesis of cubane-type Ni(II) complexes from pyridyl-alcohol ligands; their single-molecule magnet behaviour, Dalton Trans. 42, 5013 (2013).
L. Vergnani, A.-L. Barra, P. Neugebauer, et al., Magnetic bistability of isolated giant-spin centers in a diamagnetic crystalline matrix, Chem. Eur. J. 18, 3390 (2012).
J.-D. Compain, P. Mialane, A. Dolbecq, et al., Iron polyoxometalate single-molecule magnets, Angew. Chem. Int. Ed. 48, 3077 (2009).
A. Cornia, M. Mannini, R. Sessoli, and D. Gatteschi, Propeller-Shaped Fe_4 and Fe_3M Molecular Nanomagnets: A Journey from Crystals to Addressable Single Molecules, Eur. J. Inorg. Chem. 2019, 552 (2019).
R. Giraud, A. M. Tkachuk, and B. Barbara, Quantum dynamics of atomic magnets: Cotunneling and dipolar-biased tunneling, Phys. Rev. Lett. 91, 257204 (2003).
B. Barbara, R. Giraud, W. Wernsdorfer, et al., Evidence for resonant magnetic tunneling of rare-earth ions: from insulating to metallic matrix, J. Magn. Magn. Mater. 272-276, 1024 (2004).
M. Fix, A. Jesche, S. G. Jantz, et al., Ferromagnetism versus slow paramagnetic relaxation in Fe-doped Li_3N, Phys. Rev. B 97, 064419 (2018).
N. Ishikawa, M. Sugita, and W. Wernsdorfer, Nuclear Spin Driven Quantum Tunneling of Magnetization in a New Lanthanide Single-Molecule Magnet: Bis(Phthalocyaninato)holmium Anion, J. Am. Chem. Soc. 127, 3650 (2005).
J. Klatyk, W. Schnelle, F. R. Wagner, et al., Large Orbital Moments and Internal Magnetic Fields in Lithium Nitridoferrate(I), Phys. Rev. Lett. 88, 207202 (2002).
C. R. Harris, K. J. Millman, S. J. van der Walt, et al., Array programming with NumPy, Nature 585, 357 (2020).
http://arxiv.org/abs/2310.18185v1
{ "authors": [ "M. Fix", "A. Jesche" ], "categories": [ "cond-mat.str-el", "quant-ph" ], "primary_category": "cond-mat.str-el", "published": "20231027145942", "title": "Cooperative quantum tunneling of the magnetization in Fe-doped Li$_3$N" }
Debiased population of very young asteroid families
Vokrouhlický et al.

Institute of Astronomy, Charles University, V Holešovičkách 2, CZ-180 00 Prague 8, Czech Republic
[email protected]

Department of Space Studies, Southwest Research Institute, 1050 Walnut St., Suite 300, Boulder, CO 80302, USA

Asteroid families that are less than one million years old offer a unique possibility to investigate recent asteroid disruption events and test ideas about their dynamical evolution. Observations provided by powerful all-sky surveys have led to an enormous increase in the number of detected asteroids over the past decade. When the known populations are well characterized, they can be used to determine asteroid detection probabilities, including those in young families, as a function of their absolute magnitude. We use observations from the Catalina Sky Survey (CSS) to determine the bias-corrected population of small members in four young families down to sizes equivalent to several hundred meters. Using the most recent catalog of known asteroids, we identified members from four young families for which the population has grown appreciably over recent times. A large fraction of these bodies have also been detected by CSS. We used synthetic populations of asteroids, with their magnitude distribution controlled by a small number of parameters, as a template for the bias-corrected model of these families. Applying the known detection probability of the CSS observations, we could adjust these model parameters to match the observed (biased) populations in the young families. In the case of three families, Datura, Adelaide, and Rampo, we find evidence that the magnitude distribution transitions from steep to shallow slopes near 300 to 400 meters. Conversely, the Hobson family population may be represented by a single power-law model. The Lucascavin family has a limited population; no new members have been discovered over the past two decades. We consider a model of parent body rotational fission with the escaping secondary tidally split into two components (thereby providing three members within this family). In support of this idea, we find that no other asteroid with absolute magnitude H ≤ 18.3 accompanies the known three members in the Lucascavin family. A similar result is found for the archetypal asteroid pair Rheinland–Kurpfalz.

Debiased population of very young asteroid families

D. Vokrouhlický^1, D. Nesvorný^2, M. Brož^1, W. F. Bottke^2

Received: January 14, 2024; accepted: October 20, 2023

§ INTRODUCTION

More than a century ago, <cit.> discovered the first examples of statistically significant clusters in the space of asteroid heliocentric orbital elements (using proper values of the semimajor axis, eccentricity, and inclination). Suspecting their mutual relation, he coined the term asteroid families. Hirayama rightly proposed that the families are collections of asteroids related to parent bodies that disrupted sometime in the past. He even identified asteroid collisions as the source of these catastrophic events. Over time, asteroid families became a core element of Solar System small body science.
They provide (i) an important constraint on asteroid collisional models; (ii) a unique tool to study the internal structure of large asteroids, both in terms of their chemical homogeneity and mechanical structure; (iii) an important source of impactor showers, including both large projectiles and dust, onto the surfaces of the terrestrial planets (including the Earth); (iv) an arena for studying a plethora of dynamical processes affecting the orbits and spins of asteroids; and (v) many more <cit.>. In this paper, we explore (ii), namely the capability of asteroid family data to constrain the internal structure of the parent body. Over the past two decades or so, sophisticated numerical approaches have been developed to model energetic asteroidal collisions, the subsequent dispersal, and gravitational re-accumulation of resulting fragments <cit.>. The outcome of these simulations, which may be compared to the information provided by asteroid families, sensitively depends on assumptions about the internal properties of the parent body. One type of dataset includes the size frequency distribution of asteroid members in the family. While determining asteroid family members looks straightforward, it has potential complications. This is because many families extend over non-negligible portions of the asteroid belt. As a result, the proper zone in orbital element space in which the family members are located may contain a certain fraction of unrelated (interloping) asteroids. Methods to estimate the interloper fraction have been developed <cit.>, but their validity is limited and their results are necessarily of a statistical, rather than deterministic, nature. Additionally, progress from powerful and automated surveys over the past decades makes it more difficult to deal with the interloper problem because small asteroids increasingly fill proper element space. Unless we know the size distribution of both the background and the family population, more asteroids mean that there are more interlopers to deal with. Fortunately, the ability of surveys to increase the known asteroid populations has also brought into play a new and interesting niche that allows us to determine the complete (bias-corrected) population of the family members. The fundamental goal of this paper is to exploit this possibility. Our focus here is on a special subclass of asteroid families characterized by extremely young ages, namely those that are ≃ 1 Myr or less. Already the first examples, which were discovered a little less than two decades ago <cit.>, help us understand how to eliminate potential interlopers. Consider that the unusual youth of these families means that five of the six osculating orbital elements are clustered (semimajor axis a, eccentricity e, inclination I, and longitudes of node Ω and perihelion ϖ), rather than the standard three proper orbital elements used for most family work (semimajor axis a_p, eccentricity e_p, and inclination I_p). This immediately has two positive consequences. First, our work can use the simpler osculating elements rather than the less readily available proper elements. Second, the additional two dimensions of the orbital element space in which we searched for these very young families have a diluted spatial density of known asteroids. The very young families show up as distinct, and often isolated, clusters, allowing us to largely circumvent the interloper problem.
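Because five of the six osculating elements are clustered, candidate members of a very young family can be pre-selected by a simple box cut around a reference orbit before any backward-integration check. The following is a minimal sketch of such a cut; the element set, field names, and tolerances are illustrative placeholders, not the search parameters of this work (a concrete nested-box test for the Adelaide family is described in the Discussion).

```python
import numpy as np

def angular_separation(x, y):
    """Smallest separation between two angles given in degrees."""
    d = np.abs(np.asarray(x) - np.asarray(y)) % 360.0
    return np.minimum(d, 360.0 - d)

def candidate_members(elements, reference, half_width):
    """Pre-select candidate family members by a box cut in osculating elements.

    elements   : dict of NumPy arrays keyed by 'a', 'e', 'I', 'Omega', 'varpi'
                 (au, -, deg, deg, deg) for all catalog asteroids
    reference  : dict of scalars with the same keys (e.g., the largest member)
    half_width : dict of box half-widths, e.g.
                 {'a': 0.01, 'e': 0.01, 'I': 0.1, 'Omega': 10.0, 'varpi': 10.0}
    Returns indices of catalog entries falling inside the box.
    """
    inside = (
        (np.abs(elements['a'] - reference['a']) < half_width['a'])
        & (np.abs(elements['e'] - reference['e']) < half_width['e'])
        & (np.abs(elements['I'] - reference['I']) < half_width['I'])
        & (angular_separation(elements['Omega'], reference['Omega']) < half_width['Omega'])
        & (angular_separation(elements['varpi'], reference['varpi']) < half_width['varpi'])
    )
    return np.flatnonzero(inside)
```

Members selected this way would still need confirmation from the past convergence of their orbits, as discussed below.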
Additionally, their recent origin has allowed us to accurately determine each family’s age by propagating the asteroid orbits backward in time and then by observing how the orbits rearrange themselves into a tighter cluster at the epoch of its formation.This procedure has helped to further eliminate interlopers. As far as the population count is concerned, we are then left with the observational bias produced by telescopic limitations (basically the capability of a given instrument to detect asteroids to some apparent magnitude). Here we can compensate for this problem to a degree by using asteroids taken from a well-characterized and sufficiently long-lasting survey. Profiting from our earlier work, in which we developed a new model for the near-Earth asteroid population, we use a careful characterization of the Catalina Sky Survey (1.5-m Mt. Lemmon telescope, G96) observations in between 2016 and 2022. We apply this rich dataset to determine the bias-corrected population offour very young familiesand a few more clusters of interest. [The first attempt of the method has been carried out by <cit.>, who applied it tothe case of Datura family. However, both (i) the precise detection efficiency characterization of theolder set of Catalina survey observations, and (ii) mainly the Datura family known population, weresignificantly smaller than in the present paper.]We first briefly describe the observation set in Sec. <ref>. Next, we introduce the very young families that we are going to analyze in this paper (Sec. <ref>), providing their new identification and full membership in the Appendix. In Sec. <ref> we develop an approach to determine the complete population of the families, based on their biased population and information about the survey detection probability, and we apply it to the selected cases. In Sec. <ref> we discuss the implications of our results and provide some discussion ofpotential future work. § CATALINA SKY SURVEY OBSERVATIONS Catalina Sky Survey [<https://catalina.lpl.arizona.edu/>] (CSS), managed by Steward Observatory of the University of Arizona, has been one of the most prolific survey programs over the past decade <cit.>. While primarily dedicated to the discovery and further tracking of near-Earth objects with the goal to characterize a significant fraction of the population with sizes as small as 140 m, CSS observations represent an invaluable source of information for other studies in planetary science or astronomy in general. Here, we use observations of the CSS 1.5-m survey telescope located at Mt. Lemmon (MPC observatory code G96). Our method builds on the work of <cit.>, who constructed a new model of the near-Earth object population using CSS data. They carried out a detailed analysis of the asteroid detection probabilities for the G96 operations over the period between January 2013 and June 2022. This interval was divided into two phases: (i) observations before May 14, 2016 (phase 1), and (ii) observations after May 31, 2016 (phase 2).The first phase contained 61,585 well-characterized frames, in the form of sequences of four that were typically 30 s exposure images, while the second phase had 162,280 well-characterized frames. The reason for the difference was due to longer timespan of the phase 2 but also an important upgrade of the CSS CCD camera in the second half of May 2016. The new camera had four times the field of view, and better photometric sensitivity, allowing the surveyto cover a much larger latitude region about the ecliptic. 
The superiority of the CSS observations taken during phase 2 allows us to drop the phase 1 data in most of the work below. Only in the case of Lucascavin family and Rheinland-Kurpfalz pair do we combine observations from the two phases into a final result.The final product of interest for our work here is the detection probability p(H) as a function of the absolute magnitude H for asteroids in a chosen family. In principle, pdepends not only on H, but on all orbital elements (in other words, it is specific to a particular body). Members in the youngest known asteroid families to date, however, have their orbit longitudes λ uniformly distributed in between 0^∘ and 360^∘. This is because the characteristic λ dispersal timescale after the family forming event is only about 1-3 kyr; all families which we consider here are at least an order of magnitude older than this value. Conversely, a property ofvery young asteroid families are that they have a tight clustering in the other five orbital elements, including the longitude of node Ω and longitude of perihelion ϖ. As a result, the detection probability p(H) assigned to a given family has been computed using the mean values of osculating orbital elements, except for λ where the individual probabilities have been averaged. [We used 10,000 synthetic orbits characteristic to the family and λ uniformlydistributed in its definition interval to determine the mean value of p(H).] Only in the case of two families –Datura and Rampo– we used the secular angles Ω and ϖ to randomly sample their observed interval of values shown in Figs. <ref> and <ref>. As seen in those figures, and expected from theoretical considerations, theΩ vs. ϖ values are strongly correlated in very young families. We take this correlation into account when computing the mean detection probability p(H). Apart fromp(H), we can also determine a detection rate r(H), namely a statistically mean number of the survey fields of view in which a given family member with an absolute magnitude H should have been detected. While correlated with p(H), r(H) contains additional information and may be thus used as a consistency check in our analysis below. Technical details of the numerical methods that allow us to determine p(H) and r(H) can be found in <cit.>. § VERY YOUNG FAMILIES In this section we introduce four very young asteroid families, namely Datura, Adelaide, Hobson and Rampo, whose known population is large enough that they are suitable candidates for our debiasing efforts. [Obviously, a second criterion of their selection is that CSS detected a significantfraction of known members in these families during its phase 2 operations.] We also consider two additional families, Wasserburg and Martes, that have extremely young ages but whose population is limited. For these examples, we do not perform a full-scale debiasing analysis but instead argue that a large population of small undetected members should exist near the currently known population. Finally, we consider two special cases: the very young asteroid family Lucascavin and the asteroid pair Rheinland–Kurpfalz. Here, our goal is actually opposite to the previous cases. Our working hypothesis is that further smaller fragments in their location might not exist. 
As a result, we use CSS observations to set an upper limit on the size or magnitude of the unseen members to explore whether thishypothesis might be correct.Table <ref> provides a basic overview of the asteroid clusters and pairs that are analyzed in this paper, as well as some notes on the goals we hope to achieve. The identification method used to find the families, and full listing of the family members for each family analyzed in this paper, is provided in the Appendix. In what follows, we provide basic information about the investigated families, with slightly more attention paid to the Datura family.The debiasing procedure to constrain a complete population of members in the above-mentioned families is presented in the next Sec. <ref>. §.§ Very young families with large population of members Datura.– The cluster of asteroids about the largest member (1270) Datura is an archetype of very young families. In this sense it is comparable to the Karin family, which is an excellent example of a sizable young family having an age less than ≃ 10 Myr but secular angles distributed uniformly in the 0^∘ to 360^∘ interval. Not only is the Datura family the first example discovered in the very young family class <cit.>, but its location in the inner part of the main belt allowed us to readily collect the physical parameters of the largest members and study the role of the very young families in a broader context of planetary science<cit.>. The number of known members in the Datura family has also grown quickly from only 7 in 2006 to 17 in 2017. Here we make use of the accelerating pace with which asteroids have been discovered during recent years and report a currently known Datura population of N_ obs=91 members (possibly even 94 members, see Table <ref>). Importantly, N_ CSS=60 of them has been also detected by CSS during its phase 2 operations. We note that <cit.> already attempted to use CSS observations for their Datura population debiasing efforts. Our current work, however, surpasses the detail and accuracy of this earlier work. <cit.> could use only the 13 largest members in the Datura family detected by CSS between 2005 and 2012. Thanks to the camera update by CSS in 2016, the six years of CSS operations between 2016 and 2022 has led to a much larger Datura population and an improved characterization of family member detection probabilities.Before we turn our attention to the magnitude distribution of the Datura members, we use this family to exemplify some common features of very young clusters. They help to justify membership of given asteroids within the family, even without a further substantiation via a detailed study of their past orbital convergence using numerical integrations (see a brief discussion of this issue in the Appendix). A correlation between the osculating values of the secular angles, namely longitude of node Ω and longitude of perihelion ϖ, is a characteristic property of several very young families (unless the family is extremely young, such that Ω and ϖ are clustered within a degree or so, basically corresponding to their initial dispersal). Denoting ΔΩ and Δϖ as the angular difference with respect to the largest body in the family, the initial phase of the dispersal process is described by a linear approximation. 
Thus at time T, one has ΔΩ(T)≃ CT +O(T^2) and a similar equation for thelongitude of perihelion, with C=(∂ s/ ∂ a) Δ a, where s is the proper nodal frequency and Δ a is the difference in semimajor axis with respect to the largest body produced by the initial velocity ejection. The smallest observed fragments in Datura have Δ a≃ 2× 10^-3 au, corresponding to their ejection by ≃ 10 m s^-1 (only slightlylarger than the escape velocity from the parent body of the family). Together with (∂ s/∂ a)≃ 40 arcsec yr^-1 au^-1, we can estimate their angular difference ΔΩ≃ 11^∘ in T≃ 500 kyr (see Fig. <ref>). A similar analysis for Δϖ results in about half this value. Given that in both ΔΩ and Δϖ the nonlinear terms in time T are still very small <cit.>, they are strongly correlated with a slope -0.5. The early dispersal phase of very young families is characterized by additional correlations between the osculating elements, namely (i) the eccentricity and longitude of perihelion, and (ii) the inclination and the longitude of node (see, e.g., data in the Tables given in the Appendix). As mentioned above, these extra correlations between osculating orbital elements help to strengthen justification of the membership in the family.The cumulative magnitude distribution N(<H) of Datura family members is shown inFig. <ref>. The magnitudes H for the six lowest-numbered members were determined accurately using calibrated observations, and expressed at the mid-value of the lightcurve, by <cit.> and <cit.>. The magnitudes for other Datura members were taken from the MPC catalog. We show both the distribution of all known members (black symbols), and highlight also the sample of 60 members which have been detected by CSS (red symbols). Data for these asteroids may be used for debiasing of the Datura family population, since only for them we have the detection probability well characterized. The bottom panelof Fig. <ref> shows detection probability p(H) of Datura members as a function of their absolute magnitude. As explained above, this is a result based on an analysis of 10,000 synthetic orbits in theDatura family zonewhich makes p(H) very smooth. At the first sight, itmight be surprising that p≃ 1 up to magnitude 18, signaling that the population of thefamily members is complete up to that limit. This inference, however, is correct and a result of (i) a six yearsurvey , (ii) the small value of Datura-like orbital inclinations, such that CSS fields-of-view did not miss an opportunity to detect the asteroids in the Datura family, and (iii) a typical 50% photometric detection limit of CSS in between 20.5 and 21.5apparent magnitude (in the visual bands). Neglecting a small phase-angle correction in the Pogson's relation between absolute H and apparent m magnitudes, we have H≃ m-5 log(r Δ), where r and Δ are heliocentric and geocentric distances of the asteroid. At opposition, and near aphelion to cover the worst case situation, we have r≃ 2.7 au and Δ≃ 1.7 au. As a result, the limiting magnitude m≃ 21 translates to H≃ 17.8. During the 6 yr period of CSS phase 2, the configuration eventually becomes favorable to detection, explaining the completion limit at H≃ 18 magnitude. At the opposite end of things, the probability p has a tail to nearly 21 magnitude. This means CSS with its best nightly limits near the apparent 22 magnitude have a chance to detect small Datura members when they happento be near the perihelion of their orbit at opposition. In order to verify that the detection probability p(H) shown in Fig. 
<ref> is reasonable, we also determined the expected mean rate r(H) of CSS phase 2 detectionsand compared it with the actual number of detections of all 60 identified Datura members. This result is presented in Fig. <ref>.The largest body (1270) Datura was found to bedetected 18 times, and even members up to magnitude H=18 were typically detected more than 10 times. This is a good verification ofpopulation completeness. Only after that limit does the number of detections decrease,with no Datura member having H>20 magnitude detected. This outcomecorresponds to the inferred detection probability: p<0.1 for H>20. The mean value of the actual Datura-member detections computed using a running window of consecutive 7 asteroids is shown by the blue curve. The scatter of the number of detections about the predicted red line is not surprising because the latter has been computed as a mean value from 10,000 synthetic Datura members. The important point is that the blue curve, though computed as a mean over a much smaller number of cases (additionally having different H values), reasonably follows the predicted mean rate. This points to consistency in evaluation of the detection probability too. Adelaide.– The cluster of five small objects about the inner main belt asteroid (525) Adelaide was first reported by <cit.>. Apart from an approximate age of 500 kyr, fewdetails were given in this paper. <cit.>, while trying to search for secondary subclusters in the very young asteroid families, analyzed the Adelaide family and identified 19 of its members. The case was finally revisited by <cit.>, who noted a significant population increase to about 50 small asteroids in this family. They confirmed the earlier age estimate and considered a possibility of a causal link between formation of the Datura andAdelaide families (which they rejected). <cit.> identified already 72 members, and our current census of the Adelaide family population reveals N_ obs=79 members, yet another important increase. The population increase rate of the Adelaide family is among the largest of the very young families. A fortunate circumstance for our analysis is that N_ CSS=63 of the members were also detected by CSS in its phase 2.Figure <ref> shows the osculating secular angles Ω and ϖ of the Adelaide family population from Table <ref> in the Appendix. The Ω vs. ϖcorrelation is weaker than that of the Datura-family members (Fig. <ref>). <cit.>, while analyzing behavior of the backward propagated orbits in the Adelaide family, noted a weak chaotic signature triggered by a conjoint effect of weak mean-motion resonances and distant encounters with Mars. We suspect they are also the origin of the observed scatter in the correlation between the secular angles seen in Fig. <ref>. Nevertheless, the orbits show a high degree of clustering even in the subspace of the secular angles, which in effect strengthens their membership in the family.The cumulative magnitude distribution of the Adelaide members is shown in Fig. <ref>. Its extreme behavior has been already noted by <cit.>: (i) the largest remnant (525) Adelaide is separated from other members in the family by an unusually large gap of 6 magnitudes in the absolute magnitude H scale, and (ii) the small fragment population has an extremely steep H-distribution in between 18 and 19 (the local power-law N(<H)∝ 10^γ H approximation requires γ≃ 2 or even larger). This shape is characteristic of a cratering event on (525) Adelaide. The bottom panel in Fig. 
<ref> shows the mean detection probability p(H) of the Adelaide members.The range in which p(H) drops from one to zero, namely H≃ 18.4 to H≃ 20.4 magnitudes, is narrower than in the case of the Datura family (with the completion limit at even higher magnitude). This is readily explained by a smaller eccentricity of Adelaide-like orbits for nearly the same value of semimajor axis and inclination. To further check our results, we also compared the number of CSS phase 2 detections of the 63 Adelaide members and their mean computed rate r(H) (Fig. <ref>). The largestasteroid (525) Adelaide has been detected 8 times, which conforms –within fluctuation– to the predicted rate of about 13. We note the decrease of r(H) for objects brighter than magnitude 13. This phenomenon in the CSS observations has to do with the saturation of the signal for bright objects, as they can become confused with stationary sources hiding their sky-plane motion. Such a configuration may occasionally happen when (525) Adelaide is at opposition near perihelion of its orbit. Small members then sample the tail of r(H) values with only few detections predicted. The running mean ofdetections (blue curve) appears to follow the predicted r(H) dependence reasonably well. Hobson.– <cit.> identified a small cluster of asteroids associated with the largest member (57738) 2001 UZ160 and set an upper age of 500 kyr for its formation event. They also noted a nearby asteroid (18777) Hobson, but were unsure about its relation to the cluster, mainly because Hobson and 2001 UZ160 have similar sizes, which they considered unusual for the outcome of a collisional fragmentation of the parent body. <cit.> then revisited the situation and proved that Hobson was associated with the cluster.They derived an age for the family of 365± 67 kyr.By 2018, their Hobson population consisted of nine members, which shortlyimproved to 11 by the work of <cit.>. These latter authors also rejected the possibility of the Hobson family formation by rotation fission, and conducted valuable photometric observations of the two largest members Hobson and 2001 UZ160. The two similar-size largest remnants also intrigued <cit.>, who revisited the nature of the parent object of this family (counting already 45 Hobson members, and <cit.> reported another increase to 51 members). Using the SPH/N-body formation simulation, their results implied a very special impact and target combination was required. As a novel idea, they also argued the Hobson family may result from collisional fragmentation of a component in a parent binary. In this work we report N_ obs=60 members (likely even one more, see Table <ref>), out of which N_ CSS=33 were detected during the phase 2 of CSS operations. The dispersion of the secular angles within about two degrees is a consequence of the very young age of the Hobson family. We thus turn our attention directly to the cumulative magnitude distribution of its members shown on Fig. <ref>. The two largest asteroids –(18777) Hobson and (57738) 2001 UZ160– are its most outstanding feature. Their orbital convergence has been independently verified by <cit.> and <cit.>, while <cit.> determined the identical values of theV-R color index (compliant with the S-type taxonomy). As a result,their membership to the cluster appears to be solid. The bottom panel on Fig. <ref> shows the mean detection probability p(H) determined for the CSS phase 2 operations. 
It appears similar to that of the Datura family except for a shift of about one magnitude towards smaller H, which implies completion down to H≃ 17.1 magnitude. This result is easily understood by a comparison with the Datura family; the Hobson family has similar eccentricity and inclination values, but larger semimajor axis values. The Hobson family resides in the central part of the main asteroid belt next to the J3/1 mean motion resonance with Jupiter. The inferred mean rate of detections r(H) for phase 2 of CSS matches, within the statistical fluctuations, the actual number of detections of Hobson members (Fig. <ref>). The brightest two asteroids stand out with more than 15 detections, while members in the small-size tail typically have fewer than five detections.

Rampo.– The core of this family, namely two small asteroids tightly clustered about (10321) Rampo, was found by <cit.>. Focusing on asteroid pairs, these authors reported a probable age between 0.5 and 1.1 Myr. About a decade later, <cit.> discovered another four small members in this family and used backward orbital integration to assess a more accurate age of 780^+130_-90 kyr. Finally, <cit.> revisited the Rampo family population and identified 36 small members around the largest remnant (10321) Rampo. Here we find the Rampo family population has increased to N_obs=42 (possibly even 44, see Table <ref> in the Appendix); N_CSS=26 of them were detected during CSS phase 2. The correlation of the secular angles Ω and ϖ, shown in Fig. <ref>, is exemplary among the very young families. The family must still be in the dispersion regime that is linear with time (i.e., the same regime discussed for the Datura family). Similarly to the Datura case, the orbits of Rampo family members exhibit strong correlations in the pairs of orbital elements e vs. ϖ and I vs. Ω, providing us with a useful justification for their family membership. The cumulative magnitude distribution of Rampo family members shares some similarities with the Datura cluster; compare Figs. <ref> and <ref>. The small differences consist of: (i) a larger magnitude gap ΔH between the largest member and the second largest member (ΔH≃ 3.8 for Datura and ΔH≃ 3.2 for Rampo), and (ii) a larger size of (1270) Datura over (10321) Rampo <cit.>. Similar to Datura, the former feature suggests that the family may have been formed by a large cratering event, though more work on this issue is required <cit.>. The Rampo members have a detection probability p(H), computed for phase 2 of CSS, that transitions from one at H≃ 18 to zero at H≃ 20. This sharp transition is due to their small eccentricities. The completion limit is similar for both families because their aphelion distances are comparable (on the other hand, the perihelion distance is smaller for Datura orbits and thus its p(H) extends to larger absolute magnitudes). As in all cases discussed in this paper, the number of CSS phase 2 detections of Rampo family members nicely follows the predicted rate r(H) (Fig. <ref>).

§.§ Extremely young asteroid families with a small number of known members

Wasserburg.– A very tight asteroid pair of two Hungaria objects, (4765) Wasserburg and 2001 XO105, was reported by <cit.>. <cit.>, analyzing the formation process of asteroid pairs, included this couple in their sample and reported an approximate age larger than 90 kyr.
<cit.>, compiling the most detailed study of the asteroid pair population, noted a small asteroid, 2016 GL253, accompanying the pair on a very close orbit and suggested the trio of asteroids may be the large-end tip of a very young family in the Hungaria population. <cit.> confirmed the trend, detecting six members in what they called the Wasserburg family. Here we find two more members in the family, completing the count at N_obs=8. Interestingly, all of them were also detected during phase 2 of the CSS operations, hence N_CSS=8. The cumulative magnitude distribution of the presently known members of the Wasserburg family is shown in Fig. <ref>. The bottom panel of the same figure provides the detection probability p(H) during CSS phase 2 operations. The completion limit is near H≃ 18.5 magnitude, impressively large in spite of the high inclination of the Wasserburg family orbits (being part of the Hungaria zone). Some of these orbits may be missed by the fields-of-view of CSS. The situation improved after July 2016, however, with the wide-field camera reaching well beyond the ± 30^∘ zone around the ecliptic. So the geometric losses are small, and the heliocentric proximity of the Hungaria region helped to detect even small asteroids. Indeed, the six smallest members in the Wasserburg family have an absolute magnitude near or even above the H=19 limit. As mentioned in the preamble of this Section, the small number of identified members in this family does not permit a full-scale debiasing effort. Accordingly, we only conducted the simplest estimate to characterize the complete Wasserburg population using the following steps:
* We considered the observed (biased) population of the family members and sorted their absolute magnitude values {H_i}, with i=1,…, N_obs, from the smallest to the largest value;
* By definition, the observed population increases by one when shifting along the list according to the ordered H-values; we assume the largest member in the family is bright enough such that p(H_1)=1;
* The simplest estimate of the complete population is then obtained by again moving along the vector {H_i} of ordered absolute magnitudes, but now incrementing the population by 1/p(H_i) instead of one.
The result is shown by the blue curve in the top panel of Fig. <ref>. Since even the smallest Wasserburg fragment has p(H_8)≃ 0.71 (in other words, detection of even the smallest known fragments is expected), the complete population does not deviate too much from the observed population. Up to that point the cumulative magnitude distribution is very steep, locally approximated by a power law with an exponent of γ≃ 1.4. This value is only slightly shallower than that observed in the case of the Adelaide family. From that similarity, we may tentatively conclude that the Wasserburg family resulted from a huge cratering event on (4765) Wasserburg itself, though again there are many additional possibilities <cit.>. However, an outstanding puzzle here is to explain why the current surveys have yet to detect any smaller fragments. The reason is the inferred steepness of the magnitude distribution, combined with the non-negligible detection probability p(H_8) mentioned above. In other words, a fair number of the subsequent members in the Wasserburg family should have a detection probability ≃ 0.5, yet none have been detected. Does this mean that the magnitude distribution beyond the detected population suddenly becomes shallow? The answer to this question is left for future analysis.
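For concreteness, the three-step estimator above amounts to a cumulative sum of inverse detection probabilities over the sorted magnitudes. The sketch below assumes the detection probability is available as a callable; the commented example values are invented and are not the Wasserburg data.

```python
import numpy as np

def debiased_cumulative_count(H_obs, p_of_H):
    """Simplest bias correction of a family population: sort the observed
    absolute magnitudes and increment the cumulative count by 1/p(H_i)
    instead of one for each member.

    H_obs  : observed absolute magnitudes of the family members
    p_of_H : callable returning the detection probability at a given H
    Returns the sorted magnitudes and the estimated cumulative N(<H).
    """
    H_sorted = np.sort(np.asarray(H_obs, dtype=float))
    weights = 1.0 / np.array([p_of_H(H) for H in H_sorted])
    return H_sorted, np.cumsum(weights)

# Illustrative call with made-up numbers:
# H, N_est = debiased_cumulative_count(
#     [13.6, 18.7, 18.9, 19.0, 19.1, 19.2, 19.3, 19.4],
#     lambda H: min(1.0, max(1e-3, 1.0 - 0.5 * (H - 18.5))))
```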
Martes.– <cit.> mentioned (5026) Martes and 2005 WW113 among their list of tight asteroid pairs. As also noted by these authors, some of these pairs were expected to be the two largest members in a collisionally born asteroid family (e.g., the Wasserburg family). Recently, <cit.> reported a third member in the tight orbital region about Martes, namely 2010 TB155, while our census in this paper increases the number by three more small objects, with N_obs=6 (Table <ref>); the last three asteroids associated with the Martes cluster were discovered in Autumn 2022. [All three of them were pre-covered on CCD images taken by Pan-STARRS in 2011, and also detected in 2014 by the 4-m Victor M. Blanco telescope on Cerro Tololo, using the Dark Energy Camera, which can reach much fainter objects than the 1.5-m G96 telescope.] Only the largest three members in the Martes family were detected by CSS, such that N_CSS=3. The Martes cluster is part of the much larger Erigone family, whose age has been estimated at ≃ 280 Myr <cit.> or 130± 30 Myr by <cit.>. This association is justified by the objects having the same spectral taxonomic type Ch as the Erigone family and (5026) Martes <cit.>. The extremely clustered orbital elements of the Martes members suggest an unusually young age for the family. Indeed, <cit.> found 18± 1 kyr, a slight improvement on the result of <cit.>. We find that the orbits of the smallest three members may also converge to this time window, further justifying the Martes age, but a detailed analysis would need to consider the thermal accelerations in the simulation. We leave this effort to a separate study, but conclude here that the Martes family has the youngest currently known age. Figure <ref> shows the absolute magnitude distribution of the Martes family members. Admittedly, this distribution is an incomplete portion of the family population, and for that reason we do not attempt a serious debiasing effort. We only note the behavior of the detection probability p(H) determined for the phase 2 operations of CSS (bottom panel of Fig. <ref>). Martes-family orbits have the largest eccentricity among our sample, and this produces the largest stretch of H values in which p(H) decreases from 1 to 0. At magnitude H≃ 20 we have p≃ 0.1. Taken at face value, we would infer a large population of small members in the Martes family, such that each of the three may in fact represent ≃ 1/p≃ 10 asteroids. This logic might be flawed, however, because the three small members were not detected during CSS phase 2. Strictly speaking, we should not use them to infer anything about the Martes family magnitude distribution. Nevertheless, we believe our inferences may be close to reality. This is because all three smallest asteroids in the Martes family were detected by G96/CSS in September 2022. This time period is technically outside the phase 2 interval, but only by a small amount. It also shows the capability of G96 to detect them. The size of the Martes population at H≃ 20 is left for future work.

§.§ Starving young asteroid families with only three known members and asteroid pairs

Lucascavin.– This very tight cluster of three asteroids was discovered by <cit.>, who also estimated its age at 300-800 kyr (the large uncertainty is due to the small size of the two small members –see Table <ref>– and the unconstrained magnitude of the thermal accelerations in their orbit). A decade later, <cit.> found the three original members were still the only ones in this cluster.
They alsocalculated its age to be between 500-1000 kyr using a different method. Assuming the population is complete, these authors also argued that the estimated sizes of the Lucascavin members, and the ≃ 5.79 hr rotation period, might be enough forrotation fission of the parent object to explain their origin <cit.>. The difference with respect to the population of pairs is that theassumption that the secondary, escaping from the primary after the fission event, would split into two components (namely the two small members (180255) 2003 VM9 and (209570) 2004 XL40). This possibility was theoretically predicted by <cit.>. If, however, numeroussmaller fragments are found in the Lucascavin family, this scenario would become less plausible. Therefore, unlikeour study of other clusters in this paper, the goal of our analysis here is to “disprove” the existence of further fragments in the family. Obviously, we cannot meet this goal in an absolute manner, but we can set a lower limit on the absolute magnitude of a putative companion (or, in other words, an upper limit on its size).Moving towards that goal, we note that all three known members in the Lucascavin family were detected during both phases 1 and 2 of the CSS operations (in our notation, we thus have N_ obs=3 and N_ CSS=3). [The smaller members, (180255) 2003 VM9 and (209570) 2004 XL40, were detectedonly 1 and 4 times during the phase 1, though.] In order to use as much information as possible, we have combined data from both phases of the CSS operations. Given their different performance, we consider both phases as independent (and uncorrelated) sources of information. Denoting then the detection probability during the phase 1 by p_1, and similarly the detection probability during the phase 2 by p_2, the combined total detection probability p during both phases isp = 1- (1-p_1)(1-p_2).Note that we first characterized the non-detection during both phases (the second term), and then take the complement to unity, which expresses detection in at least one of the CSS phases. Results are shown in Fig. <ref>.We first briefly comment on the behavior of p_1 and p_2 (the blue and red curves). The interesting, and at the first sight puzzling, feature of p_1 is that it does not reach a value of 1 even for rather bright objects (its maximum value is only about 0.9). This is not a mistake, but the result of the Lucascavin cluster’s semimajor axis. The synodic period of its motion with respect to an Earth observer is in an approximate 7:5 resonance over a year. As a result, for a survey spanning only a short period of time (such as little more than 3 yr of our CSS phase 1 operations), it may happen thatLucascavin objects with certain values of mean longitude in orbit λ never occur in the field-of-view (reasonable solar elongations on the night sky). Since this is a purely geometrical effect, it affects the detection probability of even very bright objects<cit.>. As the duration of the survey extends, this effect minimizes and even disappears. As a result, p_2 in the 6 yr interval ofCSS phase 2 (red curve) does not suffer from this problem. The overall detection probability p_1 is smaller than p_2, but both reach p_1≃ p_2≃ 0 at similar H≃ 20.5. This outcome is because the apparent magnitude detection limit is similar for both phases.Following the trend of the black curve of Fig. <ref>, p(H), we note that p(H)≃ 1 up to H≃ 18.3. Therefore the Lucascavin population is complete to this magnitude limit. 
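Treating the two CSS phases as independent, the combination above and the resulting completeness limit take only a few lines; this is a sketch, and the probability threshold defining "complete" is an illustrative choice.

```python
import numpy as np

def combined_detection_probability(p1, p2):
    """Combine two independent survey phases: p = 1 - (1 - p1)(1 - p2)."""
    return 1.0 - (1.0 - np.asarray(p1)) * (1.0 - np.asarray(p2))

def completeness_limit(H, p, threshold=0.99):
    """Faintest absolute magnitude up to which the detection probability
    stays at or above `threshold` (p is assumed to decline with H overall)."""
    H = np.asarray(H)
    bright_enough = H[np.asarray(p) >= threshold]
    return bright_enough.max() if bright_enough.size else None
```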
This calculation is a conservative estimate because observations of other surveys may push this limit to higher values. The limit is about one magnitude larger than that of the two small members in the Lucascavin family (≃ 17.25). Our result may be therefore interpreted in two ways: either (i) it sets a constraint on Lucascavin family magnitude distribution, or (ii) it starts tracing the population void beyond the known set. The former case would imply at least a magnitude gap between the third and the fourth largest members in the family (this is not impossible, see, e.g., Fig. <ref>). The latter case may support the idea that the Lucascavin family formed by rotation fission, with the secondary disrupting into two pieces.Rheinland and Kurpfalz.– The pair of asteroids composed of a primary (6070) Rheinland and a secondary (54827) Kurpfalz is the best studied archetype in its class. This is because the two asteroids are rather large, namely the D_1≃ 4.4± 0.6 km size primary and the D_2≃ 2.2± 0.3 km size secondary (absolute magnitudes H_1=14.17± 0.07 and H_2=15.69± 0.04), and reside in the inner part of the asteroid belt. Their discoveries in 1991 and 2001, and prediscovery data extending to 1950 and 1991, imply a wealth of astrometric observations allowing accurate orbit determination. This has been noticed already by <cit.>, who used this pair to demonstrate they could reach full convergence in Cartesian space of the two orbits in the past. From this result, they determined the pairhad an age of ≃ 17 kyr. Later, <cit.> and <cit.> conducted photometric observations of both asteroids with the goal to determine their rotation state, including pole orientation, and shape model. Intriguingly, the spin orientation at the likely moment of their formation has not been found to beparallel for the two components, but instead is slightly tilted by about 38^∘. The well confined spin state for both components in this pair allowed themto pin down the formation epoch to 16.34± 0.04 kyr <cit.>. An interesting clue about the formation process, fission of a critically rotating parent body <cit.>, is also provided by spectroscopic observations of Rheinland and Kurpfalz: while the first has been found a typical S-class object, the taxonomy of the latter is either Sq- or even Q-class <cit.>.Similarly to the case of the Lucascavin family, we aim to determine the magnitude limit for nonexistence of a putative companion fragment following Rheinland and Kurpfalz on their heliocentric orbit. Since both Rheinland and Kurpfalz were detected during CSS phases 1 and 2, we may again combine detection probabilities p_1 and p_2 to obtain the total probability p according to the formula (<ref>). Results are shown in Fig. <ref>. In this case, p_1 is comfortably close to unity even for the fainter component (54827) Kurpfalz. [In fact, (6070) Rheinland has been detected 8 and 26 times during the phases 1 and2, while (54827) Kurpfalz has been detected 8 and 21 times the phases 1 and2.] However, p_1 starts dropping to zero right after H_2 of the secondary, such thatlimited useful information would have been reached if we only had thephase 1 data. Luckily, the power of the CSS phase 2 observations make extending the final detection probability p for the orbits in this pair to unity, even near H≃ 18. We may thus conclude that the available observations rule out a companion fragment of this pair to this limit, which is Δ H≃ 2.3 larger than H_2 of the secondary. 
Assuming the same albedo, the hypothetical companion –if it exists– must have a size smaller than ≃ 10^{-0.2ΔH} D_2 ≃ 0.8 km.

§ RESULTS

We now proceed towards a more advanced debiasing method than the one previously used in the case of the Wasserburg family. The four families introduced in Sec. <ref> with a large enough known population of members –Datura, Adelaide, Rampo and Hobson– will serve as our testbed cases. The method, in essence similar to what has been used by <cit.>, goes as follows:
* First, we consider the CSS phase 2 detected sample {H_i^o} (i=1,…,N_CSS) of the family asteroids and select a certain member H_j^o for which p(H_j^o)≃ 1 (we call it a “branching point”). We assume that the population is complete up to the absolute magnitude of that member and becomes incomplete for magnitudes larger than H_j^o. The cumulative magnitude distribution is therefore represented by the observed population up to H_j^o, where it has N_1 members, and then continued with a synthetic (model) population as described below. We also denote by N_CSS' (≤ N_CSS) the number of family members with magnitudes ≥ H_j^o detected during CSS phase 2.
* Second, we generate the total synthetic population of family members {H_i^s} with absolute magnitudes between H_1=H_1^s=H_j^o and a value H_2 sufficiently larger than H^o_{N_CSS}, drawn from the tested magnitude distribution function (we use the sequence of M models described below and always order the magnitude sequence from the smallest to the largest value).
* Third, we use the detection probability p(H) of the CSS observations to transform the total synthetic population into the biased synthetic population {H_i^b}, such that each of the {H_i^s} is tested for its detectability. In practice, for each H_i^s we evaluate p(H_i^s) and compare it to a uniformly distributed random number r between 0 and 1: (i) if r ≤ p(H_i^s), the asteroid is deemed detected and we record H_i^s in the {H_i^b} sequence, and (ii) if r > p(H_i^s), the asteroid is deemed not detected and we proceed to the next H_i^s value.
* Fourth, we evaluate a chi-square type target function χ^2 = ∑_{i=1}^{N_CSS'} [(H_i^b - H_{j+i-1}^o)/σ_i]^2, comparing the modeled, biased magnitude distribution to the set {H_i^o} detected by CSS beyond the branching magnitude H_j^o.
For the sake of simplicity, we (i) use σ_i=0.1 magnitude for all bodies, and (ii) adopt Gaussian statistics to judge the goodness-of-fit and set confidence limits on the adjusted parameters of the model needed to construct the complete (unbiased) synthetic population {H_i^s}. As for the synthetic population, we use the following sequence of power-law models:
* Model M1 – a straight single-slope power law N(<H) ∝ 10^{γH} with one adjustable parameter γ (the absolute normalization for all M-models is set by the number N_1=N(<H_1) of family asteroids at H_1, because we make sure that the population is complete to that limit);
* Model M2 – a broken power-law model with one adjustable break-point at H_break (H_1 ≤ H_break ≤ H_2) and two adjustable slope exponents γ_1 and γ_2 for H values in the intervals (H_1,H_break) and (H_break,H_2), respectively;
* Model M3 – a broken power-law model with two adjustable break-points at H_break,1 and H_break,2 (H_1 ≤ H_break,1 < H_break,2 ≤ H_2) and three adjustable slope exponents γ_1, γ_2 and γ_3 for H values in the intervals (H_1,H_break,1), (H_break,1,H_break,2) and (H_break,2,H_2), respectively;
and similarly for the Mi model with 2i-1 parameters (i-1 break-points and i slopes for the intermediate intervals of H). In practice, we limit ourselves to M3 at maximum in this paper. Denote by p the set of model parameters (e.g., p=(H_break,γ_1,γ_2) for the M2 model). Since χ^2=χ^2(p) in (<ref>), the usual goal is to minimize its value by selecting the best-fit parameter choice p_⋆. We use a simple Monte Carlo sampling of the p space to find these values and to map the χ^2 behavior within some zone about the minimum value χ^2_min=χ^2(p_⋆). The confidence limits on p are found by choosing a certain domain with a threshold χ^2=χ^2_min+Δχ^2. For instance, the 99% confidence limit in one, three and five parametric degrees of freedom in the M1, M2 and M3 models corresponds to Δχ^2=6.63, 11.3 and 15.1, respectively <cit.>. Similarly, the goodness-of-fit is judged from the χ^2_min value using the incomplete gamma function, as discussed in <cit.>.

Datura.– Considering the data in Fig. <ref> we chose j=9 in the case of the Datura family, namely taking the absolute magnitude H_j^o=18.09 of the ninth family member as the branching point (i.e., N_CSS'=52 in this case). We will test the M1 and M2 models. [Results discussed in this section do not include (429988) 2013 PZ36 among the family members. However, our tests showed that they are robust. By including this body, we observe only a statistically insignificant change of the solution, the largest being for the γ_1 = 0.70^+0.15_-0.09 parameter of the M2 model.] In the former case, we find γ = 0.70^+0.03_-0.02 (99% confidence level), with the best-fit solution having χ^2_min=13.36. In the latter case, we find H_break=19.13^+0.37_-0.48, γ_1=0.75^+0.15_-0.09, and γ_2=0.31^+0.30_-0.25 (99% confidence level), with the best-fit solution having a significantly improved χ^2_min=3.85 (the improvement for the M3 model is already statistically insignificant). The best-fit solutions of both models are shown in Fig. <ref>. While the minimum χ^2_min values are both formally justifiable using the Q-function measure <cit.>, the M1 model performs considerably worse beyond H≃ 19.5. This is because continuing the steep power-law distribution required by the magnitude distribution of the Datura members between H = 18 and 19 would keep pushing the detectable population high (given the only slow decay of the detection probability p(H) in Fig. <ref>).
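Before turning to how a break-point remedies this mismatch, the forward model behind these fits (steps two to four of the procedure, here for the M2 broken power law) can be sketched as follows; the binning scheme and function names are illustrative, not the actual implementation behind the quoted numbers.

```python
import numpy as np

rng = np.random.default_rng(1270)

def model_cumulative(H, N1, H1, H_break, g1, g2):
    """Broken power-law cumulative count N(<H): slope g1 on (H1, H_break),
    slope g2 beyond, normalized to N(<H1) = N1 and continuous at H_break."""
    H = np.asarray(H, dtype=float)
    below = N1 * 10.0 ** (g1 * (H - H1))
    above = N1 * 10.0 ** (g1 * (H_break - H1) + g2 * (H - H_break))
    return np.where(H <= H_break, below, above)

def synthetic_population(N1, H1, H2, H_break, g1, g2, dH=0.01):
    """Synthetic magnitude list whose cumulative distribution follows the
    model (deterministic rounding of the expected counts in each dH bin)."""
    grid = np.arange(H1, H2 + dH, dH)
    counts = np.diff(np.round(model_cumulative(grid, N1, H1, H_break, g1, g2)))
    return np.repeat(grid[1:], np.clip(counts.astype(int), 0, None))

def chi2_of_model(H_synth, H_obs_faint, p_of_H, sigma=0.1):
    """Bias the synthetic population with the detection probability and
    compare it, member by member, with the CSS-detected magnitudes beyond
    the branching point (the chi-square target function of the text)."""
    p = np.array([p_of_H(h) for h in H_synth])
    biased = np.sort(H_synth[rng.random(H_synth.size) <= p])
    obs = np.sort(np.asarray(H_obs_faint, dtype=float))
    n = min(biased.size, obs.size)  # guard against too few synthetic detections
    return np.sum(((biased[:n] - obs[:n]) / sigma) ** 2)

# Example call echoing the Datura M2 solution quoted above (N1 = 9 members
# down to the branching point H1 = 18.09); the observed-magnitude array and
# the detection-probability callable are placeholders:
# chi2 = chi2_of_model(synthetic_population(9, 18.09, 21.0, 19.13, 0.75, 0.31),
#                      H_obs_beyond_branch, p_of_H=datura_detection_probability)
```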
This problem is remedied by setting a break-point at which the distribution becomes shallower; this behavior is readily provided by the M2 model. The upper abscissa on both panels of Fig. <ref> is an estimate of Datura member size using the geometric albedo value p_V=0.24. The break-point magnitude H_ break solution within the M2 model maps onto a 0.3-0.5 km range of sizes.Figure <ref> provides more detailed information on the M2 model parameter solution. In spite of weak correlations, the solution seems to be well-behaved. Interestingly, the slope exponents satisfy γ_1 > 0.6 and γ_2 < 0.6. The magnitude slope γ translates to an exponent α=-5γ of a cumulative size distribution (assuming constant albedo on a given interval of H-values). Therefore the threshold value 0.6 maps onto a critical size exponent -3: for shallower distributions the mass is dominated by the largest members, while for steeper distributions the mass is dominated by the smallest fragments. In our M2 solution for Datura members, the mass is dominated by the sizes at the breakpoint, while in the M1 solution the fragment mass cannot be well constrained because it is dominated by the smallest members. Here we use the M2 solution and estimate the total mass m_ frag contained in Datura family memberswith absolute magnitudes between 16 and 20 from our complete model (i.e., excluding(1270) Datura itself). We also normalize m_ frag by the mass m_ LF of (1270) Datura. The statistical distribution of this ratio, as mapped from the 99% confidence level parametric region shown in Fig. <ref>, is shown in Fig. <ref>. We find m_ frag/m_ LF = 0.033^+0.005_-0.002. Unless the cumulative number of Datura members becomes significantly steeper somewhere beyond the magnitude limit 20, which is certainly possible <cit.>, we estimate that their collective mass only represents ≃ 3.3% of the (1270) Datura mass. From this analysis, we suggest the family may have been formed from a large cratering event. Adelaide.– The extreme nature of the magnitude distribution in the Adelaide family (Fig. <ref>) makes us choose j=2, therefore we associate the point to the first member next to (525) Adelaide with H_j^ o=18.18. With that choice we have N_ CSS'=62. In this case, we test M1, M2 and M3 models.We find that the single power-law model M1 is incompatible with the family data. The formally best-fit slope γ≃ 1.86 tries to compromise between the extremely steep part of the magnitude distribution between 18.18 and ≃ 18.75 and much shallower distribution beyond. However, none of the features is matched well and the formal χ^2_ min≃ 235 has to be statistically rejected. The basic inconsistency of such a model stems from the behavior of the detection probability p(H) shown in the bottom panel of Fig. <ref>. In simple words, p(H) is quite smooth and gradual even beyond ≃ 19 magnitude and does not resemble the sharp lack of detected fragments at ≃ 18.7 magnitude. In the Adelaide family case, we need some slope change even in the complete population, and this is provided by models M2 and M3.In the former case, we find H_ break=18.78^+0.14_-0.26, γ_1=2.08^+0.92_-0.19, and γ_2=0.47^+0.30_-0.24 (99% confidence level). The best-fit solution has χ^2_ min =12.18. The model reflects a slope change from steep to shallownear 18.75. The minimum of χ^2reached in the M2 model is fully acceptable,yet the left panel on Fig. <ref> indicates the solution may still be improved (obviously at the expense of more parameters). This is provided by the M3 model (the right panel onFig. 
<ref>), which has H_ break, 1=18.57^+0.29_-0.26, H_ break, 2= 19.04^+1.00_-0.14, γ_1=2.41^+0.67_-0.48, γ_2=1.00^+0.77_-0.59, and γ_3= 0.34^+0.44_-0.28 (99% confidence level) and has χ^2_ min=3.77. Figure <ref> shows 2D projections of the M2 model parameters, resembling those for Datura family in Fig. <ref>, except for γ_1 value significantly steeper. The M3 model parameters are more correlated within each other, as many combinations for positions of the two breakpoints H_ break, 1 and H_ break, 2 and the intermediate slopes γ_1 and γ_2 , are possible. Obviously, the solution of the faintest-slope γ_3 is consistently shallow, even shallower than γ_2 in model M2 (see Fig. <ref>). There is a robust, common result following from the M2 and M3 models: (i) the initial slope parameter in the 18.2-18.6 absolute magnitude range must be very steep (i.e., 2-3, and (ii) the final slope beyond absolute magnitude 19 must be rather shallow (i.e., 0.1-0.7). Given the shallow magnitude distribution at the limit of very small Adelaide members (for most part <0.6), the bias-corrected fragment population mass is dominated by H≃ 19 Adelaide members. We can thus repeat the computation performed for the Datura family, and compute the ratio m_ frag/m_ LF of the Adelaide members with H>18 (m_ frag) and the mass of the largest asteroid (525) Adelaide itself (m_ LF). Obviously, we carry out this computation for the bias-corrected populations of the M2 and M3 models, rather than the observed population of the Adelaide family members. The results, shown in Fig. <ref>, provide tight constraints on the complete population of the Adelaide members in the 18-20 magnitude range: m_ frag/m_ LF = 0.0088^+0.0013_-0.0009 for the M2 model, and m_ frag/m_ LF = 0.0084^+0.0003_-0.0005 for the M3 model. If these estimates hold also for the population at the family origin (see Sec. <ref> for an alternative option), the Adelaide family is an exemplary case of a large cratering event. We estimated the size of the expected crater on (525) Adelaide in Sec. <ref>.Rampo.– In this case, we use j=4, corresponding to a H_j^ o=18.09 magnitude branching point (Fig. <ref>). Using that choice we have N_ CSS'=22, slightly less data than for the Datura and Adelaide families. We tested the M1 and M2 models in this situation. The best-fit with a single power-law M1 model only reaches χ^2_ min=31.4 (with the median slope parameter γ≃ 1.44). Given N_ CSS'=22 data points, this solution is statistically unacceptable <cit.>. Figure <ref> illustrates the problem in a graphical way, namely the predicted population of fragments beyond magnitude 19 (blue dashed line on the left panel) becomes steep and incompatible with the single Rampo fragment detected by CSS. Things improve if the magnitude of the power-law model M1 is shifted to H_j^ o=18.0 (still within the assumed 0.1 magnitude uncertainty). This helps to straighten the sequence ofobserved members immediately after H_j^ o, where the detection probability is still close to 1. With that change, the single power-law model M1 provides best match with χ^2_ min=17.60 (and, obviously, smaller slope γ≃ 1.17). While not impressive, the solution is formally acceptable, but it suffers from the same problem in matching the faint end of the observed Rampo population using CSS.The broken power-law model M2 performs much better in this circumstance. It reaches χ^2_ min=1.52, and the parameter solution H_ break=18.47^+0.37_-0.28, γ_1=1.72^+1.28_-0.40, and γ_2=0.51^+0.39_-0.49 (99% confidence level; see Fig. <ref>). 
The overall median slope ≃ 1.17-1.44 is thus traded for a steeper leg initially, followed with a shallower part beyond H_ break. The small χ^2_ min conforms to the visually perfect match shown on the right panel of Fig. <ref>.Figure <ref> shows the model predicted mass in the Rampo members between magnitudes 18 and ≃ 19.5, namely m_ frag/m_ LF = 0.16^+0.08_-0.02. However, since the γ_2 slope beyond H_ break tends to be steep (with value larger than 0.6 not excluded), the real fragment mass with respect to (10321) Rampo may be even larger. In any case, out of the three families analyzed so far, the Rampo family represents the most energetic collisional event. Hobson.– The record of observed Hobson members, both the total count and the subset detected by CSS, is comparable to the Rampo family. However, because of Hobson's larger heliocentric distance, and its larger eccentricity, the predicted detection probability by CSS is shifted by nearly a magnitude towards small H values (see Figs. <ref> and <ref>). This allows us to conduct the bias-correction on a shifted segment of Hobson member magnitudes/sizes if compared to Rampo, which explains the differences in results.In this case, we use j=3, corresponding to the H_j^ o=17.10 magnitude branching point (Fig. <ref>). Using that choice, we have N_ CSS'=31. We tested the M1 and M2 models in this situation.Given the aforementioned difference in detection probabilities for the Rampo and Hobson families, the M1 model is currently sufficient to match the Hobson population between ≃ 17 and ≃ 19 magnitudes (Fig. <ref>). The best-fit simulation reaches χ^2_ min=6.31, while the simulations using the M2 model were able to improve this value to χ^2_ min=5.55. This is not enough ofa statistically significant difference to justify the necessity of a broken power-law model for the Hobson population of members; the simple power-law model performs just as well. The slope parameter is γ = 0.81^+0.03_-0.02 (99% confidence level). Because this value is larger than 0.6, we cannot estimate the mass contained in the fragment population, (as the smallest asteroids still dominate the mass). We can only set a lower limit from the population available to us, and this gives m_ frag/m_ LF≥ 0.6. In this case, m_ LF contains the mass of the two largest asteroids, (18777) Hobson and (57738) 2001 UZ160. Clearly, the Hobson family results from the catastrophic disruption of a parent body. § DISCUSSION AND CONCLUSIONS Our work provides evidence for a break in the magnitude distribution in several of the very young families analyzed here. Before consideringimplications, however, we first must attempt to further justify the result and understand its meaning. We can think of atleast two conventional reasons for what we see. Missing halo of small members?– The first possibility is that we were unableto identify small family members beyond H_ break. Their deficit, quantitatively as shown by the shallow slope at faint magnitudes, may not be real, but instead represents a failure in our the clustering association. Perhaps, many of these small fragments were ejected with larger velocities and drifted farther from the core of the family. This scenario is a plausible situation for larger and older families in the main belt, which are identified in 3D proper element space by their large spatial densities of asteroids compared to the background population<cit.>. 
For the very young families, however, clusters in the 5D space of osculating orbital elements, with additional tracers such as the correlated values of the secular angles Ω and ϖ (Sec. <ref>), help to minimize the problem of missing members (if identified in our catalogs). The nominal family-identification method, described in the Appendix, uses a very conservative search zone (followed by a check of the past convergence of the orbits). In order to demonstrate the margin we allow, we present a more in-depth test in the case of the Adelaide family here. We use four nested boxes around the asteroid (525) Adelaide in osculating orbital elements (data from the MPC catalog as of May 15, 2023), with the following parameters:

* Box 1 defined by the following differences in semimajor axis a, eccentricity e, inclination I, longitude of node Ω, and argument of perihelion ω: (δa, δe, δI, δΩ, δω) = (±0.01, ±0.01, ±0.1^∘, ±10^∘, ±10^∘);
* Box 2 defined by the following differences in semimajor axis a, eccentricity e, inclination I, longitude of node Ω, and argument of perihelion ω: (δa, δe, δI, δΩ, δω) = (±0.02, ±0.02, ±0.15^∘, ±20^∘, ±20^∘);
* Box 3 defined by the following differences in semimajor axis a, eccentricity e, inclination I, longitude of node Ω, and argument of perihelion ω: (δa, δe, δI, δΩ, δω) = (±0.03, ±0.03, ±0.2^∘, ±30^∘, ±30^∘);
* Box 4 defined by the following differences in semimajor axis a, eccentricity e, inclination I, longitude of node Ω, and argument of perihelion ω: (δa, δe, δI, δΩ, δω) = (±0.035, ±0.035, ±0.25^∘, ±35^∘, ±35^∘).

Our nominal procedure described in the Appendix uses Box 3, where we identified all 79 Adelaide family members listed in Table <ref>. The asteroid populations found in the individual boxes are as follows: (i) Box 1 contains 74 asteroids, all Adelaide family members and no background objects, (ii) Box 2 contains 84 asteroids, all 79 Adelaide family members and 5 background objects, (iii) Box 3 contains 105 asteroids, all 79 Adelaide family members and 26 background objects, and (iv) Box 4 contains 135 asteroids, all 79 Adelaide family members and 56 background objects. The identified members of the Adelaide family reside in the two innermost boxes (for the most part already in Box 1). The background population of asteroids slowly ramps up from Box 2 onward. [Taken very naively, namely just multiplying the dimensions of the box in all searched orbital elements, the "volume" of Box 4 is ≃2.3 times larger than that of Box 3. The number of background asteroids increased by a factor of 56/26 ≃ 2.15. This may indicate a roughly uniform, but very sparse, background population at the location of the Adelaide family.] These statistics make us believe that we are not missing any distant (and small) Adelaide members. A similar situation applies to the other families as well.

Collisional comminution of family members beyond H_break?– The bias-corrected population of the family members, as it follows from our analysis, tells us about the current population several hundreds of thousands of years after the origin of the clusters.
This population may have experienced some degree of collisional evolution over that interval, enough to disrupt some family members. As a result, we must verify whether the transition to a shallower magnitude distribution beyond H_break in the case of the Datura, Adelaide, and Rampo families is not simply produced by collisional comminution. We note that the size distribution of the main belt becomes shallow below 1 km in diameter, and its equivalent steepness at ≃500 m may be as small as γ≃0.3 <cit.>. Any population submerged into this vast population of projectiles, such as a volume-limited new family, tends to equilibrate with the background (assuming the disruption laws are the same for the background and family objects). The crucial issue with young asteroid families is the timescale of this process: has enough time passed since the origin of the family to reach equilibrium for members that are hundreds of meters in size?

In order to explore this issue, we performed the following numerical experiment. We used the well-tested Monte Carlo code Boulder <cit.> to track the collisional evolution of multiple small-body populations. Here we simulated both the internal impact/cratering/disruption processes within each of the populations and also the mutual collisional interaction of the populations (i.e., objects in one population may serve as impactors for the other and vice versa). The code version we adopted models the size-frequency distribution for each of the populations but does not include the orbital dynamics of the population members. We used two populations: (i) the background population of main belt asteroids taken from <cit.>, and (ii) the young family population. We were only interested in a brief interval of time lasting ≤1 Myr (i.e., equal to the estimated age of the corresponding family). The origin of the simulation was the formation epoch of the family. The main belt population is effectively in equilibrium for the relevant sizes of about ten meters and larger, but the family population is expected to evolve with time; proving or disproving changes of the family size distribution at hundred-meter and larger sizes was the goal of our simple test. The initial size distribution of the family was equal to the best-fitting, bias-corrected solution from Sec. <ref> with the following modification: we disregarded the breakpoint at H_break in the M2 (and higher) class of solutions and continued the distribution with the power exponent γ_1 from the first magnitude interval (H_1, H_break). We adopted a geometric albedo of 0.24 to convert the absolute magnitudes of Sec. <ref> to sizes. Finally, we needed to specify the parameters of the collisional interaction – the intrinsic collisional probability p_i and the mean relative velocity v̅ at impact – within each of the populations and across them. This was done as follows. The intrinsic values of p_i and v̅ of the main belt population have been evaluated in many previous studies, and there is some small variation among them (related mostly to the smallest-size bodies used for their determination). We used p_i=2.9×10^-18 km^-2 yr^-1 and v̅=5.3 km s^-1 <cit.>. For simplicity, the same values were taken for main belt projectiles impacting the young asteroid family population. The latter was deemed negligible at the relevant sizes of ten meters and larger (see Fig. <ref>), which allowed us to neglect family members as a meaningful population of impactors for main belt asteroids.
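As an aside, the conversion from the bias-corrected magnitude distribution to such an initial family size distribution can be sketched in a few lines of Python. This is only an illustrative calculation, assuming a single power-law slope γ_1 extrapolated without the break and the standard relation D = 1329 km p_V^(-1/2) 10^(-H/5); the numerical values of γ_1 and the anchor magnitude are placeholders inspired by the solutions quoted in the text, not the actual Boulder input routine.

import numpy as np

def h_to_diameter_km(H, albedo=0.24):
    # Standard conversion of absolute magnitude H to diameter in km.
    return 1329.0 / np.sqrt(albedo) * 10.0 ** (-np.asarray(H) / 5.0)

def cumulative_number(H, H1=18.2, N1=1.0, gamma1=2.4):
    # Cumulative magnitude distribution N(<H) = N1 * 10**(gamma1*(H - H1)),
    # deliberately extrapolated beyond H_break with the initial slope gamma1.
    return N1 * 10.0 ** (gamma1 * (np.asarray(H) - H1))

H_grid = np.linspace(18.2, 20.0, 10)
D_m = 1000.0 * h_to_diameter_km(H_grid)
N_cum = cumulative_number(H_grid)
for H, D, N in zip(H_grid, D_m, N_cum):
    print(f"H = {H:5.2f}   D = {D:5.0f} m   N(<H) = {N:9.1f}")

Tabulating the population in this way makes the dominance of the smallest retained sizes explicit before the distribution is handed to the collisional-evolution run.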
The tricky part of the calculation was to determine the intrinsic collisional parameters for the family population. This is because p_i and v̅ depend on the orbital architecture of the family population, which experienced strong evolution immediately after the family formation event. The initially extremely compact cloud of fragments should first disperse in orbital mean anomaly (over a characteristic timescale of a few thousand years), and subsequently continue to disperse in longitude of node and perihelion (reaching about a 20^∘ interval for the ≃500 kyr old Datura family, e.g., Fig. <ref>). This highly dynamical situation implies that the intrinsic family values of p_i and v̅ are also strongly time dependent. Importantly, because of the initial orbital similarity, the collision probabilities may also be very high. Since the assumptions of the most commonly used scheme to evaluate p_i and v̅, notably the Öpik-Wetherill approach, are not satisfied <cit.>, we used a more direct approach based on a numerical orbital integration of a finite sample of n bodies in the population <cit.>. Monitoring the orbits over a time interval Δ T, we recorded all mutual close encounters at a small-enough distance R (in our simulations we used R up to 0.002 au). The available number of pair combinations is n_pair = n(n-1)/2. If N such encounters are found, we have the estimate

p_i ≃ N / (n_pair R^2 Δ T).

Ideally, one should evaluate the whole ensemble of encounters by varying the threshold distance R and verify that N(R) ∝ R^2, such that p_i converges to a constant value. We verified that this behavior is satisfied in our experiment. More importantly, as the orbits in the family undergo their dynamical evolution, we find that p_i changes as a function of time.

We considered the Datura family as an exemplary case for our method. In order to track the characteristic orbital evolution of Datura members, we created a synthetic Datura family consisting of its 57 largest members (Table <ref>). The initial configuration was created by propagating Datura's orbit backward in time until the argument of perihelion was ω≃0^∘ and the true anomaly f≃150^∘. We assumed an isotropic and size-dependent velocity ejection field

V(D) = 1 m s^-1 (D / 2 km)^-0.5,

which allows us to create a configuration that, in the (a,e) and (a,i) planes, resembles the distribution of Datura members <cit.> (for instance, a semimajor axis spread of ±0.001 au). We used the symplectic integrator rmvs3, part of the well-tested swift package, [<http://www.boulder.swri.edu/ hal/swift.html>] and included perturbations from the eight planets and the two most massive main-belt bodies, Ceres and Vesta. We also randomly assigned thermal accelerations (i.e., the Yarkovsky effect) to the family members in the transverse direction. The smallest members in our simulation were thus given semimajor axis drift rates up to da/dt ≃ ±0.0006 au Myr^-1. We determined the mutual distances of all simulated particles at every timestep of 3.6525 days, seeking very close encounters for the determination of p_i from Eq. (<ref>) (the encounter configurations were identified on-line by seeking minima of the memory-sorted mutual distances), and propagated the synthetic family for a timespan of 500 kyr corresponding to the Datura age <cit.>. We evaluated a "cumulative" p̅_i value by taking Δ T in Eq. (<ref>) to be the current epoch in the integrated system and counting N from all encounters up to that moment. The results are shown in Fig. <ref>.
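To make the bookkeeping behind Eq. (<ref>) concrete, the following Python fragment sketches the estimate of p_i from a list of recorded minimum encounter distances, together with the N(R) ∝ R^2 convergence check mentioned above. The encounter sample here is synthetic and the function name is ours for illustration only; the actual determination was performed on-line within the swift integration.

import numpy as np

AU_KM = 1.495978707e8   # astronomical unit in km

def intrinsic_probability(encounter_distances_au, n_bodies, delta_t_yr, R_au):
    # p_i ~ N / (n_pair * R^2 * dT), with N the number of encounters closer than R
    # and n_pair = n(n-1)/2 the number of available pair combinations.
    n_pair = n_bodies * (n_bodies - 1) / 2
    N = np.count_nonzero(np.asarray(encounter_distances_au) <= R_au)
    return N / (n_pair * (R_au * AU_KM) ** 2 * delta_t_yr)

# Synthetic encounter sample obeying N(R) proportional to R^2,
# so the estimated p_i should be roughly independent of the chosen threshold R.
rng = np.random.default_rng(1)
distances = 0.002 * np.sqrt(rng.random(500))
for R in (0.0005, 0.001, 0.002):
    p = intrinsic_probability(distances, n_bodies=57, delta_t_yr=5.0e5, R_au=R)
    print(f"R = {R:.4f} au  ->  p_i = {p:.2e} km^-2 yr^-1")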
We find that p̅_i peaks at ≃10 yr, corresponding to about three revolutions about the Sun. At that time the fragment configuration is still orbitally compact, but encounters begin to decrease as the orbital angles start to spread. The peak value p̅_i ≃ 10^-12 km^-2 yr^-1 is six orders of magnitude larger than the mean value over the main belt population. This value obviously decreases rapidly in time, but at 500 kyr, which is the current epoch for the Datura family, it still attains p̅_i ≃ 1.38×10^-15 km^-2 yr^-1, namely three orders of magnitude larger than the mean value for the main belt. At face value, using this value alone, one would think that post-family-formation collisions cannot and should not be neglected. However, the mean encounter velocities over the age of the Datura family are very small; we find v̅ ≃ 36 m s^-1, with a full range of 0.3 to 500 m s^-1. As much as these values are impressive, [We found it interesting to present some details of this numerical experiment, since we are not aware of similar work previously published. It might be used as a template for studies of other very young families in the future.] one may ask whether internal or external (main belt) impactors would be more important for the Datura-family collisional evolution. The intrinsic collisional probability of Datura members among each other is about three orders of magnitude larger than the probability of being hit by background main belt projectiles. However, the main belt impactor population is about four orders of magnitude more numerous (Fig. <ref>). Therefore, we expect main belt projectiles to dominate the collisional evolution in the family.

Finally, it is useful to mention that catastrophic breakups are characterized by the critical impact specific energy Q^⋆_D, namely the energy per unit target mass delivered by the projectile required for catastrophic disruption of the target (i.e., such that one-half the mass of the target body escapes). Many studies have dealt with this important quantity <cit.>, but here we assume a simple relation (density ρ also in cgs units)

Q^⋆_D = 9.0×10^7 erg g^-1 (D / 2 cm)^-0.53 + 0.5 erg cm^-3 ρ (D / 2 cm)^1.36,

whose constants have been adjusted to provide a global stationary solution for the main belt asteroids <cit.>. The critical size of a projectile able to catastrophically disrupt a target of size D scales as ∝ (Q^⋆_D/v̅^2)^1/3 D, and this minimizes the role of internal collisions in young families because the impact velocities v̅ are low. We ran the Boulder code for a 500 kyr timespan and obtained the results shown in Fig. <ref>. No change in the family size distribution for D>50 m was recorded, likely because the evolution timespan was too short. In families, the change in the size-frequency distribution propagates from small to larger sizes, and in 500 kyr it only reaches the ≃10 cm range within the Datura family. Similar results were obtained for the Adelaide and Rampo families too. Summing up the previous simulations, we conclude that collisional evolution over the timescales corresponding to the ages of the very young families is not capable of producing a transition to a shallower segment of the family size distribution at about 300-400 m. If true, the bias-corrected family population from the current-date observations corresponds also to the population of members created at the family origin.

Further results and future outlooks.– The estimated parameters of the magnitude distributions obtained above may serve for additional consistency checks.
For instance, assuming the bias-corrected populations are representative of those generated right after the disruption of the parent body of the family at the observed sizes, we may use the estimated mass in small members to determine further quantities. In the case of the Adelaide family we found m_frag/m_LF ≃ 0.0085. With that number, we may estimate the (minimum) size of the crater that has been formed on (525) Adelaide. If we take the crater depth to be ≃1/10-1/5 of its radius <cit.>, and D_525 ≃ 9.4 km for the size of (525) Adelaide, a simple calculation shows that a crater with D_crat ≃ 3.6-3.9 km would correspond to about the same volume fraction of (525) Adelaide. This is still a reasonable number. Additionally, assuming a crater-to-projectile size ratio of ≃10-20 <cit.>, we may estimate the projectile size to be about d_proj ≃ 180-390 m. The number of 10 km size asteroids in the inner main belt is N_10 ≃ 300 <cit.>, and the number of 180-390 m objects in the main belt is N_proj ≃ (1-5)×10^7 <cit.>. Considering the mean intrinsic collision probability in the main belt, p_i ≃ 2.8×10^-18 km^-2 yr^-1, we may estimate the frequency of 180-390 m projectiles impacting a 10 km inner main belt target as f ≃ p_i R^2 N_proj N_10 ≃ (0.2-1)×10^-6 yr^-1 (with R the radius of the target). This results in a characteristic timescale of ≃1-5 Myr, which is comparable to the estimated age of the Adelaide family <cit.>. While highly simplified, this reasoning points to a rough consistency between the Adelaide family origin and the produced fragment population.

Very young asteroid families will certainly attract the interest of planetary scientists in the forthcoming decade. While theoretical studies will continue, perhaps even more important input is expected on the observational side. The planned powerful surveys, such as that of the Vera C. Rubin Observatory <cit.>, promise to increase the known inventory of these clusters by an order of magnitude, pushing the completeness near absolute magnitude 20 (at least for clusters in the inner main belt). Unlike the case of large and old asteroid families, the identification of the very young families may still be a straightforward task (profiting from the 5D arena of the osculating orbital elements and the possibility to recognize interlopers using backward orbital propagation). The magnitude distribution of a complete population of members may then be determined much more reliably, including the critical interval of H between magnitudes 19 and 20.

We thank the referee for very useful suggestions on the submitted version of the paper. We are grateful to the Catalina Sky Survey staff, Eric Christensen and Franck Shelly in particular, for providing us the CSS observations between 2013 and 2022 in a user-friendly format and allowing us to use them for this research. This work was supported by the Czech Science Foundation (grant 21-11058S).

[Asphaug et al.(2015)Asphaug, Collins, & Jutzi]aetal2015 Asphaug, E., Collins, G., & Jutzi, M. 2015, in Asteroids IV, 661–677[Bottke et al.(2015a)Bottke, Brož, O'Brien, Campo Bagatin, Morbidelli, & Marchi]betal2015 Bottke, W. F., Brož, M., O'Brien, D. P., et al. 2015a, in Asteroids IV, ed. P. Michel, F. E. DeMeo, & W. F. Bottke, 701–724[Bottke et al.(2020)Bottke, Vokrouhlický, Ballouz, Barnouin, Connolly, Elder, Marchi, McCoy, Michel, Nolan, Rizk, Scheeres, Schwartz, Walsh, & Lauretta]bottke2020 Bottke, W. F., Vokrouhlický, D., Ballouz, R. L., et al.
2020, , 160, 14[Bottke et al.(2015b)Bottke, Vokrouhlický, Walsh, Delbo, Michel, Lauretta, Campins, Connolly, Scheeres, & Chelsey]bot2015 Bottke, W. F., Vokrouhlický, D., Walsh, K. J., et al. 2015b, , 247, 191[Carruba et al.(2020)Carruba, Ramos, & Spoto]car2020 Carruba, V., Ramos, L. G. M., & Spoto, F. 2020, , 493, 2556[Christensen et al.(2019)Christensen, Africano, Farneth, Fuls, Gibbs, Grauer, Groeller, Kowalski, Larson, Leonard, Prune, Rankin, Seaman, & Shelly]CSSEPSC2019 Christensen, E., Africano, B., Farneth, G., et al. 2019, in EPSC-DPS Joint Meeting 2019, Vol. 2019, EPSC–DPS2019–1912[Dahlgren(1998)]d1998 Dahlgren, M. 1998, , 336, 1056[Durda et al.(2007)Durda, Bottke, Nesvorný, Enke, Merline, Asphaug, & Richardson]d2007 Durda, D. D., Bottke, W. F., Nesvorný, D., et al. 2007, , 186, 498[Greenberg(1982)]g1982 Greenberg, R. 1982, , 87, 184[Hirayama(1918)]hira1918 Hirayama, K. 1918, , 31, 185[Jacobson & Scheeres(2011)]js2011 Jacobson, S. A. & Scheeres, D. J. 2011, , 214, 161[Jutzi et al.(2015)Jutzi, Holsapple, Wünneman, & Michel]juetal2015 Jutzi, M., Holsapple, K., Wünneman, K., & Michel, P. 2015, in Asteroids IV, 679–699[Marzari et al.(1996)Marzari, Scholl, & Farinella]metal1996 Marzari, F., Scholl, H., & Farinella, P. 1996, , 119, 192[Masiero et al.(2015)Masiero, DeMeo, Kasuga, & Parker]maetal2015 Masiero, J. R., DeMeo, F. E., Kasuga, T., & Parker, A. H. 2015, in Asteroids IV, 323–340[Masiero et al.(2011)Masiero, Mainzer, Grav, Bauer, Cutri, Dailey, Eisenhardt, McMillan, Spahr, Skrutskie, Tholen, Walker, Wright, DeBaun, Elsbury, Gautier, Gomillion, & Wilkins]masiero2011 Masiero, J. R., Mainzer, A. K., Grav, T., et al. 2011, , 741, 68[Melosh(1989)]m1989 Melosh, H. J. 1989, Impact cratering: A geologic process (Oxford University Press, Oxford)[Michel et al.(2015)Michel, Richardson, Durda, Jutzi, & Asphaug]mietal2015 Michel, P., Richardson, D. C., Durda, D. D., Jutzi, M., & Asphaug, E. 2015, in Asteroids IV, 341–354[Migliorini et al.(1995)Migliorini, Zappalà, Vio, & Cellino]mi1995 Migliorini, F., Zappalà, V., Vio, R., & Cellino, A. 1995, , 118, 271[Morbidelli et al.(2009)Morbidelli, Bottke, Nesvorný, & Levison]m2009 Morbidelli, A., Bottke, W. F., Nesvorný, D., & Levison, H. F. 2009, , 204, 558[Mothé-Diniz & Nesvorný(2008)]mdn2008 Mothé-Diniz, T. & Nesvorný, D. 2008, , 486, L9[Nesvorný et al.(2015)Nesvorný, Brož, & Carruba]netal2015 Nesvorný, D., Brož, M., & Carruba, V. 2015, in Asteroids IV, 297–321[Nesvorný & Vokrouhlický(2006)]nv2006 Nesvorný, D. & Vokrouhlický, D. 2006, , 132, 1950[Nesvorný et al.(2006)Nesvorný, Vokrouhlický, & Bottke]daturaSci2006 Nesvorný, D., Vokrouhlický, D., & Bottke, W. F. 2006, Science, 312, 1490[Nesvorný et al.(2023)Nesvorný, Vokrouhlický, Shelly, Deienno, Bottke, Christensen, Jedicke, Naidu, Chesley, Chodas, Farnocchia, & Granvik]nes2023 Nesvorný, D., Vokrouhlický, D., Shelly, F., et al. 2023, , submitted[Novaković & Radović(2019)]ade2019 Novaković, B. & Radović, V. 2019, Research Notes of the American Astronomical Society, 3, 105[Novaković et al.(2022)Novaković, Vokrouhlický, Spoto, & Nesvorný]bojanrev2022 Novaković, B., Vokrouhlický, D., Spoto, F., & Nesvorný, D. 2022, Celestial Mechanics and Dynamical Astronomy, 134, 34[Öpik(1951)]o1951 Öpik, E. J. 1951, Proc. R. Irish Acad. Sect. A, 54, 165[Polishook et al.(2014)Polishook, Moskovitz, Binzel, DeMeo, Vokrouhlický, Žižka, & Oszkiewicz]poli2014 Polishook, D., Moskovitz, N., Binzel, R. P., et al. 
2014, , 233, 9[Pravec et al.(2018)Pravec, Fatka, Vokrouhlický, Scheeres, Kušnirák, Hornoch, Galád, Vraštil, Pray, Krugly, Gaftonyuk, Inasaridze, Ayvazian, Kvaratskhelia, Zhuzhunadze, Husárik, Cooney, Gross, Terrell, Világi, Kornoš, Gajdoš, Burkhonov, Ehgamberdiev, Donchev, Borisov, Bonev, Rumyantsev, & Molotov]petal2018 Pravec, P., Fatka, P., Vokrouhlický, D., et al. 2018, , 304, 110[Pravec et al.(2019)Pravec, Fatka, Vokrouhlický, Scheirich, Ďurech, Scheeres, Kušnirák, Hornoch, Galád, Pray, Krugly, Burkhonov, Ehgamberdiev, Pollock, Moskovitz, Thirouin, Ortiz, Morales, Husárik, Inasaridze, Oey, Polishook, Hanuš, Kučáková, Vraštil, Világi, Gajdoš, Kornoš, Vereš, Gaftonyuk, Hromakina, Sergeyev, Slyusarev, Ayvazian, Cooney, Gross, Terrell, Colas, Vachier, Slivan, Skiff, Marchis, Ergashev, Kim, Aznar, Serra-Ricart, Behrend, Roy, Manzini, & Molotov]petal2019 Pravec, P., Fatka, P., Vokrouhlický, D., et al. 2019, , 333, 429[Pravec & Vokrouhlický(2009)]pv2009 Pravec, P. & Vokrouhlický, D. 2009, , 204, 580[Pravec et al.(2010)Pravec, Vokrouhlický, Polishook, Scheeres, Harris, Galád, Vaduvescu, Pozo, Barr, Longa, Vachier, Colas, Pray, Pollock, Reichart, Ivarsen, Haislip, Lacluyze, Kušnirák, Henych, Marchis, Macomber, Jacobson, Krugly, Sergeev, & Leroy]pra2010 Pravec, P., Vokrouhlický, D., Polishook, D., et al. 2010, , 466, 1085[Press et al.(2007)Press, Teukolsky, Vetterling, & Flannery]nr2007 Press, W. H., Teukolsky, S. A., Vetterling, W. T., & Flannery, B. P. 2007, Numerical Recipes: The Art of Scientific Computing (Cambridge University Press, Cambridge)[Rosaev & Plávalová(2016)]rp2016 Rosaev, A. & Plávalová, E. 2016, arXiv e-prints, arXiv:1612.04951[Rosaev & Plávalová(2017)]rp2017 Rosaev, A. & Plávalová, E. 2017, , 140, 21[Rosaev & Plávalová(2018)]rp2018 Rosaev, A. & Plávalová, E. 2018, , 304, 135[Schwamb et al.(2023)Schwamb, Jones, Yoachim, Volk, Dorsey, Opitom, Greenstreet, Lister, Snodgrass, Bolin, Inno, Bannister, Eggl, Solontoi, Kelley, Jurić, Lin, Ragozzine, Bernardinelli, Chesley, Daylan, Ďurech, Fraser, Granvik, Knight, Lisse, Malhotra, Oldroyd, Thirouin, & Ye]schwa2023 Schwamb, M. E., Jones, R. L., Yoachim, P., et al. 2023, , 266, 22[Spoto et al.(2015)Spoto, Milani, & Knežević]spot2015 Spoto, F., Milani, A., & Knežević, Z. 2015, , 257, 275[Tricarico(2017)]tric2017 Tricarico, P. 2017, , 284, 416[Vernazza et al.(2009)Vernazza, Binzel, Rossi, Fulchignoni, & Birlan]ver2009 Vernazza, P., Binzel, R. P., Rossi, A., Fulchignoni, M., & Birlan, M. 2009, , 458, 993[Vernazza et al.(2018)Vernazza, Brož, Drouard, Hanuš, Viikinkoski, Marsset, Jorda, Fetick, Carry, Marchis, Birlan, Fusco, Santana-Ros, Podlewska-Gaca, Jehin, Ferrais, Bartczak, Dudziński, Berthier, Castillo-Rogez, Cipriani, Colas, Dumas, Ďurech, Kaasalainen, Kryszczynska, Lamy, Le Coroller, Marciniak, Michalowski, Michel, Pajuelo, Tanga, Vachier, Vigan, Warner, Witasse, Yang, Asphaug, Richardson, Ševeček, Gillon, & Benkhaldoun]ver2018 Vernazza, P., Brož, M., Drouard, A., et al. 2018, , 618, A154[Vokrouhlický et al.(2006)Vokrouhlický, Brož, Bottke, Nesvorný, & Morbidelli]yy2006 Vokrouhlický, D., Brož, M., Bottke, W. F., Nesvorný, D., & Morbidelli, A. 2006, , 182, 118[Vokrouhlický et al.(2021a)Vokrouhlický, Brož, Novaković, & Nesvorný]hobsonAA2021 Vokrouhlický, D., Brož, M., Novaković, B., & Nesvorný, D. 2021a, , 654, A75[Vokrouhlický & Nesvorný(2008)]vn2008 Vokrouhlický, D. & Nesvorný, D. 2008, , 136, 280[Vokrouhlický et al.(2008)Vokrouhlický, Nesvorný, & Bottke]vetal2008 Vokrouhlický, D., Nesvorný, D., & Bottke, W. F. 
2008, , 672, 696[Vokrouhlický et al.(2021b)Vokrouhlický, Novaković, & Nesvorný]adelaideAA2021 Vokrouhlický, D., Novaković, B., & Nesvorný, D. 2021b, , 649, A115[Vokrouhlický et al.(2017a)Vokrouhlický, Pravec, Ďurech, Bolin, Jedicke, Kušnirák, Galád, Hornoch, Kryszczyńska, Colas, Moskovitz, Thirouin, & Nesvorný]daturaAA2017 Vokrouhlický, D., Pravec, P., Ďurech, J., et al. 2017a, , 598, A91[Vokrouhlický et al.(2017b)Vokrouhlický, Pravec, Ďurech, Hornoch, Kušnirák, Galád, Vraštil, Kučáková, Pollock, Ortiz, Morales, Gaftonyuk, Pray, Krugly, Inasaridze, Ayvazian, Molotov, & Colazo]rheinAJ2017 Vokrouhlický, D., Pravec, P., Ďurech, J., et al. 2017b, , 153, 270[Vokrouhlický et al.(2009)Vokrouhlický, Ďurech, Michałowski, Krugly, Gaftonyuk, Kryszczyńska, Colas, Lecacheux, Molotov, Slyusarev, Polińska, Nesvorný, & Beshore]daturaAA2009 Vokrouhlický, D., Ďurech, J., Michałowski, T., et al. 2009, , 507, 495[Vokrouhlický et al.(2011)Vokrouhlický, Ďurech, Polishook, Krugly, Gaftonyuk, Burkhonov, Ehgamberdiev, Karimov, Molotov, Pravec, Hornoch, Kušnirák, Oey, Galád, & Žižka]rheinAJ2011 Vokrouhlický, D., Ďurech, J., Polishook, D., et al. 2011, , 142, 159[Wetherill(1967)]w1967 Wetherill, G. W. 1967, , 72, 2429

§ MEMBERS OF THE VERY YOUNG FAMILIES

Here we provide information about membership in the very young asteroid families studied in this paper. Our approach to obtaining these results is based on two criteria. First, we search for asteroids located in the vicinity of the largest member in the 5D space of osculating orbital elements (disregarding the longitude in orbit λ): semimajor axis a, eccentricity e, inclination I, longitude of node Ω, and argument of perihelion ω. Unlike in <cit.>, we do not use any specific metric function, but simply select all asteroids with orbits in a certain box. In particular, we let the orbital elements vary by the following limits: (i) semimajor axis by ±0.03 au, (ii) eccentricity by ±0.03, (iii) inclination by ±0.2^∘, and (iv) longitude of node and argument of perihelion both by ±30^∘. These values are larger than the short-period oscillations of these elements due to planetary perturbations, and conservative enough to sense the population down to the smallest currently detectable sizes (note that small members might have been ejected with larger velocities than the larger ones, which constitute the family core). Given the large increase in the number of discovered asteroids, there is a small but nonzero chance that such a simple selection method may associate background (unrelated) objects with the family even in the vast 5D space. For that reason, we perform a convergence check as the second step. We numerically integrate the orbits of all identified asteroids backward in time for 2 Myr. To keep things simple, we use only the nominal (best-fit) initial data at the MJD epoch 60,000.0 and include only gravitational perturbations from all planets (disregarding thermal accelerations). [We use the well-tested and publicly available integration package swift (<http://www.boulder.swri.edu/ hal/swift.html>) with a short timestep of 2 d. We output the asteroid heliocentric state vectors every 5 yr and monitor the convergence of the secular angles Ω and ϖ toward the reference values of the largest member in the family.] The planetary configuration at the initial epoch is obtained from the JPL ephemerides file DE 421. As a result, the purpose of this simulation is not to accurately determine the age of the family, which is in most cases known from previous studies, but to eliminate possible interloping objects.
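For illustration, the first selection step described above can be sketched as the simple box test below. This is a minimal sketch assuming catalog entries are available as dictionaries of osculating elements; the field names and helper functions are ours, not part of the actual identification code, and the reference orbit of (525) Adelaide as well as the candidate orbit are taken from the values listed in Table <ref>.

# Nominal search-box half-widths around the largest family member
# (semimajor axis in au, eccentricity, angles in degrees; the "Box 3" values).
BOX = {"a": 0.03, "e": 0.03, "inc": 0.2, "node": 30.0, "argper": 30.0}

def angle_diff_deg(x, y):
    # Signed difference of two angles in degrees, wrapped to (-180, 180].
    return (x - y + 180.0) % 360.0 - 180.0

def in_box(candidate, reference, box=BOX):
    # True if the candidate's osculating elements fall inside the box
    # centered on the reference (largest-member) orbit.
    return (abs(candidate["a"] - reference["a"]) <= box["a"]
            and abs(candidate["e"] - reference["e"]) <= box["e"]
            and abs(candidate["inc"] - reference["inc"]) <= box["inc"]
            and abs(angle_diff_deg(candidate["node"], reference["node"])) <= box["node"]
            and abs(angle_diff_deg(candidate["argper"], reference["argper"])) <= box["argper"])

# Example: (525) Adelaide as reference, (422494) 2014 SV342 as candidate.
adelaide = {"a": 2.2459455, "e": 0.1020388, "inc": 5.99835, "node": 203.35936, "argper": 263.96182}
candidate = {"a": 2.2457873, "e": 0.1036634, "inc": 6.01316, "node": 201.90211, "argper": 262.11221}
print(in_box(candidate, adelaide))   # True; the backward integration then screens out interlopers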
We found that the secular angles in the interloper cases show a rapid divergence from the largest body in the family and may be easily identified. Obviously, we eliminated these objects from our analysis of the size distribution of the family members. There was only a limited amount of such objects found. The most crowded situation occurred for Datura family, where we eliminated 80 such objects, i.e., little less than the family members (who are strongly clustered in the simple 5D box of orbital elements that we considered for family-member search). rlccccccc Datura family as of June 2023. Osculating heliocentric orbital elements at epochMJD 60,000.0 from the MPC catalog: semimajor axis a, eccentricity e, inclination I,longitude of node Ω, and argument of perihelion ω. singleopposition orbits arelisted at the end of the table. The third column gives the absolute magnitude H. The lastcolumn indicates, whether the asteroid has been detected by CSS during the phase 2operations (Y=yes). We note two very small, singleopposition asteroids 2016 PL51 and2022 RB57, very likely members of the Datura family too. However, their orbits, based onobservations spanning short arcs (less than a week in the case of 2016 PL51), are stillvery uncertain. We include (429988) 2013 PZ36 residing on a rather chaotic orbit (most likelyinteracting with the exterior E3/10 mean motion resonance with the Earth), such thatproving its membership to the Datura family would require an extensive work beyond the scopeof this paper (see also Fig. <ref>). Luckily, the results discussed in Sec. <ref> are notoverly sensitive to the decision about Datura membership of this body.2cAsteroid 0pt2ex H a e I Ω ω CSS (mag) (au) (deg) (deg) (deg) [1pt]continued. 2cAsteroid 0pt2ex H a e I Ω ω CSS (mag) (au) (deg) (deg) (deg) [1pt]0pt3ex1270 Datura 12.54 2.2344232 0.2080399 5.98629 197.77551 259.06896 Y 60151 1999 UZ6 16.35 2.2347805 0.2078266 5.99466 196.68391 260.77419 Y 89309 2001 VN3616.48 2.2356444 0.2064328 6.01966 192.81960 267.16899 Y 90265 2003 CL5 16.08 2.2347649 0.2074858 5.99650 195.55568 262.11246 Y203370 2001 WY3517.33 2.2352682 0.2074548 5.98927 196.71788 260.76103 Y215619 2003 SQ168 17.24 2.2343739 0.2080248 5.98761 197.36200 259.62134 Y338309 2002 VR1717.68 2.2345469 0.2077371 5.99159 196.68295 260.80932 Y429988 2013 PZ3617.95 2.2306873 0.2109170 5.82996 102.85349 248.77348 Y433382 2013 ST7118.09 2.2343697 0.2079372 5.98521 197.92635 259.05669 Y452713 2005 YP136 18.46 2.2367817 0.2052885 6.04942 186.12897 276.54525 Y485010 2009 VS116 18.23 2.2361205 0.2051146 6.06774 185.69555 277.53365 Y553350 2011 KT1018.15 2.2361705 0.2068784 6.02362 192.38721 267.95500 Y585600 2018 VR7918.54 2.2350642 0.2074864 5.98893 197.32214 259.95573 Y 2002 RH291 17.97 2.2349777 0.2076290 5.99691 195.64466 262.24539 Y2002 UU5819.97 2.2347195 0.2074394 5.99799 196.59631 261.33505 2003 UD112 18.10 2.2347778 0.2073396 5.99912 195.41269 263.14733 Y 2005 RK5418.75 2.2348344 0.2066777 6.03197 192.65329 267.58009 2006 KA7718.31 2.2343902 0.2083194 5.98156 199.49835 256.42782 Y 2006 SY376 20.30 2.2329680 0.2096354 5.96933 107.19877 245.09390 2006 SD382 18.91 2.2361239 0.2055712 6.05939 185.86581 277.02849 Y 2006 WV222 18.80 2.2350979 0.2080650 5.98696 197.78267 259.39354 Y 2007 RM332 18.47 2.2351994 0.2077589 5.98292 198.46057 258.26665 Y2008 YV5118.60 2.2353599 0.2075360 5.98733 197.48832 259.71301 Y2010 VN260 19.20 2.2349143 0.2073926 5.99770 196.14005 262.02148 Y2010 VU261 19.10 2.2346754 0.2076944 5.99159 196.65770 260.93281 Y 2010 VB265 19.10 
2.2348391 0.2075374 5.99878 196.29429 261.51547 Y2012 VN143 19.32 2.2346189 0.2074989 5.99550 196.99693 260.66846 2014 NZ8818.80 2.2353179 0.2072581 5.98828 196.96231 260.64555 Y 2014 OY8519.50 2.2343694 0.2085063 5.97725 100.94642 253.92579 2014 OA8618.87 2.2349119 0.2078508 5.97656 100.39381 255.53971 Y 2014 OE206 19.26 2.2354740 0.2070515 5.99678 196.10416 262.05661 Y2014 OR378 18.77 2.2352076 0.2075890 5.98585 197.77828 259.17541 Y 2014 WL9619.30 2.2368677 0.2067078 6.00707 193.95385 265.59361 2014 WT9618.93 2.2355619 0.2072284 5.99606 195.93259 262.19506 2015 DY9418.20 2.2346085 0.2075363 5.99356 196.48915 261.34069 Y 2015 PD191 20.00 2.2360619 0.2070593 6.02521 193.22656 266.765232015 PQ4719.17 2.2343287 0.2077176 5.99164 197.27727 259.91321 2015 PH144 19.56 2.2337492 0.2084264 5.98369 198.81562 257.09515 2015 PR301 18.87 2.2343558 0.2078212 5.98601 198.26718 258.53367 2015 QW3119.00 2.2342126 0.2078737 5.98459 198.21808 258.43923 2015 SS3118.84 2.2351942 0.2063313 6.00527 191.16595 268.71019 Y 2015 TL455 18.59 2.2345918 0.2076533 5.99230 197.58170 259.65709 Y2015 WQ2518.70 2.2347622 0.2074942 5.99209 196.92848 260.47264 Y 2015 XK8818.63 2.2341292 0.2082101 5.98363 199.13007 256.75539 Y 2015 XX321 19.13 2.2349203 0.2073029 5.99831 196.45517 261.40030 Y 2015 XQ432 18.80 2.2347230 0.2073054 5.99311 196.44434 261.32662 Y 2015 XK452 18.98 2.2348068 0.2074621 5.99415 197.07086 260.30732 Y 2016 TW1518.70 2.2346167 0.2077361 5.99193 197.14749 260.35782 Y 2016 TR115 18.70 2.2348542 0.2078892 5.98988 197.68880 259.59729 Y2017 QX8818.88 2.2370037 0.2053516 6.06398 185.78189 277.15110 Y2017 SU3 18.91 2.2352968 0.2074508 5.98488 197.47714 259.99874 Y2017 SV143 18.80 2.2354487 0.2049571 6.06182 186.32039 276.61565 Y 2017 UW137 19.75 2.2343672 0.2084085 5.97502 101.13381 254.12998 Y2017 UU155 19.62 2.2365893 0.2069006 6.00884 193.82864 265.81621 Y2017 VP3719.40 2.2352166 0.2075381 5.99312 197.08958 260.46478 Y 2017 WC5019.53 2.2352780 0.2073674 5.99337 196.33838 261.687382018 TM7 18.71 2.2347907 0.2075211 5.98873 197.80903 259.29224 Y 2018 UN3419.25 2.2352820 0.2068990 5.99899 196.01196 262.59590 Y 2018 UL4019.10 2.2353244 0.2069677 6.00250 196.17580 262.28729 Y 2019 QA1418.60 2.2351143 0.2078881 5.99236 197.28433 260.08707 Y 2019 SE2819.16 2.2342820 0.2087329 5.98412 199.30327 256.39995 Y 2019 XJ1519.12 2.2350562 0.2073834 6.00613 195.41326 263.12590 Y 2020 OS8919.54 2.2348379 0.2075650 6.00508 193.45550 264.93955 2020 PM2819.24 2.2352071 0.2077160 5.99066 197.81753 259.47920 Y 2021 RB114 18.60 2.2352733 0.2073676 5.99044 196.69883 260.81727 Y 2022 QC148 19.50 2.2343589 0.2078442 5.99308 197.08424 260.07491 2022 SV168 19.59 2.2348466 0.2051558 6.06436 186.74092 275.74315 [6pt] 9c– Singleopposition members –0pt3ex2014 WG250 18.95 2.2352239 0.2075190 5.98785 197.36655 259.84769 2014 WM249 19.19 2.2339095 0.2075368 5.98235 197.79023 259.016862015 TU306 19.60 2.2349386 0.2067806 6.01564 195.68816 263.34206 2016 PY2219.44 2.2360027 0.2069173 6.02658 192.76382 267.85893 2017 OS162 19.49 2.2360582 0.2070355 5.98009 190.91999 267.416992017 OU162 19.69 2.2353099 0.2074271 5.98822 197.34285 260.05416 2017 SG152 19.00 2.2357888 0.2064416 6.01963 193.43504 266.60818 Y 2017 SV193 19.60 2.2352198 0.2074399 5.99136 196.97981 260.72387 Y2017 SC233 19.20 2.2350514 0.2074005 6.00705 193.16523 265.479902017 SS269 19.90 2.2360231 0.2070277 6.00573 195.79973 263.470652019 TD2819.60 2.2368243 0.2075650 6.01491 194.64557 265.09847 Y2020 QM3619.00 2.2349968 0.2076282 5.98836 197.10501 260.48247 Y2020 RR103 19.80 
2.2361572 0.2077930 5.98241 190.88493 267.16232 Y 2020 UV3719.30 2.2354422 0.2074241 5.97418 190.97581 267.30544 Y 2021 NF4719.49 2.2350049 0.2075666 5.98830 197.42570 259.67732 Y 2021 NK5719.14 2.2353946 0.2069712 5.99919 195.52787 262.99252 Y 2021 PX107 19.22 2.2352389 0.2073418 5.98930 197.13408 260.30247 Y 2021 QZ4019.75 2.2350184 0.2074293 5.99078 196.79357 260.66997 2021 RE149 19.00 2.2349079 0.2077411 5.98546 198.19857 258.44555 Y 2021 VU2020.32 2.2355854 0.2063734 6.00005 195.48181 263.865952022 PN1519.82 2.2352065 0.2070323 6.02051 194.91401 264.611532022 QK6919.85 2.2361785 0.2073031 5.99880 195.16386 263.811822022 QT171 19.62 2.2363260 0.2071205 6.02165 194.40238 265.28039 2022 SO7619.43 2.2339303 0.2084255 5.98316 198.95475 256.96663 2022 TV2220.14 2.2362992 0.2055041 6.06042 185.66342 276.96670 [2pt]rlccccccc Adelaide family as of June2023. Osculating heliocentric orbital elements at epochMJD 60,000.0 from the MPC catalog: semimajor axis a, eccentricity e, inclination I,longitude of node Ω, and argument of perihelion ω. singleopposition orbits arelisted at the end of the table. The third column gives the absolute magnitude H. The lastcolumn indicates, whether the asteroid has been detected by CSS during the phase 2operations (Y=yes). We note asteroid (159941) 2005 WV178 in the near vicinity of the Adelaidefamily, which we discard from the membership due to a dubious convergence to (525) Adelaidein the past Myr.2cAsteroid 0pt2ex H a e I Ω ω CSS (mag) (au) (deg) (deg) (deg) [1pt]continued. 2cAsteroid 0pt2ex H a e I Ω ω CSS (mag) (au) (deg) (deg) (deg) [1pt]0pt3ex 525 Adelaide 12.17 2.2459455 0.1020388 5.99835 203.35936 263.96182 Y 422494 2014 SV342 18.37 2.2457873 0.1036634 6.01316 201.90211 262.11221 Y 452322 2000 GG121 18.43 2.2459962 0.0990051 6.05840 197.12857 277.42396 Y 463394 2013 GV2818.56 2.2452544 0.1014759 6.00185 203.34261 265.44077 Y 475474 2006 SZ152 18.58 2.2446196 0.1025679 5.98782 204.84491 263.14855 Y 486081 2012 UX4118.54 2.2458709 0.1018798 6.01195 200.73242 265.92554 Y 504375 2007 VV7318.76 2.2455021 0.1034966 6.01373 200.64920 264.76138 Y 517580 2014 UZ170 18.68 2.2457137 0.1015922 6.00769 200.32247 269.68162 Y 534611 2014 UC204 18.18 2.2458760 0.1010302 6.02623 199.84308 272.15198 Y 545614 2011 SA4518.41 2.2457694 0.1022064 6.00750 201.81455 264.31208 Y 552867 2010 UF125 18.78 2.2449504 0.1025609 6.01004 202.13835 266.85951 Y 555571 2014 AD3118.51 2.2457958 0.1009470 6.02260 200.55360 270.49477 Y 569552 2005 UK370 19.01 2.2455102 0.1014688 6.02424 200.81248 268.01004 Y 572830 2008 US1718.57 2.2455380 0.1000395 6.04296 198.32560 273.73347 Y 572868 2008 UR182 18.40 2.2458141 0.1034843 6.02337 200.30854 263.50713 Y 578969 2014 JA2 18.26 2.2457211 0.1010352 6.03014 199.49735 269.87273 Y 593790 2015 XZ9018.70 2.2449726 0.1014565 5.99848 203.78817 264.25467 Y 616487 2005 VP8318.45 2.2456512 0.1016943 6.02430 200.22763 267.30317 Y2004 HU7619.02 2.2462293 0.1005166 6.00980 202.63100 267.20087 Y2004 HJ8518.97 2.2461326 0.1032087 6.03593 200.54895 263.66537 Y2005 UF193 18.77 2.2454086 0.1038225 6.01911 200.93875 261.58535 Y2006 SK449 18.40 2.2456295 0.1013371 6.02272 200.42520 271.19772 Y2007 TA504 18.90 2.2455486 0.1024368 6.02680 200.28214 268.89512 Y2007 VT345 18.58 2.2449214 0.1035204 5.99871 203.77264 262.38520 Y2008 ET179 18.60 2.2450348 0.1025046 6.00338 202.63850 266.26648 Y2008 UR414 18.63 2.2460790 0.0989092 6.02968 201.69890 272.77329 Y2009 WJ157 18.77 2.2464285 0.1016017 6.01107 202.07146 265.29561 Y2010 VC228 18.33 2.2450406 0.1014959 6.03451 198.81306 
273.39747 Y2010 VF260 18.60 2.2450148 0.1034408 6.04766 197.20466 270.60213 Y2010 XB115 18.81 2.2448215 0.1031887 6.00795 201.45005 265.44057 Y2012 TM342 19.30 2.2451705 0.1015429 6.00110 202.85893 265.027782013 CH251 19.43 2.2457832 0.1021203 6.00697 202.66417 264.04047 Y2013 GR162 18.80 2.2457624 0.1026941 6.00444 202.30810 263.28291 Y2013 HB9719.80 2.2463044 0.1003166 6.02739 200.10111 272.113492013 TY219 18.96 2.2442386 0.1035677 5.99071 204.30989 262.02986 Y2013 TR236 19.54 2.2454223 0.1028989 6.01398 202.86209 264.25702 Y2014 EQ8119.10 2.2469478 0.0999236 6.05462 197.32994 275.67967 Y2014 EU9619.20 2.2464881 0.1040363 6.01320 200.66878 262.24741 Y2014 EM164 18.98 2.2459219 0.1031170 5.99009 204.47170 260.80897 Y2014 JY105 19.10 2.2459512 0.0987461 6.03208 201.11692 273.10746 Y2014 WM167 18.95 2.2458331 0.1018454 6.00962 202.17318 267.55573 Y2015 BE285 19.05 2.2449165 0.1039514 5.99999 203.11323 261.33797 Y2015 HU7218.95 2.2452445 0.1031607 6.00944 202.19152 264.15943 Y2015 RM186 18.73 2.2460358 0.0984971 6.06417 196.43719 279.45108 Y2015 TD4419.26 2.2444571 0.1025243 5.98632 204.98118 260.23068 Y2015 UR1819.30 2.2454281 0.1021862 6.03178 200.65949 265.28313 Y2015 XC9219.07 2.2460168 0.1009429 6.04263 200.06013 269.071192016 AH353 19.77 2.2460923 0.1013139 6.01534 202.02955 267.909612016 AL322 18.90 2.2456315 0.1026982 6.01532 202.92759 262.78436 Y2016 CP9519.33 2.2463603 0.1003278 6.02127 200.61828 271.256492016 CX104 19.24 2.2448809 0.1035784 5.98559 205.29467 258.78148 Y2016 EX318 19.50 2.2449848 0.1030240 5.98796 204.87999 260.79999 Y 2016 FR3319.10 2.2458347 0.1015699 6.05834 196.65477 274.01002 Y 2016 FA3418.86 2.2455945 0.1024636 6.01441 201.96964 265.06660 Y 2016 GO1118.72 2.2450053 0.1037665 5.98548 205.07788 259.12530 Y 2016 QE7118.40 2.2454966 0.1029550 6.01495 201.16100 266.00988 Y 2016 TN4118.90 2.2446126 0.1043580 5.98344 205.11808 258.91716 2016 UO110 19.06 2.2454466 0.1026978 5.99586 203.55032 263.93640 Y 2017 AU3818.78 2.2458978 0.1013304 6.00496 202.56750 266.13825 Y 2017 HL7219.28 2.2459566 0.0998460 6.02303 200.55185 271.45608 Y2017 RS100 19.33 2.2449371 0.1039013 6.00073 204.79324 259.00262 Y2017 TG2618.87 2.2448290 0.1044112 5.98989 204.40015 259.95117 Y2017 UF6519.32 2.2446513 0.1035296 6.00813 202.33968 263.80059 Y2017 WP5019.10 2.2444171 0.1030306 6.00350 203.01479 264.404752019 BT1119.11 2.2453842 0.1009102 6.02325 200.50607 270.84753 Y2019 TC6219.20 2.2471638 0.0996640 6.05763 196.55998 277.07321 Y 2019 YE2919.77 2.2457962 0.1006684 6.02946 200.43589 269.04712 Y 2019 YU3519.86 2.2460175 0.1008068 6.02944 202.15923 266.452302020 ML4519.00 2.2459836 0.1010309 6.02237 201.82072 267.71323 2020 PM7919.50 2.2452151 0.1027546 6.00307 204.32861 262.16107 2022 BM6 19.30 2.2454318 0.1021906 6.00621 202.54005 266.53136 Y 2022 CU1618.98 2.2449320 0.1035621 6.01424 202.07267 263.45712 Y 2022 TC6 18.99 2.2453803 0.1013650 6.00377 203.09324 265.02593 [6pt] 8c– Singleopposition members –0pt3ex2022 BM5019.97 2.2451736 0.1020011 6.00018 201.42183 268.13980 Y2023 AH4 20.32 2.2458161 0.1016177 6.00306 203.14933 265.552822023 BX6 20.23 2.2454761 0.1028669 5.99652 203.75734 262.074332023 BZ6 20.11 2.2461186 0.1005071 6.01469 201.61036 269.592642023 BP9 20.53 2.2493354 0.1021673 6.02529 199.84371 272.574192023 BS1120.57 2.2470465 0.1005654 6.01926 200.54636 271.10920 [2pt]rlccccccc Hobson family as of June2023. 
Osculating heliocentric orbital elements at epochMJD 60,200.0 from the MPC catalog: semimajor axis a, eccentricity e, inclination I,longitude of node Ω, and argument of perihelion ω. singleopposition orbits arelisted at the end of the table. The third column gives the absolute magnitude H. The lastcolumn indicates, whether the asteroid has been detected by CSS during the phase 2operations (Y=yes). We note a very small, singleopposition asteroids 2019 NF93, 2021 JQ73, 2023 JD27 and 2023 NV2very likely members of the Hobson family too. However, their orbits, especially for 2019 NF93based on observations spanning less than a week, are still very uncertain.2cAsteroid 0pt2ex H a e I Ω ω CSS (mag) (au) (deg) (deg) (deg) [1pt]continued. 2cAsteroid 0pt2ex H a e I Ω ω CSS (mag) (au) (deg) (deg) (deg) [1pt]0pt3ex 18777 Hobson 15.12 2.5633566 0.1833929 4.32167 105.43986 180.62462 Y57738 2001 UZ160 15.27 2.5643655 0.1804590 4.31695 104.86638 181.39287 Y 363118 2001 NH1417.34 2.5640791 0.1802580 4.31088 105.05377 181.32416 Y 381414 2008 JK3717.70 2.5644935 0.1801728 4.32092 104.22939 181.70561 Y 436620 2011 LF1217.35 2.5623364 0.1840800 4.32629 104.88486 180.38868 Y 450571 2006 JH3517.64 2.5622205 0.1830388 4.31799 105.19338 180.53430 Y 465404 2008 HQ4617.67 2.5640097 0.1819144 4.31527 105.23235 182.39770 Y 520394 2014 JJ1018.15 2.5636411 0.1819023 4.31680 105.02491 180.59212 Y 537249 2015 HM190 17.60 2.5618837 0.1853909 4.32936 105.06573 181.29952 Y 548822 2010 VG231 18.08 2.5645095 0.1785389 4.30907 104.44899 180.18738 Y 557505 2014 UB262 18.33 2.5644669 0.1812880 4.30940 105.44687 181.70689 Y2007 EH116 17.60 2.5632497 0.1837133 4.33016 104.12508 181.78781 Y2007 HC5417.10 2.5630025 0.1852376 4.33060 103.90562 183.45131 Y2008 WV149 18.25 2.5616506 0.1860344 4.32935 105.32362 181.64195 Y2009 SY179 18.10 2.5638560 0.1808079 4.31363 105.25771 181.959832010 GN203 18.19 2.5616128 0.1827647 4.31768 105.51576 178.85422 Y2011 SU302 18.40 2.5613443 0.1843413 4.32594 105.00690 180.83267 Y2012 JM7118.34 2.5643906 0.1803477 4.31944 104.45794 181.49202 Y2012 LN3118.15 2.5645366 0.1805664 4.32084 104.18716 181.60272 Y2013 JG4818.42 2.5640127 0.1804333 4.31002 105.27028 181.301202013 MW2018.10 2.5640282 0.1789098 4.30379 105.82861 179.66519 Y2013 NA7317.90 2.5645573 0.1778761 4.31150 104.20836 180.13663 Y2014 HH103 17.96 2.5628983 0.1818845 4.31303 105.17981 179.85773 Y2014 KY102 18.08 2.5643135 0.1802754 4.30634 105.52954 178.702472014 NN7118.22 2.5656862 0.1796550 4.31314 104.34699 180.716642014 OG277 18.40 2.5655560 0.1824004 4.30982 105.58409 182.327442014 OJ6618.94 2.5662203 0.1795922 4.30905 105.02988 179.794482014 PJ8718.30 2.5657155 0.1814269 4.31509 105.41914 181.451222014 QL520 18.41 2.5655987 0.1802430 4.30661 105.04757 180.777442014 QQ580 18.83 2.5657832 0.1791684 4.31163 104.58123 180.170542015 FV225 17.60 2.5626639 0.1856256 4.32625 105.38305 182.32173 Y2015 HV138 18.70 2.5624216 0.1841988 4.32970 104.43650 181.056762015 KA9117.90 2.5623849 0.1834933 4.32926 104.19929 180.34602 Y2015 KM237 19.48 2.5623846 0.1836411 4.33279 103.88926 180.728582015 OP104 18.00 2.5614614 0.1838197 4.32310 104.57302 180.62155 Y2015 PM156 18.40 2.5619337 0.1823281 4.32222 104.32621 179.281492015 PA184 19.20 2.5607541 0.1873221 4.32194 105.96144 182.842032015 XL282 17.79 2.5657472 0.1813020 4.31120 105.23265 181.11965 Y2016 GY256 18.24 2.5636554 0.1832090 4.32326 105.46202 182.61057 Y2016 GW276 18.48 2.5640705 0.1811033 4.31689 105.07693 181.94298 Y2016 GZ310 18.51 2.5642163 0.1812362 4.32011 104.87910 181.95655 Y2017 
PA6818.20 2.5644305 0.1794933 4.31098 104.94606 180.904702017 PK7018.80 2.5631898 0.1834456 4.31379 105.99206 183.694672017 SM2518.75 2.5641029 0.1805951 4.31554 104.93415 181.759452017 SQ8318.33 2.5641221 0.1801750 4.31364 105.58058 180.54802 Y2017 WO4718.12 2.5639686 0.1820342 4.32084 104.97053 181.94034 Y2018 NQ4818.79 2.5649606 0.1809579 4.31287 105.27527 181.364402019 NP4418.90 2.5614875 0.1835832 4.32402 104.94548 180.28464 Y2019 NB193 19.09 2.5616801 0.1826859 4.32590 104.31100 179.865772019 PS3018.50 2.5613773 0.1841029 4.32102 105.34292 180.47624 Y 2020 HQ5718.50 2.5648348 0.1794839 4.31150 104.69543 180.06419 Y2020 KP3619.11 2.5648730 0.1791752 4.31520 104.47316 179.58657 2021 MO5 19.07 2.5636887 0.1815050 4.31175 105.50891 182.38281 2023 JA2218.24 2.5617572 0.1834690 4.32797 104.29884 180.14545 [6pt]8c– Singleopposition members –0pt3ex2014 JH120 18.70 2.5642333 0.1818812 4.31659 105.15400 180.802862017 NY2918.95 2.5644299 0.1787774 4.31392 104.20540 181.002972019 GR115 18.80 2.5620456 0.1857152 4.32843 105.21081 181.521842020 JM3118.50 2.5636410 0.1836011 4.32256 105.32188 182.521602020 OY5018.60 2.5626675 0.1856940 4.32491 105.47980 182.452542023 JZ8 18.67 2.5611412 0.1864161 4.32528 105.35980 181.88047 [2pt] rlccccccc Rampo family as of June2023. Osculating heliocentric orbital elements at epochMJD 60,000.0 from the MPC catalog: semimajor axis a, eccentricity e, inclination I,longitude of node Ω, and argument of perihelion ω. singleopposition orbits arelisted at the end of the table. The third column gives the absolute magnitude H. The lastcolumn indicates, whether the asteroid has been detected by CSS during the phase 2operations (Y=yes). We note two very small, singleopposition asteroids 2015 KM284 and2015 KG287, very likely members of the Rampo family too. However, their orbits, based onobservations spanning less than a week, are still very uncertain.2cAsteroid 0pt2ex H a e I Ω ω CSS (mag) (au) (deg) (deg) (deg) [1pt]continued. 
2cAsteroid 0pt2ex H a e I Ω ω CSS (mag) (au) (deg) (deg) (deg) [1pt]0pt3ex 10321 Rampo14.37 2.3285978 0.0952815 6.06091 53.88221 278.53547 Y294272 2007 UM101 17.55 2.3294915 0.0944526 6.05304 53.16108 280.07932 Y451686 2013 BR6717.75 2.3278183 0.0943492 6.09428 61.69559 266.60300 Y546329 2010 VO1918.64 2.3284414 0.0929491 6.09152 62.49298 265.50211 Y562123 2015 XH207 18.17 2.3273763 0.0941514 6.09700 62.36822 265.60187 Y601678 2013 JF6918.45 2.3287200 0.0936834 6.08492 60.24854 268.75113 Y 2005 VO2218.60 2.3284379 0.0942207 6.08237 60.44807 269.16712 Y 2006 UA169 18.30 2.3287767 0.0936955 6.07318 58.37005 272.18424 Y 2007 XP6718.38 2.3290438 0.0937905 6.07805 58.76112 271.03067 Y 2008 GZ170 18.33 2.3278729 0.0934523 6.08337 60.94947 268.37592 Y 2008 SW341 18.33 2.3299539 0.0957301 6.04252 51.57990 282.43371 Y 2009 HD9518.15 2.3289378 0.0934420 6.08220 60.06838 269.21119 Y 2009 SR371 18.70 2.3287466 0.0939626 6.06727 56.76277 274.96649 Y 2009 WB276 18.46 2.3282859 0.0941812 6.06667 57.01124 274.26618 2010 VP264 18.71 2.3283338 0.0926865 6.10160 64.02540 263.22675 2011 WC2218.63 2.3277415 0.0937305 6.09825 62.49161 265.21277 Y 2012 VE126 18.70 2.3299965 0.0951578 6.05462 53.54667 279.68212 Y 2013 RL101 18.10 2.3284038 0.0931685 6.08778 61.63153 267.08811 Y 2013 VC3018.53 2.3283365 0.0936238 6.07791 59.32627 270.66675 Y 2013 VE5118.78 2.3280528 0.0931012 6.09217 62.47444 265.73127 Y 2014 HS9 18.38 2.3285282 0.0950748 6.07653 58.51435 271.58717 2014 HN8719.03 2.3279516 0.0942568 6.09716 63.26309 264.63747 2014 ST4418.97 2.3288418 0.0947150 6.06330 55.71611 275.58485 2015 BB184 18.71 2.3285590 0.0927376 6.09722 63.17079 264.60035 2015 HT9118.22 2.3277235 0.0932915 6.08888 62.06807 266.70756 Y 2015 TA367 18.89 2.3291271 0.0954163 6.05779 53.26353 279.52356 2015 TM372 18.57 2.3285477 0.0949021 6.07459 57.66486 273.16183 Y 2015 VK190 19.02 2.3292421 0.0954998 6.04585 51.82977 282.18049 2016 GJ353 19.20 2.3296639 0.0942243 6.06093 54.67358 277.07351 2016 PR196 19.36 2.3298272 0.0945697 6.03487 50.33266 284.58940 Y 2016 TE8718.09 2.3281180 0.0941459 6.07157 57.95426 272.79711 Y 2017 UH2118.38 2.3289973 0.0933377 6.08745 60.32819 268.78683 Y 2018 NN9 18.82 2.3281713 0.0946714 6.08543 59.97434 269.59068 2018 PS6818.39 2.3291862 0.0955066 6.03521 49.93007 285.17948 Y 2019 PC4118.75 2.3285442 0.0939345 6.08000 59.64944 270.69700 Y2020 PJ5318.90 2.3297760 0.0940307 6.05985 54.29035 278.193542021 QC8119.05 2.3283427 0.0938521 6.08798 60.49374 268.426392022 QE6118.93 2.3301748 0.0957688 6.03862 50.74900 283.82069 Y2022 QU7619.08 2.3294832 0.0954286 6.05628 54.51791 278.35866 2022 QY123 18.97 2.3282912 0.0942403 6.09698 63.00157 264.91167 [6pt]8c– Singleopposition members –0pt3ex 2020 MO1918.70 2.3285350 0.0928385 6.09814 63.13642 264.21127 2022 RX7618.99 2.3292702 0.0952251 6.06717 56.80863 274.65354 [2pt] rlccccccc Wasserburg family as of June2023. Osculating heliocentric orbital elements at epochMJD 60,000.0 from the MPC catalog: semimajor axis a, eccentricity e, inclination I,longitude of node Ω, and argument of perihelion ω. The third columngives the absolute magnitude H. The lastcolumn indicates, whether the asteroid has been detected by CSS during the phase 2operations (Y=yes).2cAsteroid 0pt2ex H a e I Ω ω CSS (mag) (au) (deg) (deg) (deg) [1pt]continued. 
2cAsteroid 0pt2ex H a e I Ω ω CSS (mag) (au) (deg) (deg) (deg) [1pt]0pt3ex4765 Wasserburg 14.05 1.9453591 0.0599697 23.71330 76.50142 108.59143 Y 350716 2001 XO105 18.00 1.9459411 0.0597860 23.70790 76.45874 108.33248 Y2012 KH5619.22 1.9456701 0.0604509 23.70963 76.44694 108.32313 Y2016 GL253 19.18 1.9457396 0.0598318 23.71026 76.46677 108.53996 Y 2017 DU131 18.90 1.9456063 0.0604246 23.70749 76.42791 108.29692 Y2017 KO4619.27 1.9453538 0.0604115 23.70825 76.50900 108.09233 Y2018 YF1618.94 1.9454573 0.0602688 23.70620 76.40472 108.19370 Y2020 HF2119.01 1.9455092 0.0604480 23.70669 76.45580 108.27241 Y [2pt]rlccccccc Martes family as of June2023. Osculating heliocentric orbital elements at epochMJD 60,000.0 from the MPC catalog: semimajor axis a, eccentricity e, inclination I,longitude of node Ω, and argument of perihelion ω. The third columngives the absolute magnitude H. The lastcolumn indicates, whether the asteroid has been detected by CSS during the phase 2operations (Y=yes).2cAsteroid 0pt2ex H a e I Ω ω CSS (mag) (au) (deg) (deg) (deg) [1pt]continued. 2cAsteroid 0pt2ex H a e I Ω ω CSS (mag) (au) (deg) (deg) (deg) [1pt]0pt3ex 5026 Martes 14.10 2.3785050 0.2419535 4.28293 304.74872 17.59761 Y2005 WW113 17.92 2.3766591 0.2431729 4.29300 304.86627 17.41694 Y2010 TB155 17.90 2.3771299 0.2421036 4.28760 304.75606 17.10483 Y2011 RF4019.87 2.3771146 0.2442609 4.29445 304.60737 17.410622022 QB5920.10 2.3770235 0.2441532 4.29430 304.61079 17.409732022 RM5020.13 2.3769466 0.2440863 4.29433 304.61266 17.38851 [2pt] rlccccccc Lucascavin family as of June 2023. Osculating heliocentric orbital elements at epochMJD 60,000.0 from the MPC catalog: semimajor axis a, eccentricity e, inclination I,longitude of node Ω, and argument of perihelion ω. The third columngives the absolute magnitude H. The lastcolumn indicates, whether the asteroid has been detected by CSS during the phase 2operations (Y=yes).2cAsteroid 0pt2ex H a e I Ω ω CSS (mag) (au) (deg) (deg) (deg) [1pt]continued. 2cAsteroid 0pt2ex H a e I Ω ω CSS (mag) (au) (deg) (deg) (deg) [1pt]0pt3ex 21509 Lucascavin 15.06 2.2804908 0.11265435.98061 70.146274.71189 Y180255 2003 VM9 17.21 2.2806359 0.11264335.98101 70.378214.12878 Y209570 2004 XL4017.24 2.2815589 0.11142655.97968 69.957554.92455 Y [2pt]